arXiv:2307.04476v1 (quant-ph; cond-mat.mes-hall, cond-mat.mtrl-sci), published 2023-07-10
Nitrogen isotope effects on boron vacancy quantum sensors in hexagonal boron nitride
Kento Sasaki, Takashi Taniguchi, Kensuke Kobayashi
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Institute for Physics of Intelligence, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-scale Quantum Science Institute, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Recently, there has been growing interest in researching the use of hexagonal boron nitride (hBN) for quantum technologies.
Here we investigate nitrogen isotope effects on boron vacancy (V_B) defects, one of the candidates for quantum sensors, in ^15N isotopically enriched hBN synthesized using metathesis reaction.
The Raman shifts are scaled with the reduced mass, consistent with previous work on boron isotope enrichment.
We obtain nitrogen isotopic composition dependent optically detected magnetic resonance spectra of V_B defects and determine the hyperfine interaction parameter of ^15N spin to be -64 MHz. Our investigation provides a design policy for hBNs for quantum technologies.
Nitrogen isotope effects on boron vacancy quantum sensors in hexagonal boron nitride
Kento Sasaki, Takashi Taniguchi, and Kensuke Kobayashi
August 12, 2023
Localized electron spins in solids, such as those in color centers or quantum dots, are a promising platform for quantum technologies.
In most cases, they couple with surrounding nuclear spins; thus, controlling the nuclear spins and their influence is essential.
The isotope enrichment technique has great potential to address this issue<cit.>.
For example, the electron spin coherence time can be improved by enriching nuclear-spin-free isotopes<cit.>, or the electron spin qubit can be labeled by isotopes with low natural composition ratios<cit.>.
To design such isotopically purified platform, it is crucial not only to synthesize isotopically controlled materials but also to estimate the isotopic composition and determine the hyperfine interaction (HFI) parameters of nuclear spins of the isotopes<cit.>.
Recently, it was discovered that electron spins of boron vacancy (V_B) defects in hexagonal boron nitride (hBN) can be used as quantum sensors even at room temperature<cit.>.
A V_B defect has a structure in which a boron atom in hBN is replaced by a vacancy [Fig. <ref>(a)].
Its electron spin is localized around the vacancy site and is significantly affected by the three nearest nitrogen spins.
Stable isotopes of nitrogen are ^14N and ^15N.
The natural composition ratio of ^14N is 99.6%, and ^15N is almost nonexistent (0.4%).
The nuclear spin is one of the major differences between these isotopes.
Since the ^15N nuclear spin (I=1/2) is only half that of ^14N (I=1), V_B defects in ^15N isotopically enriched hBN have fewer energy levels than in non-treated hBN.
The higher the occupancy of each level is, the stronger the resonance signal becomes, leading to higher sensitivity.
However, there are few reports on the isotopic enrichment of hBN, most of which are related to boron isotopes<cit.>.
Here we investigate nitrogen isotope-enriched hBN and observe nitrogen isotope effects on V_B defects.
We synthesized the isotopically controlled hBN crystals using metathesis reaction under high pressure<cit.> with commercially available ^15NH_4Cl.
The Raman shifts of the samples are scaled with their reduced mass, which is the effective mass for an equivalent one-body problem of the two-body vibration problem for boron and nitrogen atoms, consistent with previous work on boron isotope enrichment.
We perform optically detected magnetic resonance (ODMR) of V_B defects produced by helium ion implantation and determine the HFI parameter of ^15N spin to be -64 MHz.
The observed significant modification of resonance spectra due to ^15N isotope enrichment will help improve sensitivity, control fidelity, and precise positioning of quantum sensors.
Our investigation provides guidance for the material design of hBNs for quantum technologies.
First, we describe the influence of nitrogen spins on an electron spin (S = 1) of a V_B defect.
In quantum sensing, an external magnetic field of several mT in the direction of the symmetry axis (z) of the V_B defect is often applied <cit.>.
This is helpful to mitigate the sensitivity suppression due to strain.
In that condition, the spin Hamiltonian can be approximated as <cit.>,
Ĥ ∼ D Ŝ_z^2 + γ_e B_z ·Ŝ_z + ∑_j=1^3 A_zz,(j)Ŝ_z Î_z,(j),
where, Ŝ_z is the electron spin (S=1) operator in the z direction, D is the zero field splitting, γ_e = 28 MHz/mT is the gyromagnetic ratio of the electron spin, B_z is the magnetic field strength, j (= 1,2,3) is a label of nearest-neighbor nitrogen site, A_zz,(j) is the HFI parameter, Î_z,(j) is the nuclear spin operator in the z direction.
Here we ignore the nuclear spin's Zeeman effect and the quadrupole moment<cit.>, which are much smaller than the HFI parameter in the case of the ^14N spin.
In this study, we determine the A_zz of ^15N spin, ^(15N)A_zz, which has a vital contribution under this quantum sensing condition.
Next, we show a model of the expected ODMR spectrum.
In the situation when Eq. (<ref>) is valid, both electron and nuclear spins are quantized in the z direction.
The resonance frequency corresponding to the electron spin transition m_S = 0 ↔±1 can be expressed as,
f_±1(m_I,(1),m_I,(2),m_I,(3)) ∼ f_±1,0±∑_j=1^3 A_zz,(j) m_I,(j),
where, f_±1,0 = D ±γ_e B_z is the resonance frequency in the absence of nuclear spins, m_I,(j) is the magnetic quantum number of nuclear spins at site j which can take the values m_I=-1,0,+1 for ^14N spin (m_I=-1/2,+1/2 for ^15N spin).
Assuming that the nuclear spins are unpolarized and that each resonance signal has the same amplitude and line width, the ODMR spectrum is given by
R = 1 - (C/N_level) ∑ L( f_±1(m_I,(1),m_I,(2),m_I,(3)), dν),
where C is the signal amplitude and L(f,dν) is the Lorentzian with a center frequency f and a full width at half maximum dν.
N_level is the number of possible nuclear states of the nearest-neighbor nitrogen spins (m_I,(1),m_I,(2),m_I,(3)), and the summation symbol means summing over those states, which will be explained in detail below.
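As a concrete illustration, the following sketch evaluates the spectrum model above for a single V_B defect with n nearest-neighbor ^15N atoms by enumerating the nuclear spin configurations; the helper name and the parameter values (taken from the fit results reported later in this work) are placeholders, not the analysis code actually used.

```python
# Illustrative sketch of the ODMR model above; not the actual fitting code.
import itertools
import numpy as np

def odmr_spectrum(f, n15, f0, A14, A15, C, dnu):
    """R(f) = 1 - C/N_level * sum of Lorentzians at the hyperfine-shifted resonances."""
    m14 = (-1.0, 0.0, 1.0)                     # allowed m_I for a 14N spin (I = 1)
    m15 = (-0.5, 0.5)                          # allowed m_I for a 15N spin (I = 1/2)
    sites = [m15] * n15 + [m14] * (3 - n15)    # three nearest-neighbor nitrogen sites
    states = list(itertools.product(*sites))   # N_level nuclear configurations
    R = np.ones_like(f)
    for state in states:
        shift = A15 * sum(state[:n15]) + A14 * sum(state[n15:])
        fc = f0 - shift                        # m_S = 0 <-> -1 branch
        R -= (C / len(states)) * (dnu / 2) ** 2 / ((f - fc) ** 2 + (dnu / 2) ** 2)
    return R

f = np.linspace(2100, 2500, 2000)              # MHz
spec = odmr_spectrum(f, n15=3, f0=2308.0, A14=43.0, A15=-64.0, C=0.11, dnu=51.0)
```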
The resonance spectrum [Eq. (<ref>)] of a V_B defect depends on the number n of ^15N among the nearest nitrogen atoms.
We distinguish V_B defects by #n as shown in Figs. <ref>(b–e).
The energy level splittings of these defects are shown in Figs. <ref>(f–i).
Since ^14N spins can take three states (m_I=-1,0,+1), whereas ^15N spins can take only two states (m_I=-1/2,+1/2), N_level of #0, #1, #2 and #3 are 27(=3^3), 18(=3^2×2), 12(=3×2^2), and 8(=2^3), respectively.
To the extent that Eq. (<ref>) is satisfied, all states belonging to m_S=0 and some of the states belonging to m_S=±1 are degenerate.
In the case of m_S=-1 of #0 (#3), there are 7 (4) states whose energies are distinguished by the total nuclear spin quantum number, m_I,tot = ∑_j=1^3 m_I,(j).
Specifically, the degeneracy of energy states m_I,tot = -3, -2, -1, 0, +1, +2, and +3 (-3/2, -1/2, +1/2, and 3/2) are 1, 3, 6, 7, 6, 3, and 1 (1, 3, 3, and 1), respectively [see Figs. <ref>(f) and (i)].
The occupancy of the state with the largest degeneracy is 26% (=7/27) for #0 and 38% (=3/8) for #3.
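These degeneracies can be checked by direct enumeration; the following minimal snippet is purely illustrative.

```python
# Degeneracy of m_I,tot for three nearest-neighbor nitrogen spins.
import itertools
from collections import Counter

def degeneracies(m_values):
    """Count how many 3-site nuclear configurations give each total m_I,tot."""
    return Counter(sum(s) for s in itertools.product(m_values, repeat=3))

print(degeneracies((-1, 0, 1)))      # three 14N: 1, 3, 6, 7, 6, 3, 1 over m_I,tot = -3..+3 (27 states)
print(degeneracies((-0.5, 0.5)))     # three 15N: 1, 3, 3, 1 over m_I,tot = -3/2..+3/2 (8 states)
```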
High occupancy leads to a strong signal, which is advantageous for high sensitivity.
The distances between energy states (=resonance lines) depend on the HFI parameter A_zz of ^14N and ^15N spins.
The gyromagnetic ratio, a measure of the magnitude of the magnetic moment of the spin, is γ_14N = 3.077 kHz/mT for the ^14N spin and γ_15N = -4.316 kHz/mT for the ^15N spin.
Since the absolute value of the gyromagnetic ratio is about 1.4 times larger for ^15N spin than for ^14N spin, the spectral separation should get larger for ^15N isotopically enriched hBN.
The larger separation would be helpful to suppress the degradation of control fidelity caused by unintentional driving of neighboring resonance lines.
In this work, we will demonstrate nitrogen isotope effects described above, such as a reduced number of resonance lines and enhanced separation, which are advantageous for quantum sensing.
When measuring an ensemble of V_B defects, the signals of #0 to #3 are averaged.
Specifically, the expected ODMR spectrum is given by,
R_tot = P_0 R_0 + P_1 R_1 + P_2 R_2 + P_3 R_3,
where, R_n is the ODMR spectrum of #n [Eq. (<ref>)] and P_n is the fraction of #n in all V_B defects.
When ^15N isotopic composition, p_15, is spatially uniform, then P_0 = (1 - p_15)^3, P_1 = 3(1 - p_15)^2 p_15, P_2 = 3(1 - p_15 ) p_15^2, and P_3 = p_15^3.
In cases where p_15 is other than 0 or 1, the obtained signal is the sum of #0 to #3.
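For instance, the binomial weights P_n above can be tabulated for the compositions studied in this work (a small illustrative calculation; the p_15 values follow the main text).

```python
# Fractions of defect types #0..#3 for a spatially uniform 15N composition p15.
from math import comb

def fractions(p15):
    """Binomial weights P_0..P_3 of the four defect types."""
    return [comb(3, n) * (1 - p15) ** (3 - n) * p15 ** n for n in range(4)]

print(fractions(0.0))   # hB14N: only #0 contributes
print(fractions(0.6))   # hB(14+15)N, p15 ~ 0.6 from SIMS: [0.064, 0.288, 0.432, 0.216]
print(fractions(1.0))   # hB15N: only #3 contributes
```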
Here we describe the preparation of ^15N isotopically enriched hBN crystal.
We verify the metathesis reaction process under high pressure<cit.> using commercially available ^15NH_4Cl reagents as a raw material; NaBH_4 + ^15NH_4Cl = B^15N + NaCl + 4H_2.
By continuing the above reaction for about 30 hours, we obtained hBN crystals, which are expected to be close to perfect ^15N isotopic composition (hB^15N).
Other hBN single crystals, about 1 mm in size, are obtained by using Ba-BN as a solvent system <cit.>, where hBN sources are grown within the molten solvent through dissolution and precipitation.
In this case, the nitrogen isotope enrichment in the resulting crystals (hB^14+15N) is not 100% because nitrogen in Ba-BN solvents has a natural isotopic composition.
The ^15N isotopic composition of hB^14+15N is determined by secondary ion mass spectrometry (SIMS) to be about 60%.
In addition, hBN crystal with a natural composition ratio (hB^14N) is used for comparison.
From now, we describe the experimental results.
All the measurements in this work are performed at room temperature.
First, we investigate the isotope effect on the phonon energy due to changes in the reduced mass, using a Raman microscope (Nanophoton RAMAN-FM-UTM).
In previous works on boron isotope enrichment <cit.>, it has been shown that the phonon energy scales with the square root of the reduced mass.
Figure <ref>(a) shows the obtained Raman scattering spectra.
The sample with a natural composition ratio, hB^14N, has a Raman shift of 1366 cm^-1.
This value is consistent with the previous work <cit.>.
On the other hand, the Raman shifts for hB^14+15N and hB^15N are 1355 cm^-1 and 1347 cm^-1, respectively.
Clearly, the Raman shift decreases with increasing ^15N isotopic composition, i.e., with increasing reduced mass.
To quantitatively evaluate this behavior, we show the relationship between Raman shift and reduced mass in Fig. <ref>(b).
We calculate the reduced masses of hB^14N, hB^14+15N, and hB^15N assuming p_15 as 0%, 60% (SIMS), and 100%, respectively.
By analyzing the results of Ref. Vuong2017, we obtain,
Δν_r ∼ -537 μ^1/2 + 2691,
where, Δν_r is the Raman shift (unit cm^-1) and μ is the reduced mass (no unit).
The crosses and the solid line in Fig. <ref>(b) are the result of Ref. Vuong2017 and Eq. (<ref>), respectively.
The deviation between them is as small as about 1 cm^-1.
Since our results agree with Eq. (<ref>) within the error of about 2 cm^-1, we confirm that our nitrogen isotope enrichment is successful.
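As an illustration of how the empirical relation above is used, the following sketch predicts the Raman shift from the ^15N composition; the atomic masses are standard values and natural-abundance boron is assumed.

```python
# Predicted Raman shift versus 15N composition, using the empirical relation above.
def reduced_mass(p15, m_B=10.811, m_N14=14.003, m_N15=15.000):
    """Reduced mass of the B-N pair for a given 15N fraction (natural-abundance boron assumed)."""
    m_N = (1 - p15) * m_N14 + p15 * m_N15
    return m_B * m_N / (m_B + m_N)

def raman_shift(p15):
    """Empirical relation above: Raman shift in cm^-1 versus reduced mass."""
    return -537.0 * reduced_mass(p15) ** 0.5 + 2691.0

for p15 in (0.0, 0.6, 1.0):
    print(f"p15 = {p15:.1f}: mu = {reduced_mass(p15):.3f}, shift ~ {raman_shift(p15):.0f} cm^-1")
```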
Next, we perform ODMR measurements to obtain ^15N isotope effects on V_B defects.
V_B defects are generated by helium ion implantation (acceleration voltage 30 keV, dose 1×10^15 cm^-2) into flakes cleaved with Scotch tape.
The flakes are attached to silicon substrates (with SiO_2 thickness of 90 nm).
We use the homemade confocal microscope <cit.> with optimized optical filters for the photoluminescence (PL) of V_B defects (750∼1000 nm).
A broadband microwave antenna with a copper wire soldered to a coplanar waveguide is used to mitigate unwanted distortions in the broad resonance spectrum of V_B defects.
A permanent magnet is approached from the lower side of the sample in the direction of the optical (z) axis.
Figure <ref>(a) shows the ODMR spectrum (m_S=0↔-1) of hB^14N at B_z ∼ 40 mT.
The broad signal consists of several closely overlapping Lorentzians.
The solid line is the fitted curve using Eq. (<ref>) with p_15 = 0.
It reproduces the experimental result well.
The parameters obtained by this fitting are f_-,0 = 2312 MHz, C = 5.6%, dν = 47 MHz, and ^(14N)A_zz = 43 MHz.
The obtained HFI parameter of ^14N spin is consistent with the values of previous works <cit.> within a typical error of a few MHz.
Generally, it is impossible to determine the sign of the HFI parameter from this fitting.
From the positive zero-field splitting of the ground state <cit.> and the spectral change at the ground state level anticrossing <cit.>, we determine that the sign of ^(14N)A_zz is positive.
Note that C and dν depend on the measurement conditions, such as laser power and microwave amplitude.
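A minimal sketch of such a fitting step is given below, reusing odmr_spectrum() from the earlier snippet; the synthetic trace merely stands in for the measured hB^14N data and this is not the actual analysis code.

```python
# Hedged sketch of fitting the p15 = 0 spectrum model to (synthetic) data.
import numpy as np
from scipy.optimize import curve_fit

def model_14N(f, f0, A14, C, dnu):
    # p15 = 0: all three neighbors are 14N, so only A14 enters
    return odmr_spectrum(f, n15=0, f0=f0, A14=A14, A15=0.0, C=C, dnu=dnu)

freq = np.linspace(2150, 2500, 1500)                    # MHz
signal = model_14N(freq, 2312.0, 43.0, 0.056, 47.0)     # placeholder "data"
signal = signal + np.random.default_rng(0).normal(0.0, 2e-3, freq.size)

popt, pcov = curve_fit(model_14N, freq, signal, p0=(2300.0, 40.0, 0.05, 40.0))
print(popt)   # recovered (f0, A14, C, dnu)
```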
Next, we show the result of hB^15N in Fig. <ref>(c).
The resonance spectrum clearly consists of four dips, and their separation is larger than in hB^14N.
These are the nitrogen isotope effects on V_B defects.
The solid line is the fitted curve using Eq. (<ref>) with p_15 = 1 and reproduces the experimental result well.
The parameters obtained by this fitting are f_-,0 = 2308 MHz, C = 11%, dν = 51 MHz, and ^(15N)A_zz = ±64 MHz.
The obtained HFI parameter of ^15N spins, ^(15N)A_zz, is ±1.4 times larger than the ^(14N)A_zz obtained above.
It is reasonable considering the ratio of the gyromagnetic ratio of ^14N and ^15N spins.
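Explicitly, |γ_15N|/γ_14N = 4.316/3.077 ≈ 1.40, so one expects |^(15N)A_zz| ≈ 1.40 × 43 MHz ≈ 60 MHz, roughly consistent with the fitted magnitude of 64 MHz.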
Our observation supports a number of benefits of ^15N isotope enrichment we expected above, including increased sensitivity and control fidelity.
This is the central result of this work.
In addition, we measure hB^14+15N and find that the measured spectrum is consistent with the fitting using the HFI parameters and p_15 = 0.6 [Fig. <ref>(b)].
The reason there are only slight undulations in the spectrum is that it contains all signals from #0 to #3 [see Fig. <ref>].
High isotopic composition is necessary to obtain isotope effects useful for quantum sensing.
Finally, to determine the sign of ^(15N)A_zz, we use dynamic nuclear polarization at the excited state level anticrossing (B_z ∼ 70 mT)<cit.>.
In this situation, the angular momentum of the optically polarized electron spins in V_B defects is transferred to the nuclear spins by flip-flops in the excited state; the nuclear spin polarization increases positively <cit.> independent of nitrogen isotopes.
Figure <ref>(d) is the ODMR spectrum of hB^15N at the magnetic field where we observe the largest polarization.
Compared to the Fig. <ref>(c), there is clearly an increase in the signal on the high-frequency side and a decrease in the signal on the low-frequency side.
The polarization of ^15N spins estimated from the area of spectra <cit.> is 16%.
Since it is enhanced to 27% when the laser power is increased from 0.6 mW [Fig. <ref>(d)] to 5 mW [Fig. <ref>(e)], we conclude that this behavior is the result of optical polarization.
The trend of the observed change in resonance is opposite to that of conventional samples with natural composition ratios <cit.>.
It means that the sign of the HFI parameter is opposite to ^(14N)A_zz, i.e., ^(15N)A_zz=-64MHz, which is consistent with the different signs of the gyromagnetic ratio of ^14N and ^15N spin.
In this work, we examine nitrogen isotope effects on V_B defects in nitrogen isotopically enriched hBN.
We measure ^15N isotopically enriched hBN crystals synthesized using metathesis reaction under high pressure<cit.>.
In the hBN crystals with different ^15N isotopic compositions, an isotope effect on the phonon energy due to changes in the reduced mass is confirmed.
The HFI parameter of ^15N spin is determined to be -64 MHz from the fitting of ODMR spectra of V_B defects produced by helium ion implantation.
The demonstrated uncomplicated spectrum of hB^15N is beneficial for achieving high sensitivity.
Further, when combined with ^10B isotope enrichment techniques, the sensitivity will be optimized by improving the coherence properties of V_B defects<cit.>.
Sensor labeling with nitrogen isotopes may enable us to identify multiple sensor locations within a device stacked with two-dimensional materials.
The increased control fidelity and distinct optical polarization resulting from enhanced spectral separation would also make hB^15N useful as a polarization agent <cit.> and platform for quantum information processing.
Furthermore, nitrogen isotope enrichment of hBN is essential in studying color centers other than V_B defects, such as carbon-related defects<cit.>.
Our investigation, which reveals nitrogen isotope effects, is a vital step toward the design of hBN for quantum technologies.
We thank Kenji Watanabe (NIMS) for material preparation, Shu Nakaharai (TUT) for useful discussions, Kohei M. Itoh (Keio) for letting us use the confocal microscope system, and Ryota Akiyama (UTokyo) for supporting the Raman measurement.
This work was partially supported by “Advanced Research Infrastructure for Materials and Nanotechnology in Japan (ARIM)" (Proposal No. JPMXP1222UT1131) of the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT), “World Premier International Research Center Initiative on Materials Nanoarchitectonics (WPI-MANA)" supported by MEXT.
This work was supported by Grants-in-Aid for Scientific Research (KAKEN) Nos. JP22K03524, JP19H00656, JP19H05826, JP23H01103, and JP23H02052, and Next Generation Artificial Intelligence Research Center at the University of Tokyo.
[Itoh2014] K. M. Itoh and H. Watanabe, "Isotope engineering of silicon and diamond for quantum computing and sensing applications," MRS Communications 4, 143–157 (2014). https://doi.org/10.1557/mrc.2014.32
[Balasubramanian2009] G. Balasubramanian et al., "Ultralong spin coherence time in isotopically engineered diamond," Nature Materials 8, 383–387 (2009). https://doi.org/10.1038/nmat2420
[Ishikawa2012] T. Ishikawa et al., "Optical and spin coherence properties of nitrogen-vacancy centers placed in a 100 nm thick isotopically purified diamond layer," Nano Letters 12, 2083–2087 (2012). https://doi.org/10.1021/nl300350r
[Ohashi2013] K. Ohashi et al., "Negatively charged nitrogen-vacancy centers in a 5 nm thin ^12C diamond film," Nano Letters 13, 4733–4738 (2013). https://doi.org/10.1021/nl402286v
[Muhonen2014] J. T. Muhonen et al., "Storing quantum information for 30 seconds in a nanoelectronic device," Nature Nanotechnology 9, 986–991 (2014). https://doi.org/10.1038/nnano.2014.211
[Veldhorst2014] M. Veldhorst et al., "An addressable quantum dot qubit with fault-tolerant control-fidelity," Nature Nanotechnology 9, 981–985 (2014). https://doi.org/10.1038/nnano.2014.216
[Kleinsasser2016] E. E. Kleinsasser et al., "High density nitrogen-vacancy sensing surface created via He^+ ion implantation of ^12C diamond," Applied Physics Letters 108, 202401 (2016). https://doi.org/10.1063/1.4949357
[Rabeau2006] J. R. Rabeau et al., "Implantation of labelled single nitrogen vacancy centers in diamond using ^15N," Applied Physics Letters 88, 023113 (2006). https://doi.org/10.1063/1.2158700
[vanDam2019] S. B. van Dam et al., "Optical coherence of diamond nitrogen-vacancy centers formed by ion implantation and annealing," Physical Review B 99, 161203 (2019). https://doi.org/10.1103/physrevb.99.161203
[Gottscholl2020] A. Gottscholl et al., "Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature," Nature Materials 19, 540–545 (2020). https://doi.org/10.1038/s41563-020-0619-6
[Gottscholl2021] A. Gottscholl et al., "Spin defects in hBN as promising temperature, pressure and magnetic field quantum sensors," Nature Communications 12, 4480 (2021). https://doi.org/10.1038/s41467-021-24725-1
[Huang2022] M. Huang et al., "Wide field imaging of van der Waals ferromagnet Fe_3GeTe_2 by spin defects in hexagonal boron nitride," Nature Communications 13, 5369 (2022). https://doi.org/10.1038/s41467-022-33016-2
[Healey2022] A. J. Healey et al., "Quantum microscopy with van der Waals heterostructures," Nature Physics 19, 87–91 (2022). https://doi.org/10.1038/s41567-022-01815-5
[Kumar2022] P. Kumar et al., "Magnetic imaging with spin defects in hexagonal boron nitride," Physical Review Applied 18, L061002 (2022). https://doi.org/10.1103/physrevapplied.18.l061002
[Sasaki2023] K. Sasaki et al., "Magnetic field imaging by hBN quantum sensor nanoarray," Applied Physics Letters 122, 244003 (2023). https://doi.org/10.1063/5.0147072
[Vuong2017] T. Q. P. Vuong et al., "Isotope engineering of van der Waals interactions in hexagonal boron nitride," Nature Materials 17, 152–158 (2017). https://doi.org/10.1038/nmat5048
[Cusc2018] R. Cuscó et al., "Isotopic effects on phonon anharmonicity in layered van der Waals crystals: Isotopically pure hexagonal boron nitride," Physical Review B 97, 155435 (2018). https://doi.org/10.1103/physrevb.97.155435
[Haykal2022] A. Haykal et al., "Decoherence of V_B^- spin defects in monoisotopic hexagonal boron nitride," Nature Communications 13, 4347 (2022). https://doi.org/10.1038/s41467-022-31743-0
[Janzen2023] E. Janzen et al., "Boron and nitrogen isotope effects on hexagonal boron nitride properties," arXiv (2023). https://doi.org/10.48550/ARXIV.2306.13358
[Chen2020] K. Chen et al., "Ultrahigh thermal conductivity in isotope-enriched cubic boron nitride," Science 367, 555–559 (2020). https://doi.org/10.1126/science.aaz6149
[TaniguchiXXXX] T. Taniguchi et al., unpublished study.
[Gao2022] X. Gao et al., "Nuclear spin polarization and control in hexagonal boron nitride," Nature Materials 21, 1024–1028 (2022). https://doi.org/10.1038/s41563-022-01329-8
[Gracheva2023] I. N. Gracheva et al., "Symmetry of the hyperfine and quadrupole interactions of boron vacancies in a hexagonal boron nitride," The Journal of Physical Chemistry C 127, 3634–3639 (2023). https://doi.org/10.1021/acs.jpcc.2c08716
[Taniguchi2007] T. Taniguchi and K. Watanabe, "Synthesis of high-purity boron nitride single crystals under high pressure by using Ba–BN solvent," Journal of Crystal Growth 303, 525–529 (2007). https://doi.org/10.1016/j.jcrysgro.2006.12.061
[Stenger2017] I. Stenger et al., "Low frequency Raman spectroscopy of few-atomic-layer thick hBN crystals," 2D Materials 4, 031003 (2017). https://doi.org/10.1088/2053-1583/aa77d4
[Misonou2020] D. Misonou et al., "Construction and operation of a tabletop system for nanoscale magnetometry with single nitrogen-vacancy centers in diamond," AIP Advances 10, 025206 (2020). https://doi.org/10.1063/1.5128716
[Murzakhanov2022] F. F. Murzakhanov et al., "Electron-nuclear coherent coupling and nuclear spin readout through optically polarized V_B^- spin states in hBN," Nano Letters 22, 2718–2724 (2022). https://doi.org/10.1021/acs.nanolett.1c04610
[Gu2023] H. Gu, Y. Nakamura, K. Sasaki, and K. Kobayashi, "Multi-frequency composite pulse sequences for sensitivity enhancement in hexagonal boron nitride quantum sensor," Applied Physics Express 16, 055003 (2023). https://doi.org/10.35848/1882-0786/acd1d1
[Shihao2023] S. Ru et al., "Robust nuclear spin polarization via ground-state level anti-crossing of boron vacancy defects in hexagonal boron nitride," arXiv (2023). https://doi.org/10.48550/ARXIV.2306.15960
[Jacques2009] V. Jacques et al., "Dynamic polarization of single nuclear spins by optical pumping of nitrogen-vacancy color centers in diamond at room temperature," Physical Review Letters 102, 057403 (2009). https://doi.org/10.1103/physrevlett.102.057403
[Broadway2018] D. A. Broadway et al., "Quantum probe hyperpolarisation of molecular nuclear spins," Nature Communications 9, 1246 (2018). https://doi.org/10.1038/s41467-018-03578-1
[Jannin2019] S. Jannin, J.-N. Dumez, P. Giraudeau, and D. Kurzbach, "Application and methodology of dissolution dynamic nuclear polarization in physical, chemical and biological contexts," Journal of Magnetic Resonance 305, 41–50 (2019). https://doi.org/10.1016/j.jmr.2019.06.001
[Mendelson2020] N. Mendelson et al., "Identifying carbon as the source of visible single-photon emission from hexagonal boron nitride," Nature Materials 20, 321–328 (2020). https://doi.org/10.1038/s41563-020-00850-y
[Chejanovsky2021] N. Chejanovsky et al., "Single-spin resonance in a van der Waals embedded paramagnetic defect," Nature Materials 20, 1079–1084 (2021). https://doi.org/10.1038/s41563-021-00979-4
[Stern2023] H. L. Stern et al., "A quantum coherent spin in a two-dimensional material at room temperature," arXiv (2023). https://doi.org/10.48550/ARXIV.2306.13025
[Scholten2023] S. C. Scholten et al., "Multi-species optically addressable spin defects in a van der Waals material," arXiv (2023). https://doi.org/10.48550/ARXIV.2306.16600
§ SPIN HAMILTONIAN
In this section, we explain the spin Hamiltonian.
The spin Hamiltonian of the ground state of a V_B defect would be given as,
Ĥ = Ĥ_ZFS + Ĥ_Ze + Ĥ_Zn + Ĥ_HFI + Ĥ_QI,
Ĥ_ZFS = D Ŝ_z^2
- E_y (Ŝ_xŜ_y + Ŝ_yŜ_x) - E_x (Ŝ_x^2 - Ŝ_y^2),
Ĥ_Ze = γ_e B_0 ·Ŝ,
Ĥ_Zn = ∑_j=1^3 -γ_(j)B_0 ·Î_(j),
Ĥ_HFI = ∑_j=1^3Ŝ A_HFI,(j)Î_(j),
Ĥ_QI = ∑_j=1^3 P_p(j),(j)Î_p(j),(j)^2 + P_z,(j)Î_z,(j)^2 + P_o(j),(j)Î_o(j),(j)^2,
where, z is the direction perpendicular to the hBN plane (the direction of the symmetry axis of the V_B defect), x and y are the in-plane directions, D is the zero-field splitting (ZFS) including the effects of electric field and strain, γ_e = 28 MHz/mT is the gyromagnetic ratio of electron spin, B_0 is the magnetic field vector, E_x and E_y are strain parameters related to local electric field and crystal strain<cit.>, j (=1,2,3) are labels of nearest-neighbor nitrogen sites, γ_(j) is gyromagnetic ratio of nitrogen nuclear spins, A_HFI,(j) is hyperfine interaction (HFI) tensor, Î_k,(j) is nuclear spin operator in the k direction, and P_k,(j) is the nuclear quadrupole moment in the k direction.
Ĥ_ZFS is the ZFS term and Ĥ_Ze is the Zeeman term of the electron spin.
We assume that the strain terms take the same form as the NV center in diamond <cit.>, which has a symmetry close to the V_B defect.
Typical parameter values for V_B defects are D∼3450MHz and E_x,E_y∼ 50MHz <cit.>.
Ĥ_Zn is the Zeeman term of nuclear spin, Ĥ_HFI is the HFI term, and Ĥ_QI is the nuclear quadrupole moment term.
They are based on the form of Ref. Gracheva2023.
p(j) is the direction from the vacancy (electron spin) to the nearest-neighbor nitrogen site j, and the direction o(j) is along the cross product of p(j) and z.
The gyromagnetic ratio is γ_14N = 3.077 kHz/mT for ^14N spin and γ_15N = -4.316 kHz/mT for ^15N spin.
The interaction with boron and remote nitrogen spins other than the nearest-neighbor ones is small and appears as a broadening of the electron spin resonance linewidth<cit.>, so we do not consider its details.
We introduce an approximation that is valid under quantum sensing conditions.
When a magnetic field is applied with sufficient strength in the direction of the symmetry axis (B_0 = B_z e_z), the effect of strain, which degrades the magnetic field sensitivity, can be ignored.
Specifically, this condition is given by B_z ≫ E_x(y)/γ_e.
Except in the vicinity of the ground state level anticrossing (D/γ_e ∼ 125 mT), the Hamiltonian can be approximated as,
Ĥ_ZFS ∼ D Ŝ_z^2
Ĥ_Ze = γ_e B_z Ŝ_z,
Ĥ_HFI ∼Ŝ_z ∑_j=1^3 ( A_zx,(j)Î_x,(j) + A_zy,(j)Î_y,(j) + A_zz,(j)Î_z,(j) ),
where A_zx, A_zy, and A_zz are the elements of HFI tensor.
Within this approximation, the electron spin is quantized in the z direction.
Then, we also introduce an approximation to the nuclear spin terms.
The HFI tensor consists of the dipole interaction and the Fermi contact interaction.
The element of the dipole interaction tensor between electron and nuclear spins is given by,
^dipoleA_αβ = (μ_0/4π) (h γ_e γ_n/r^3) [ 3 (r̂·e_α)(r̂·e_β) - e_α·e_β],
where α (= x,y,z) is the direction of the electron spin, β (= x,y,z) is the direction of the nuclear spin, h is the Planck constant, r is the position vector of the nuclear spin with respect to the electron spin, r = |r| its magnitude, and r̂ = r/r the corresponding unit vector.
Since the electron spin is quantized in the z direction, only the α = z term needs to be considered.
Assuming that the electron spin is localized at the vacancy position, r·e_z=0 is satisfied, and we obtain,
^dipoleA_zz = -(μ_0/4π) (h γ_e γ_n/r^3),
^dipoleA_zx = 0,
^dipoleA_zy = 0.
The Fermi contact interaction ^FermiA is a term arising from the overlapping of wave functions of electron and nuclear spins and is zero except for the isotropic component (α = β).
Thus, the HFI term can be approximated as,
Ĥ_HFI ∼Ŝ_z ∑_j=1^3 A_zz,(j)Î_z,(j).
A_zz,(j) and typical line widths of the V_B defects are around 40 MHz or larger.
Under typical experimental conditions, they are an order of magnitude larger than the nuclear spin's Zeeman effect and nuclear quadrupole moment.
Therefore, we neglect nuclear spin terms other than HFI and express the effective spin Hamiltonian as,
Ĥ = D Ŝ_z^2 + γ_e B_z Ŝ_z + Ŝ_z ∑_j=1^3 A_zz,(j)Î_z,(j).
It corresponds to Eq. (1) in the main text and is equivalent to ignoring the nuclear spin's Zeeman effect in the Eq. (8) of the Supplementary Information of Ref. Gao2022.
In this condition, each nitrogen nuclear spin is quantized in the z direction, and energy states according to their total quantum number m_I,tot can be observed.
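As a numerical sanity check of this effective Hamiltonian, the short sketch below evaluates the m_S = 0 ↔ -1 transition frequencies of a #3 defect (three ^15N neighbors) in the product basis, where the Hamiltonian above is diagonal; the parameter values are the typical ones quoted in this work and the code is illustrative only.

```python
# Transition frequencies of the effective Hamiltonian for a #3 defect (three 15N).
import itertools

D, gamma_e, Bz = 3450.0, 28.0, 40.0    # MHz, MHz/mT, mT (typical values from the text)
A15 = -64.0                            # MHz, 15N hyperfine parameter determined in this work

def energy(mS, nuclear_state):
    """Diagonal energy for given m_S and the three 15N m_I values."""
    return D * mS**2 + gamma_e * Bz * mS + mS * A15 * sum(nuclear_state)

freqs = sorted({round(abs(energy(0, s) - energy(-1, s)), 1)
                for s in itertools.product((-0.5, 0.5), repeat=3)})
print(freqs)   # four resonance lines separated by |A15| around D - gamma_e*Bz = 2330 MHz
```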
§ ADDITIONAL DATA OF FIGURE 3 IN THE MAIN TEXT
This section contains additional data related to Fig. 3 in the main text.
Figures <ref>(a) and (b) are enlarged images of Figs. 3(a) and (c) in the main text, respectively.
Based on the fitting results, the signals of each resonance line are decomposed and shown.
The signal of hB^15N [Fig. <ref>(b)] has a simpler spectrum with higher contrast and narrower linewidths overall than the conventional case [Fig. <ref>(a)], because it contains fewer resonance lines and their separation is larger.
A slight bias in the signal contrast appears as a deviation from the fitting.
We have not yet identified its cause.
The possible causes are polarization of nuclear spins or frequency dependence of microwave power.
Figure <ref> contains additional data of dynamic nuclear polarization at excited state level anticrossing.
The condition for the excited state anticrossing is estimated to be 76 mT from the zero-field splitting of 2130 MHz obtained from the ODMR spectrum of the excited state measured at zero field.
We show ODMR spectrum around the condition in Fig. <ref>(a).
We observe that the spectrum becomes biased toward the high-frequency side around 70 mT.
Figure <ref>(b) shows the ^15N spin polarization estimated by <cit.>,
Polarization = ∑ m_I,tot A_m_I,tot / [ (3/2) ∑ A_m_I,tot ],
where A_m_I,tot is the area of the spectrum belonging to the m_I,tot state, estimated from the product of signal amplitude and line width obtained by fitting each spectrum.
The summation symbols in the denominator and numerator are for the possible m_I,tot states.
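A minimal sketch of this estimate is shown below; the area values are placeholders for the fitted amplitude–linewidth products, not measured numbers.

```python
# Sketch of the 15N polarization estimate from per-line spectral areas.
def polarization(areas):
    """areas: mapping of m_I,tot (-1.5, -0.5, 0.5, 1.5) to its spectral area (amplitude x width)."""
    total = sum(areas.values())
    return sum(m * a for m, a in areas.items()) / (1.5 * total)

# Unpolarized check: areas proportional to the degeneracies 1:3:3:1 give zero polarization.
print(polarization({-1.5: 1.0, -0.5: 3.0, 0.5: 3.0, 1.5: 1.0}))
```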
The polarization reaches its maximum around 70 mT, near the condition where it increases with increasing laser power [Fig. <ref>(c)].
It is a typical behavior of optical nuclear spin polarization at the excited state level anticrossing <cit.>.
We determine the sign of the HFI parameter of ^15N spin based on this evidence.
The above observations reveal several behaviors that have not been observed before.
The first point is the magnetic field condition which achieves the maximum polarization.
It differs from the previous works (∼74 mT) <cit.>.
The second and third points are a decrease in signal contrast around 73 mT and a polarization sign reversal above 75 mT, respectively.
It is unlikely that the frequency dependence of microwave power is responsible for this since the contrast of the ODMR spectra at the same microwave frequency is very different at different magnetic field conditions.
It remains to be clarified whether this is due to ^15N isotope effects, field misalignment, other defects in the sample, etc.
^15N isotope enrichment may have allowed us to observe such behaviors that could not be observed with the conventional broad anticrossing condition.
We believe that these interesting behaviors will be elucidated in future studies of ODMR spectra and ^15N spin polarization in a wide magnetic field range, including ground state level anticrossing <cit.>.
[Dolde2011] F. Dolde et al., "Electric-field sensing using single diamond spins," Nature Physics 7, 459–463 (2011). https://doi.org/10.1038/nphys1969
[Mittiga2018] T. Mittiga et al., "Imaging the local charge environment of nitrogen-vacancy centers in diamond," Physical Review Letters 121, 246402 (2018). https://doi.org/10.1103/physrevlett.121.246402
[Gottscholl2020] A. Gottscholl et al., "Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature," Nature Materials 19, 540–545 (2020). https://doi.org/10.1038/s41563-020-0619-6
[Gu2023] H. Gu, Y. Nakamura, K. Sasaki, and K. Kobayashi, "Multi-frequency composite pulse sequences for sensitivity enhancement in hexagonal boron nitride quantum sensor," Applied Physics Express 16, 055003 (2023). https://doi.org/10.35848/1882-0786/acd1d1
[Ivdy2020] V. Ivády et al., "Ab initio theory of the negatively charged boron vacancy qubit in hexagonal boron nitride," npj Computational Materials 6, 41 (2020). https://doi.org/10.1038/s41524-020-0305-x
[Gottscholl2021] A. Gottscholl et al., "Spin defects in hBN as promising temperature, pressure and magnetic field quantum sensors," Nature Communications 12, 4480 (2021). https://doi.org/10.1038/s41467-021-24725-1
[Gao2022] X. Gao et al., "Nuclear spin polarization and control in hexagonal boron nitride," Nature Materials 21, 1024–1028 (2022). https://doi.org/10.1038/s41563-022-01329-8
[Gracheva2023] I. N. Gracheva et al., "Symmetry of the hyperfine and quadrupole interactions of boron vacancies in a hexagonal boron nitride," The Journal of Physical Chemistry C 127, 3634–3639 (2023). https://doi.org/10.1021/acs.jpcc.2c08716
[Haykal2022] A. Haykal et al., "Decoherence of V_B^- spin defects in monoisotopic hexagonal boron nitride," Nature Communications 13, 4347 (2022). https://doi.org/10.1038/s41467-022-31743-0
[Shihao2023] S. Ru et al., "Robust nuclear spin polarization via ground-state level anti-crossing of boron vacancy defects in hexagonal boron nitride," arXiv (2023). https://doi.org/10.48550/ARXIV.2306.15960
arXiv:2307.04973v1 (cs.CV), published 2023-07-11
SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image
Guoyao Deng, Ke Zou, Kai Ren, Meng Wang, Xuedong Yuan, Sancong Ying, Huazhu Fu
1 National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Sichuan, China
2 College of Computer Science, Sichuan University, Sichuan, China
3 Institute of High Performance Computing, A*STAR, Singapore
SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image
Guoyao Deng1, Ke Zou1,3, Kai Ren2, Meng Wang3, Xuedong Yuan2, Sancong Ying2 and Huazhu Fu3
August 12, 2023
Recently, the Segment Anything Model (SAM) has taken a significant step towards general artificial intelligence. Simultaneously, its reliability and fairness have garnered significant attention, particularly in the field of healthcare. In this study, we propose multi-box prompt-triggered uncertainty estimation for SAM to assess the reliability of segmented lesions or tissues. We estimate the distribution of SAM predictions using Monte Carlo simulation with prior distribution parameters, employing different prompts as a form of test-time augmentation. Our experimental results demonstrate that multi-box prompt augmentation enhances SAM performance and provides uncertainty for each pixel. This presents a groundbreaking paradigm for a reliable SAM.
§ INTRODUCTION
Large-scale foundation models are increasingly gaining popularity among artificial intelligence researchers. In the realm of natural language processing (NLP), the Generative Pre-trained Transformer (GPT) <cit.> and ChatGPT, developed by OpenAI, have witnessed rapid growth owing to their exceptional ability to generalize. These models have found applications in diverse domains such as autonomous driving and healthcare. The remarkable generalization capabilities of large models often instill a sense of trust among users; however, their fairness and reliability have also been subject to some degree of scrutiny.
Nowadays, there is a growing wave of enthusiasm surrounding computer vision due to the release of the Segment Anything Model (SAM) <cit.> by Meta AI. SAM has been trained on the massive SA-1B dataset, which consists of over 11 million images and one billion masks, making it an excellent tool. It excels at producing accurate segmentation results from various types of prompts, including foreground/background points, rough boxes or masks, and free-form text. The introduction of SAM has led many researchers to believe that general artificial intelligence has finally arrived. However, some researchers have expressed concerns about the performance of SAM <cit.>. Specifically, they have identified areas such as industrial defect detection <cit.>, camouflaged target detection <cit.>, and tumor and lesion segmentation <cit.> in medical images where further improvements are needed. Additionally, the reliability of SAM still requires further study.
Uncertainty estimation <cit.> is one of the ways to provide reliability for SAM. Previously, uncertainty estimation has demonstrated its reliability and robustness in several medical segmentation tasks <cit.>, including skin lesions and brain tumors <cit.>, among others. The current uncertainty estimation methods can be roughly divided into deterministic-based methods <cit.>, Bayesian Neural Network-based methods <cit.>, ensemble-based methods <cit.>, dropout-based methods <cit.> and test-time augmentation-based methods <cit.>. The focus of this paper is to keep the simplicity and retain the original structure of SAM while achieving pixel-level uncertainty estimation.
In Fig. <ref>, we present the optic disc segmentation results<cit.> for both high- and low-quality fundus images under different conditions. SAM demonstrates better segmentation results for high-quality images, and the inclusion of different conditions leads to certain performance improvements. However, SAM's segmentation results for low-quality images are not satisfactory. Nevertheless, the inclusion of different conditions greatly enhances its performance, particularly with more accurate box prompts. Furthermore, we have observed that different levels of box prompts tend to yield diverse results. This observation motivates us to introduce a novel approach, namely multi-box prompt-induced uncertainty estimation, for medical images.
Therefore, the primary focus of this paper is to enhance the segmentation accuracy by employing multiple box prompts. This approach enables us to establish pixel-level reliability through uncertainty estimation. Specifically, we use SAM to predict the output distribution under different multi-box prompts. SAM with multi-box prompts generates numerous samples from the predictive distribution. Subsequently, these samples are used to calculate the variance, which provides an uncertainty estimate for the medical image segmentation. Our experiments demonstrate that multi-box prompts not only enhance performance on low-quality medical images but also provide uncertainty estimation for them.
§ METHOD
The overall framework of our proposed method is depicted in Fig. <ref>. Our main focus is to enhance the reliability and accuracy of SAM in the context of zero-shot learning. To improve the accuracy of SAM, we incorporate multi-box prompts, which enable us to obtain more precise medical image segmentation results from the distribution. Specifically, we estimate the distribution of SAM predictions using Monte Carlo simulation with prior distribution parameters. This approach allows our method to estimate the aleatoric uncertainty by considering multiple forecasts for a single medical image.
§.§ Mask Selection Strategy
Under the unprompted setting, SAM generates multiple binary masks and can pop out several potential objects within an input. For a fair evaluation of the regions of interest in a specific segmentation task, we follow the strategy of <cit.> to select the most appropriate mask based on its ground-truth mask. Formally, given N binary predictions {y^i}_{i=1}^N and the ground truth G for an input image, we calculate Dice scores for each pair to generate a set of evaluation scores {D^i}_{i=1}^N. We finally select the mask with the highest Dice score from this set.
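For concreteness, this selection step can be sketched as follows (an illustrative Python sketch assuming binary numpy masks; the helper names are ours and not part of the SAM interface):

import numpy as np

def dice_score(pred, gt, eps=1e-8):
    # Dice coefficient between two binary masks
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def select_best_mask(masks, gt):
    # return the candidate mask with the highest Dice score against the ground truth
    scores = [dice_score(m, gt) for m in masks]
    return masks[int(np.argmax(scores))]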
§.§ SAM with multi-box prompts
Prompts can introduce errors into the model's inference due to their inherent inaccuracies. To reduce the influence of prompt variance, we randomize M box prompts B={b^1,b^2,⋯,b^M}. Each box prompt guides SAM to generate a different segmentation result. Through this strategy, we obtain the predictions Y={y^1,y^2,⋯,y^M} of SAM under different prior cues, and combining them improves the segmentation accuracy of SAM and reduces uncertainty. The combined prediction is computed as:
ŷ = 1/M∑_i = 1^M f_SAM( I,b^i) ,
where ŷ denotes the combined prediction for image I.
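A minimal sketch of this combination step is given below; the call into SAM is abstracted behind a placeholder predict_mask(image, box) returning a soft mask in [0, 1], since the exact predictor interface is not specified here, and the jittering scale is an illustrative choice:

import numpy as np

def jitter_box(box, img_shape, scale, rng):
    # randomly perturb a (x0, y0, x1, y1) box by up to `scale` of its width/height
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    noise = rng.uniform(-scale, scale, size=4) * np.array([w, h, w, h])
    jittered = np.asarray(box, dtype=float) + noise
    H, W = img_shape[:2]
    return np.clip(jittered, 0.0, [W - 1.0, H - 1.0, W - 1.0, H - 1.0])

def combined_prediction(image, base_box, predict_mask, M=10, scale=0.05, seed=0):
    # average the M soft masks returned by SAM for M jittered box prompts;
    # predict_mask(image, box) is a placeholder for the call into SAM (f_SAM in the text)
    rng = np.random.default_rng(seed)
    preds = [predict_mask(image, jitter_box(base_box, image.shape, scale, rng))
             for _ in range(M)]
    return np.mean(np.stack(preds, axis=0), axis=0), preds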
§.§ Uncertainty estimation of SAM with multi-box prompts
Different box prompts cause variance in SAM's segmentation even if they refer to the same object from a human's point of view. Inspired by this, our proposed multi-box prompt (MBP) algorithm simulates the annotations of multiple clinical experts to generate the final predictions and uncertainty estimates. To quantify the uncertainty triggered by multi-box prompts, assume M box prompts B={b^1,b^2,⋯,b^M} that all refer to the ground truth. With the M box prompts and an input image I, SAM generates a set of predictions Y={y^1,y^2,⋯,y^M}. As shown in Fig. <ref>, we present an uncertainty estimation procedure for multi-box prompts.
We first describe aleatoric uncertainty from a single given image I by the entropy <cit.>:
U(y^i) = -∫ p(y^i|I) logp(y^i|I) dy,
U(y^i) estimates how diverse the prediction y^i is for the image I, where y^i = { p_1^i, p_2^i, ⋯, p_N^i } denotes the prediction pixels and N denotes the number of unique values in y^i.
Then, we run a Monte Carlo simulation using multi-box prompts to obtain a set of predictions. The uncertainty distribution is therefore approximated as follows:
U(Y|I) ≈ -∑_i = 1^M ∑_j = 1^N p_j^i log p_j^i.
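A simple way to realize this estimate at the pixel level is sketched below, assuming the M predictions are soft foreground masks and treating each pixel's mean foreground probability as a Bernoulli variable (one possible reading of the equations above, not the only one):

import numpy as np

def uncertainty_map(preds, eps=1e-8):
    # pixel-wise entropy of the mean foreground probability over the M predictions;
    # preds is a list/array of M soft masks with values in [0, 1]
    p = np.clip(np.mean(np.stack(preds, axis=0), axis=0), eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))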
§ EXPERIMENTS AND RESULTS
Two different methods are utilized to perform image degradation to verify the reliability of SAM. In this section, we will describe our evaluation protocols, compare the performance of SAM under different quality datasets, and visualize the qualitative results on fundus image segmentation.
§.§ Evaluation Protocols
∙ Dataset. We chose the sub-task of the REFUGE Challenge <cit.> that targets segmentation of the optic cup and disc in fundus photographs. For simplicity, we treat the disc and cup as one category. To evaluate the reliability of SAM more objectively, we artificially constructed low-quality data from the high-quality source data using two different methods: introducing Gaussian noise with various levels of standard deviation (σ), and applying the realistic degradation model proposed by Shen et al. <cit.>.
∙ Metrics. We use four commonly-used metrics for the evaluation: dice score (Dice), expected calibration error (ECE) <cit.>, structure measure (Sm) <cit.> and weighted F-measure (wFm) <cit.>.
§.§ Quantitative Evaluation
As shown in Table <ref>, we present the segmentation results of different SAM modes on high-quality medical images. We first compare SAM in "everything" mode and SAM in "box" mode on normal medical images, and find that the results of SAM in "box" mode are superior. Moreover, with the introduction of our algorithm, the performance of SAM improves further. Table <ref> and Table <ref> report the segmentation results of the SAM modes under Gaussian noise and degraded medical images, respectively. The performance of SAM in "everything" mode and SAM in "box" mode declines, whereas SAM in "multi-box" mode remains relatively stable, with a lower ECE. We therefore conclude that the inclusion of multi-box prompts enhances both the accuracy and the reliability of SAM.
§.§ Qualitative Comparison
As shown in Fig. <ref>, we first show the uncertainty estimation results of SAM in "multi-box" mode. The periphery of the optic disc is clearly marked as a region of uncertainty. Furthermore, we compare the segmentation results of the different SAM modes on normal and degraded medical images, as shown in Fig. <ref>. In "everything" mode, SAM struggles to segment the optic disc. With a box prompt, the optic disc can be segmented under normal conditions, but the results under Gaussian noise and image degradation are not satisfactory. In contrast, our method achieves better segmentation results even on degraded images and assigns weights to uncertain pixels. This opens a new paradigm for SAM towards robust and reliable medical image segmentation.
§ DISCUSSION AND CONCLUSION
In this paper, we investigated the segmentation performance of SAM on fundus images. The results show that box prompts significantly improve the segmentation, but different box prompts lead to variations in the predictions. The main method proposed in this paper, prompt augmentation, helps estimate these variations as aleatoric uncertainty and produces an uncertainty distribution map that highlights regions that are challenging to segment. The uncertainty map not only improves the segmentation process and the final results but also enables the development of more advanced methods for segmenting fundus images. Moreover, it offers valuable guidance in areas where manual annotation is required. Using the uncertainty distribution map to guide segmentation and improve accuracy is a noteworthy feature of our approach. Furthermore, the uncertainty map can help identify potential segmentation errors and support further analysis, providing useful information for clinicians.
|
http://arxiv.org/abs/2307.05236v1 | 20230711130927 | Single Transverse Spin Asymmetry as a New Probe of SMEFT Dipole Operators | [
"Xin-Kai Wen",
"Bin Yan",
"Zhite Yu",
"C. -P. Yuan"
] | hep-ph | [
"hep-ph",
"hep-ex"
] | |
http://arxiv.org/abs/2307.07297v1 | 20230714122032 | Repeated Game Dynamics in Population Protocols | [
"Dan Alistarh",
"Krishnendu Chatterjee",
"Mehrdad Karrabi",
"John Lazarsfeld"
] | cs.DC | [
"cs.DC",
"cs.GT"
] |
Repeated Game Dynamics in Population Protocols
Dan Alistarh, Krishnendu Chatterjee, Mehrdad Karrabi, John Lazarsfeld
August 12, 2023
===========================================================================================================================================
We initiate the study of repeated game dynamics in the population model, in which we are given a population of n nodes, each with its own local strategy, that interact uniformly at random by playing multi-round, two-player games. After each game, the two participants receive rewards according to a given payoff matrix, and may update their local strategies depending on this outcome.
In this setting, we ask how the distribution of player strategies evolves with respect to the number of node interactions (time complexity), as well as the number of possible player states (space complexity), determining the stationary properties of such game dynamics.
Our main technical results analyze the behavior of a family of Repeated Prisoner's Dilemma dynamics in this model, for which we provide an exact characterization of the stationary distribution, and give bounds on convergence time and on the optimality gap of its expected rewards.
Our results follow from a new connection between Repeated Prisoner's Dilemma dynamics in a population, and a class of high-dimensional, weighted Ehrenfest random walks, which we analyze for the first time. The results highlight non-trivial trade-offs between the state complexity of each node's strategy, the convergence of the process, and the expected average reward of nodes in the population. Our approach opens the door towards the characterization of other natural evolutionary game dynamics in the population model.
§ INTRODUCTION
The emergence of complex global behavior from the
interactions of simple, computationally-limited
agents is a key topic of interest in distributed computing.
A standard setting is the population protocol model, in which a set of n agents,
modeled as simple, anonymous state machines, interact
randomly in pairs with the goal of joint computation
over the system's state.
Since its introduction by Angluin et al. <cit.>,
this model has been used to characterize the evolution of
several families of dynamics for solving fundamental tasks such as majority, e.g. <cit.> and leader election, e.g. <cit.>.
One key feature of the population model is that it allows characterizing fine-grained notions of protocol convergence with respect to population size, total number of pair-wise interactions (time), and available per-node memory (space).
Specifically, the population model leads to fascinating trade-offs between the space and time complexity of relatively-simple local node dynamics, and their complex global convergence behavior, e.g. <cit.>.
In this paper, we are interested in a classic setting in game theory broadly defined as repeated games, in which players hold local strategies, and interact by playing pair-wise multi-round games.
Repeated games have been studied for decades in a variety of settings <cit.>,
leading to both well-known “folk theorems” <cit.>,
as well as to the study of evolutionary game dynamics for simple strategies <cit.>.
Yet, due to its complexity, the dynamics of repeated games in finite populations have primarily been studied only via simulation in prior work <cit.>.
In this context, we initiate the study of repeated game dynamics
in population protocols.
At a high level, our model works as follows: Given a population of n nodes, each with its own local strategy, at each time step, a pair of nodes
is selected uniformly at random to interact,
and the chosen nodes play a multi-round, two-player game.
Each node follows a strategy
determined by its own local state, and at the conclusion of the game,
the nodes receive rewards according to a fixed payoff matrix.
The nodes can update their state and corresponding
strategy using a deterministic transition rule
that depends on the outcome of the previous interaction.
Given a payoff matrix and a dynamics that specifies
how nodes update their strategies, we investigate how the
distribution of strategies in the population evolves over time.
We study this question
for the classic repeated prisoner's dilemma (RPD) game <cit.>,
which is a multi-round variant of the classic prisoner's dilemma (PD).
At a high level, the general setting for the type of dynamics we will consider is as follows:
* We assume that the nodes in the population can have one of three
strategy types: Always-Cooperate (AllC), Always-Defect (AllD), and Generous-Tit-for-Tat (GTFT).
While the former two strategy types (AllC and AllD) are fixed throughout the process, nodes playing the GTFT strategy can adjust their strategy, based on their direct interactions, via a local generosity parameter g ∈ [0, 1]. This leads to
the set of strategy options for GTFT players.
* Initially, the population contains
fractions α, β, and 1-(α+β)
of the strategy types AllC, AllD, and GTFT, respectively.
(Given that AllC and AllD nodes preserve their strategies,
these proportions are invariant over time.)
We refer to such populations as (α, β) populations.
Under this setting, the basic question of interest is then the following:
Given an (α, β) population,
a set of strategy options , and a dynamics
for updating
such strategies,
how does the distribution over the
strategies evolve over time?
In this paper, we introduce a novel family
of repeated prisoner's dilemma dynamics,
and we provide quantitative answers to the above question.
Our analysis relies on a new connection between our family of distributed RPD
dynamics, and a class of weighted, high-dimensional
random walks, which generalize the classic
Ehrenfest diffusion process <cit.>.
While this process has been thoroughly analyzed in
the two-dimensional setting, e.g. <cit.>, we introduce and motivate its high-dimensional
variant for the first time,
and our main technical contribution lies in characterizing
its stationary distribution and mixing time.
In turn, this analysis allows us to upper bound the
optimality gap of the expected rewards
of our dynamics when executed in a population.
Specifically, we show that linearly increasing the number of
local generosity states k (“memory”) at each node
will result in a linear decrease in the optimality gap,
at the cost of at most a linear increase in mixing time.
Before introducing our family of repeated game dynamics
and main results more formally, we specify
the game-theoretic context and components of our setting
in more detail.
§.§ Game-theoretic background
PD and RPD games In RPD, the two nodes start
by playing a single round of prisoner's dilemma.
At the end of each round, an additional round
may be played with (independent) probability δ;
otherwise, the game terminates.
We call δ the
continuation or restart probability.
In a single round of PD,
each player simultaneously chooses
to cooperate (C) or defect (D),
and the eponymous dilemma is that
each player's payoff-maximizing decision is to defect,
despite the fact that mutual cooperation leads
to a higher payoff.
This is captured in the standard PD payoff matrix,
which for simplicity we denote as a
reward vector v := [R, S, T, P]^⊤
over the four game states {CC, CD, DC, DD}
that are defined by the ordered actions of the first and second nodes,
and where the entries T > R > P > S specify the reward of the first node.
The most important class of PD reward vectors are
donation games <cit.>, where
v := [b-c, -c, b, 0]^⊤ for b > c ≥ 0.
In RPD games, each node's total reward is then defined as
the sum of its payoffs over the individual rounds of the game.
RPD strategies A node's (possibly randomized) strategy
determines its action (C or D) at each individual round,
and a node's strategy can depend on the actions of its
opponent from previous rounds.
In particular, the exact behavior of the AllC, AllD, and GTFT strategies introduced above
is defined as follows. First, AllC players play C
at each round, while AllD players play D
at each round. Second, GTFT players have a generosity parameter g,
such that in round 1 they play C with an initial cooperation probability
s_1 ∈ [0, 1), and D with probability (w.p.) 1-s_1.
In round r+1, w.p. (1-g)
the player plays the opponent's action from round r,
and w.p. g it plays C.
We denote this parameterized strategy by GTFT(g).
Throughout, we also slightly abuse notation and write GTFT
to denote the set of all parameterized strategies.
We assume s_1
is the same for all nodes.
TFT and GTFT
A classical strategy in RPD is
tit-for-tat (TFT) <cit.>,
which always repeats the opponent's previous action in the next round.
It can be shown that the TFT strategy is resistant
against invasion, and can lead to the emergence of cooperation under
suitable parameter values <cit.>.
The main drawback of TFT is its lack of robustness:
even in the two-player sequential setting, in the presence of
noise or errors where a cooperative action may be replaced by defection,
a single error makes the two players alternate between C and D,
and after two errors both players will choose to defect forever.
The key mechanism to deal with such errors is the introduction
of generosity <cit.>, which motivates the
class of GTFT strategies (as defined above) that are the focus of this paper.
§.§ Repeated games in a population
Using the game-theoretic components
introduced above, we now formalize our
exact problem setting, where
a population of nodes interact randomly in pairs
and play instances of RPD:
* We consider an (α, β) population
playing an RPD instance with reward vector v
and restart probability δ, and we consider
a set of k ≥ 2 GTFT parameter values.
Moreover, we assume a maximum generosity
parameter g_max ≤ 1 such that g ≤ g_max for all parameter values g in this set.[
Assuming such a bound on the generosity probability
of the GTFT strategy is standard in RPD settings
<cit.>.
]
*
A dynamics for updating
the parameters of the m GTFT nodes
is specified by transitions over the strategy types
of the ordered pair of interacting nodes, which are of the form:
g + s ⟶ g' + s
for GTFT parameter values g, g' and partner strategy types s ∈ {AllC, AllD, GTFT},
where the interacting nodes
are sampled uniformly at random from the population
at each time step.[
We refer to the first node in such interactions as
the initiator, and in this setting we
assume only that the initiator
ever updates its strategy following an interaction.
This type of one-way protocol is a standard
modeling assumption in the population
protocol literature, e.g.
<cit.>.
]
For each j = 1, …, k, let z^t_j denote the number of GTFT nodes in the population
with the j'th generosity parameter after the t'th total interaction of the system.
Then we call z^t = (z^t_1, …, z^t_k) the parameter count vector and ḡ^t := ∑_{j ∈ [k]} g_j · (z^t_j/m) the average generosity value
induced by the dynamics after t steps.
Then given a dynamics for this setting,
the primary objective is to characterize the stationary
and convergence properties of the processes {z^t} and {ḡ^t} that it induces.
§.§ Our contributions
Our families of dynamics
We introduce a family of RPD dynamics
for the setting above.
Given the maximum generosity parameter g_max,
and for each k ≥ 2, our k'th dynamics defines
a set of k equidistant points in the range [0, g_max].
Each GTFT node maintains a parameter from this set,
and the dynamics follows two transition types:
(a) after a GTFT node u interacts with
an AllC node or a second GTFT node, u increases its generosity parameter
to the next largest value in the set,
and (b) after a GTFT node u interacts
with an AllD node, u decreases its
generosity parameter to the next smallest value in the set.
Based on the two transition types, we call these protocols
incremental-generosity-tuning dynamics, and
we abbreviate the k-th dynamics by k-IGT.
Defined formally:
[Incremental-Generosity-Tuning (IGT) Dynamics]
definitionkigtdef
Consider an (α, β) population
and an RPD game setting with maximum generosity parameter g_max.
For any k ≥ 2, define the set of k points
^k = {g_1, …, g_k}
where each g_j = ((j-1)/(k-1)) · g_max.
Randomly initialize the parameter of
every node to some g ∈^k.
Then the k-IGT dynamics is the population
protocol that evolves for all j ∈{1, …, k}
according to the following
transitions over the strategy types of interacting nodes:
(i) g_j + AllC ⟶ g_[j+1] + AllC
(ii) g_j + GTFT ⟶ g_[j+1] + GTFT
(iii) g_j + AllD ⟶ g_[j-1] + AllD,
where [j] denotes the truncation
of j to the range [1, k].
Figure <ref> shows an example
of how the parameter value of a GTFT node is updated
depending on the strategy type of its interaction partner.
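To make the dynamics concrete, the following is a small simulation sketch of the k-IGT transitions above (illustrative Python; the strategy labels AllC/AllD/GTFT, the function names, and the random initialization of parameters are ours):

import numpy as np

def simulate_kigt(n, alpha, beta, k, steps, seed=0):
    # strategy types: 0 = AllC, 1 = AllD, 2 = GTFT; only a GTFT initiator updates its state
    rng = np.random.default_rng(seed)
    n_allc = int(round(n * alpha))
    n_alld = int(round(n * beta))
    m = n - n_allc - n_alld                          # number of GTFT nodes
    types = np.array([0] * n_allc + [1] * n_alld + [2] * m)
    idx = rng.integers(1, k + 1, size=n)             # generosity index in {1, ..., k} (used by GTFT nodes)
    for _ in range(steps):
        u, v = rng.choice(n, size=2, replace=False)  # ordered random pair; u is the initiator
        if types[u] != 2:
            continue
        if types[v] == 1:                            # partner is AllD: decrease generosity
            idx[u] = max(idx[u] - 1, 1)
        else:                                        # partner is AllC or GTFT: increase generosity
            idx[u] = min(idx[u] + 1, k)
    return idx[types == 2]                           # generosity indices of the GTFT nodes

# the actual generosity values are recovered as g_j = ((j-1)/(k-1)) * g_max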
Our results
Our main result characterizes
the stationary and mixing properties
of the k-IGT dynamics defined above.
For an (α, β) population and any k ≥ 2,
we obtain the following:
Result 1 (see Theorem <ref>):
The stationary distribution of the sequence
{^t} induced by the k-IGT dynamics
is multinomial
with parameters m and (p_1, …, p_k),
where each p_j ∝ (1/β - 1)^j-1.
Moreover, for β≠ 1/2, the mixing time
of the dynamics is bounded by O(k n log n) total interactions,
and when β = 1/2, the dependence on k changes to k^2.
Intuitively, this shows that after roughly O(k n log n) total interactions (for β ≠ 1/2), the k-IGT dynamics induces a non-trivial distribution over GTFT strategies, with interesting dependencies on β and k: as β decreases and k increases, we expect the average generosity parameter to be increasingly concentrated toward g_max, the maximum possible setting.
We obtain this result by reducing the evolution
of thek-IGT dynamics to a family of
high-dimensional, weighted Ehrenfest random walks,
which generalize the classic two-dimensional
Ehrenfest urn process <cit.>.
The stationary and mixing results we prove for these
new Ehrenfest processes may be of independent interest.
In addition, we characterize the optimality of the k-IGT dynamics in the following mean-field regime: we consider the expected payoff of a GTFT node playing against a randomly chosen opponent from the (α, β) population, assuming that all GTFT nodes play using an average parameter g. Then for the average generosity parameter value ḡ_k under the stationary distribution of the k-IGT dynamics,
and for the class of donation game payoff matrices,
we show the following optimality gap:
Result 2 (see Theorem <ref>):
Letting g^⋆ denote the generosity parameter
that maximizes the expected payoff
in the mean-field regime,
then |g^⋆ - ḡ_k| ≤ O(1/k)
when the ratio β / (1-α-β) is
bounded above by a small constant fraction.
This again highlights an interesting tradeoff between the size of the parameter space k and the optimality of the average generosity under the stationary distribution of the dynamics. The formal statement in Theorem <ref> more precisely characterizes the constraint on β/(1-α-β) needed for the convergence to hold,
and an interesting open question
is to determine whether there exists a family of dynamics that
converges to optimal under all regimes of this ratio.
§.§ Related work
Population protocol dynamics
The population model was originally introduced by
Angluin et al. <cit.> to
model computation in populations of passively mobile
agents (such as sensor networks or animal populations),
and has since found several other applications,
from chemical reaction networks <cit.>
to computing via synthetic DNA strands <cit.>.
On the theoretical side, an impressive amount of effort has
been invested in understanding the computational power of the
model, e.g. <cit.>, on analyzing fundamental
dynamics such as rumor spreading and averaging <cit.>, and on the complexity of core algorithmic tasks such
as majority (consensus) <cit.> and leader election <cit.>
in this model.
The latter direction has recently lead to tight bounds on the space
and task complexity of these tasks <cit.>.
In this context, our contributions are to design and analyze
a novel class of repeated game dynamics that lead to
interesting time and space tradeoffs.
Evolutionary game dynamics There is a huge literature on evolutionary game dynamics.
We briefly mention some key results and the relationship to our work.
The first approach is to consider evolutionary dynamics in an
infinite population with the aid of
differential equations (aka, the replicator dynamics)
<cit.>,
and the goal is to study the existence
and stability of equilibrium points.
The second approach is to consider evolutionary dynamics
in a finite population with a class of
strategies e.g., reactive strategies or memory-1 strategies.
In these approaches, the class of strategies
is uncountable and simulation results suggest which strategies successfully evolve in the simulation
of evolutionary dynamics <cit.>.
The third approach is to consider evolutionary dynamics on networks
but with only two strategy types
(and) <cit.>.
In contrast to the present paper, none of these works
focus on quantitative aspects related to the mixing time
(or convergence time) to the stationary behavior.
Other multi-player game settings
There is a large body of literature
on a multi-player game setting <cit.>,
in whichnplayers simultaneously
choose actions at each round, and receive
a payoff according to a reward function
that depends on the actions of all other players.
In this setting, extensive work has
been devoted to designing
local strategies that provably converge to
equilibria (over the space of all simultaneous player actions),
and to determine their corresponding rates of convergence (e.g.,
<cit.>).
This is in contrast to the setting of the present work,
where only a single random pair of nodes
interacts at each round, and thus the results
are not directly comparable.
An orthogonal line of work to ours previously
investigated game theoretic aspects of population
protocols <cit.>,
but the focus in these works is on understanding
the computational power of interaction rules
that correspond to symmetric games.
§ TECHNICAL OVERVIEW OF RESULTS
§.§ Preliminaries
Notation
We use the shorthand notation [k] = {1, …, k} for any k ≥ 0, and we define the set Δ^m_k :=
{ (x_1, …, x_k) ∈ ℕ^k :
∑_{j ∈ [k]} x_j = m }.
For non-negative integers x, a, and b,
we write [x]_a^b to denote x truncated to the range [a, b].
For readability, when a and b are clear from context,
we will (by slight abuse of notation) simply write [x].
Markov chains
We consider discrete-time Markov chains {X^t} over a discrete space Ω,
with transition matrix P: Ω × Ω → [0, 1].
Recall that π: Ω → [0, 1] is a
stationary distribution of {X^t} if πP = π (where we interpret the
probability mass function (PMF) of π as a row vector).
Recall also that any distribution π: Ω → [0, 1] satisfying the detailed balance equations π(x) P(x, y) = π(y) P(y, x) for all x, y ∈ Ω is a stationary distribution for the process.
Starting from X^1 := x for any x ∈ Ω,
we let P^t(x) denote the distribution of X^t (i.e., of the process after t steps), and
we write d(t) := max_{x ∈ Ω} ‖P^t(x) - π‖_TV to denote the distance to stationarity (in total variation)
of the process after t steps, maximized over all initial states.
Then we define the mixing time t_mix of {X^t} as t_mix := min{ t ≥ 0 : d(t) ≤ 1/4}.
We refer the reader to the text of
Levin and Peres <cit.>
for more background and preliminaries on Markov chains
and mixing times.
Multinomial distributions
We recall basic facts about multinomial distributions.
For m ≥ 1, k ≥ 2, and a sequence (p_1, …, p_k) such that p_1 + … + p_k = 1,
a distribution ν is
multinomial with parameters m and (p_1, …, p_k)
if the PMF of ν is given by
ν(x) = p_1^x_1 · p_2^x_2 ⋯ p_k^x_k · m!/(x_1! · x_2! ⋯ x_k!)
for all x = (x_1, …, x_k) ∈ Δ^m_k,
where m!/(x_1! · x_2! ⋯ x_k!) is the multinomial coefficient.
Writing ν = (ν_1, …, ν_k), it is known
that E[ν_j] = m · p_j for all j ∈ [k].
When k = 2, then ν is a binomial distribution,
and we can simply say that ν is
binomial with parameters m and p_1.
§.§ Stationary and mixing properties of k-IGT dynamics
§.§.§ Analysis setup
Our main result (Theorem <ref>) characterizes
the stationary and mixing properties of the distribution ofstrategies under thek-IGT dynamics.
For this, letm := n·(1-α-β)denote the
number ofnodes in the population, and fixk ≥2.
Recall we define^t := (z^t_1, …, z^t_k) ∈Δ^m_kas the count vector specifying the number of nodes with
strategyg_iafter thet'th step,
and we study the Markov chain{^t}.
For this, we begin by specifying how the
transitions of thek-IGT dynamics map to
the transitions^t →^t+1:
recall from Definition <ref> that
following any (non-null) interaction, exactly onenode updates its parameter.
Then conditioned on an interaction at steptwhose initiator has strategyg_jfor somej ∈[k],
then the coordinates of^t+1can be specified by one of the following cases,
depending on the strategy of the sampled interaction partner:
2
*
If the second player has strategy or ,
then for each j ∈ [k]:
z^t+1_i =
z^t_i - 1 if i = j and j < k
z^t_i + 1 if i = j+1 and j < k
z^t_i otherwise.
*
If the second player has strategy ,
then for each coordinate i ∈ [k]:
z^t+1_i =
z^t_i - 1 if i = j and j > 1
z^t_i + 1 if i = j-1 and j > 1
z^t_i otherwise.
Given that the pair of interacting nodes are sampled
uniformly at random at each time step, this implies
that the update in (a) occurs with (unconditional) probability(z^t_j/n) ·(1-β), and the update in (b)
occurs with probability(z^t_j/n) ·β.
Then given^t, we can summarize
all transitions^t →^t+1(for^t+1 ≠^t) that occur with non-zero probability
as follows: for allj ∈[k-1],
z^t+1 =
(z^t_1, …, z^t_j - 1, z^t_j+1 + 1, …, z^t_k)
w.p. (z^t_j/m) · (1-α-β)(1-β)
and z^t+1 =
(z^t_1, …, z^t_j + 1, z^t_j+1 - 1, …, z^t_k)
w.p. (z^t_j+1/m) · (1-α-β)β.
Observe that transition probabilities in (<ref>)
are normalized bym(using the definitionn = m / (1-α-β)),
and that the coefficients(1-α-β)(1-β)and(1-α-β)(β)are absolute constants
with respect to the coordinates of^t.
Thus we can view the process as
a special case of a more general class of Markov chains{^t}overΔ^m_k, whose transition
probabilities (up to the absolute constant coefficients)
are of the form in expression (<ref>).
We proceed to define and analyze this more general
set of processes, from which characterizing the
stationary and mixing properties of{^t}will follow.
§.§.§ High-dimensional, weighted Ehrenfest processes
We introduce and analyze a more general class
of random walks onΔ^m_k,
which we refer to as
high-dimensional, weighted Ehrenfest processes.
Defined formally:
[(k, a, b ,m)-Ehrenfest Process]
definitionehrenfestkd
Fix k ≥ 2, and a, b > 0 such that a+b ≤ 1.
Let {^t} be the Markov chain on Δ^m_k
with transition matrix : Δ^m_k →Δ^m_k,
where for all j ∈ [k-1] and
= (x_1, …, x_k) ∈Δ^m_k:
(, (x_1, …, x_j -1, x_j+1 + 1, …, x_k))
=
p^j, j+1_ :=
a ·x_j/m
(, (x_1, …, x_j + 1, x_j+1 - 1, …, x_k))
=
p^j+1, j_ :=
b ·x_j+1/m
(, )
=
p^_ :=
1 - (
∑_j=1^k-1
p^j, j+1_ + p^j+1, j_) ,
and (, ) = 0 for all other ∈Δ^m_k.
Then we call {^t} the (k, a, b, m)-Ehrenfest process.
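For intuition, the process can be simulated directly through its per-ball representation (each of the m balls carries a level in {1, …, k}, and sampling a ball uniformly corresponds to sampling an urn proportionally to its load); the following sketch is illustrative and not part of the formal analysis:

import numpy as np

def ehrenfest_counts(k, a, b, m, steps, seed=0):
    # simulate the (k, a, b, m)-Ehrenfest process; returns the final count vector in Delta^m_k
    assert a > 0 and b > 0 and a + b <= 1
    rng = np.random.default_rng(seed)
    levels = rng.integers(1, k + 1, size=m)
    for _ in range(steps):
        i = rng.integers(m)                        # pick a ball uniformly at random
        u = rng.random()
        if u < a:
            levels[i] = min(levels[i] + 1, k)      # move up one urn w.p. a (truncated at k)
        elif u < a + b:
            levels[i] = max(levels[i] - 1, 1)      # move down one urn w.p. b (truncated at 1)
    return np.bincount(levels, minlength=k + 1)[1:]   # counts of levels 1..k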
Relationship to the two-urn Ehrenfest Process
When k = 2 and a = b = 1/2, the process
reduces to the classical Ehrenfest Urn Process <cit.>
from statistical physics.
Here, m balls are distributed in two urns.
At each step, an urn is sampled proportionally to its load,
and with probability half, a ball from the sampled urn
is placed into the other urn.
The (k, a, b, m)-Ehrenfest process generalizes
this original setting to a weighted, high-dimensional regime:
we consider m balls distributed over a sequence of k urns,
and after sampling the j'th urn proportionally to its load,
a ball from urn j is placed into urn [j+1] with probability a,
and into urn [j-1] with probability b.
While the stationary and mixing behavior of
two-urn process (including several weighted variants)
is well-studied
<cit.>,
we give the first such analyses for the weighted,
high-dimensional analogs from Definition <ref>.
Deriving the stationary distributions
We exactly characterize the stationary distributions of (k, a, b, m)-Ehrenfest processes:
we show these distributions
are multinomial with
parameters m and (p_1, …, p_k), where
each p_j ∝ (a/b)^(j-1).
Fork=2and 3, this is obtained by
viewing the process as a weighted random walk
on a graph with vertex setΔ^m_k,
and by solving the recurrences stemming from
the detailed balance equations
(these calculations are derived formally in
Sections <ref>
and <ref>).
For higher dimensions (i.e., general k),
we use the form of the stationary PMFs for k = 2 and 3
as an Ansatz for specifying and verifying (via the
detailed balance equations) the
stationary PMF.
Stated formally, we prove the following result,
the proof of which is given in
Section <ref>.
theoremehrenfestkdstationary
Fix a, b > 0 with a+b ≤ 1, and let λ := a/b.
For any k, m ≥ 2,
let {^t} be the (k, a, b, m)-Ehrenfest process,
and let : Δ^m_k → [0, 1]
be its stationary distribution.
Then is multinomial with parameters
m and (p_1, …, p_k), where
p_j :=
λ^(j-1)/∑_i=1^k λ^(i-1)
for all j ∈ [k].
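As a quick sanity check of the theorem, one can compare a long simulated run against the multinomial means m · p_j; a small sketch (reusing ehrenfest_counts from the earlier sketch, with illustrative parameter values):

import numpy as np

def stationary_weights(k, a, b):
    # multinomial parameters p_j proportional to (a/b)^(j-1), as in the theorem
    w = (a / b) ** np.arange(k)
    return w / w.sum()

# example check (values are illustrative):
# k, a, b, m = 4, 0.6, 0.3, 200
# print(ehrenfest_counts(k, a, b, m, steps=200_000, seed=1))
# print(m * stationary_weights(k, a, b))   # stationary means m * p_j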
Bounds on mixing times
Let t_mix and d(t) denote the
mixing time and distance to stationarity
(as defined in Section <ref>)
of the (k, a, b, m)-Ehrenfest process.
To derive bounds on t_mix,
we introduce the following coupling:
first, let{X_t}and{Y_t}be random walks overΩ= {1, …, k}^m.
At timet, we sample a coordinatei ∈[m]uniformly at random,
and simultaneously increment or decrement thei'th coordinate
of bothX_tandY_t(with values truncated to[k])
with probabilityaandb, respectively.
It is straightforward to see that
the vector of counts of each valuej ∈[k]in bothX_tandY_tevolve as(k, a, b, m)-Ehrenfest processes. Then using the
standard relationship between the coupling
time of{(X_t, Y_t)}(i.e., the first timetwhenX_t = Y_t) and
mixing times <cit.>,
it suffices to probabilistically bound the
coupling time of the joint process to derive
the following bound on t_mix:
theoremehrenfestkdmixing
Fix a, b > 0 with a+b ≤ 1,
and k, m ≥ 2.
Let t_mix be the mixing time of
the (k, a, b, m)-Ehrenfest process.
Then
t_mix = O(min{k/|a-b|, k^2} · m log m) when a ≠ b, and
t_mix = O(k^2 · m log m) when a = b.
We bound the coupling time
of (X_t, Y_t) by estimating the time to coalesce
each of the m coordinates of the process, and this
reduces to bounding the expected absorption times
of m independent (possibly) biased random walks on {-k, …, k} (which necessitates the
case distinction between a ≠ b and a = b).
The full coupling details and proof of the theorem are developed in
Section <ref>,
which requires a more careful coupling analysis
compared to the original two-urn process.
Note also thatO(m logm)is a known
lower bound on the mixing time of the original
two-urn process <cit.>,
which means the dependence onmin Theorem <ref>
is also tight in general.
We suspect the linear dependence onkis also optimal,
but we leave obtaining such a lower bound as an open problem.
Note also that the original two-urn process
is known to exhibit a cut-off phenomenon
<cit.>,
in which the distance to stationarity of the process
sharply decays precisely at around1/2 ·n lognsteps
<cit.>.
Investigating this phenomenon for the general(k, a,b, m)process (and obtaining such exact cutoff constants
in terms ofaandb)
is an interesting line of future work.
§.§.§ Combining the pieces
By combining the arguments of Sections <ref>
and <ref>,
we can reduce the analysis of thek-IGT dynamics
to that of ak-dimensional, weighted Ehrenfest process and
formally state our main results.
Specifically, based on the transition probabilities
in (<ref>), for any k ≥ 2,
the sequence {z^t} induced by the k-IGT dynamics is
a (k, a, b, m)-Ehrenfest process, where a := (1-α-β)(1-β), b := (1-α-β)β,
and m = (1-α-β) n.
Then as a direct consequence of Theorems <ref>
and <ref>, we have the following
main result characterizing the stationary and convergence
behavior of thek-IGT dynamics:
Fix k ≥ 2, and consider the vectors {^t}
induced by the k-IGT dynamics
on an (α, β) population and an RPD game setting
with maximum generosity parameter g.
Then {^t} converges
to a multinomial stationary distribution with parameters
m and (p_1, …, p_k), where
p_j =
(1/β- 1)^(j-1)/∑_i=1^k (1/β-1)^(i-1)
for each j ∈ [k].
Moreover the mixing time of {^t} is bounded by
O(min{k/|1-2β|, k^2}· n log n )
when β≠1/2,
and by O(k^2 · n log n ) when β = 1/2.
Observe that the mixing time of the process
speeds up in regimes whereβis bounded away
from half (i.e., the number ofnodes is sufficiently small or sufficiently large).
Similarly, the mean of the stationary
distribution[] = ([π_1], …, [π_k])grows increasingly less uniform over[k]in this regime ofβ.
In particular, given thatis a multinomial
distribution, it follows that[π_j] = m ·p_jfor eachj ∈[k].
Thus forβ< 1/2, we expect
the largest generosity parameterg_k ∈to have the greatest adoption amongnodes
aftermany steps, and this mass
increases asβgrows smaller, and
as the size of the parameter spacekincreases.
Average stationarity generosity
Given the set of generosity values^k = {g_1, …, g_k}and any= (z_1, …, z_k) ∈Δ^m_k,
we define the average generosity value
specified byas1/m ∑_j∈[k] g_j ·z_j.
Then the stationary distributionfrom Theorem <ref> allows us to
derive an average stationary generosity
value_kfor thek-IGT dynamics, which we define
as the average generosity value with respect to:= [].
We derive this value for allk ≥2in
the following proposition:
propositionaveragegen
Fix k ≥ 2, and let π = (π_1, …, π_k)
denote the stationary distribution of the
k-IGT dynamics from Theorem <ref>
on an (α, β) population with
maximum generosity parameter g_max.
Let ^k be the set of parameter values from Definition <ref>.
Define ḡ_k := (1/m) ∑_{j ∈ [k]} g_j · E[π_j]
as the average stationary generosity of the dynamics,
and let λ := (1-β)/β.
Then ḡ_k = g_max/2 for β = 1/2, and
ḡ_k = g_max · ( λ^k/(λ^k - 1) - (1/(k-1)) · (λ/(λ-1)) · ((λ^(k-1) - 1)/(λ^k - 1)) )
for β ≠ 1/2.
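The closed form in the proposition is easy to evaluate numerically; the sketch below computes ḡ_k both from the formula and directly from the stationary weights p_j ∝ λ^(j-1), which should agree (an illustrative check, not part of the proof):

import numpy as np

def avg_stationary_generosity(k, beta, g_max):
    # closed form of the proposition
    if np.isclose(beta, 0.5):
        return g_max / 2.0
    lam = (1.0 - beta) / beta
    return g_max * (lam**k / (lam**k - 1)
                    - (1.0 / (k - 1)) * (lam / (lam - 1)) * ((lam**(k - 1) - 1) / (lam**k - 1)))

def avg_from_weights(k, beta, g_max):
    # direct average: sum_j g_j * p_j with p_j proportional to lambda^(j-1)
    lam = (1.0 - beta) / beta
    p = lam ** np.arange(k)
    p = p / p.sum()
    g = g_max * np.arange(k) / (k - 1)
    return float(np.dot(g, p))

# e.g. avg_stationary_generosity(10, 0.2, 0.3) and avg_from_weights(10, 0.2, 0.3) agree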
Roughly speaking, Proposition <ref> shows that ḡ_k ≈ g_max · (1 - β/((1-2β)k)) when β is bounded below 1/2, and ḡ_k ≈ g_max · (1-β)/((2β-1)k) when β is bounded above 1/2. Thus when the fraction of AllD nodes is sufficiently small, the average stationary generosity approaches the maximum generosity parameter g_max at a rate of O(1/k), and it approaches 0 at this same rate otherwise.
This again highlights the tradeoffs between
the size of the parameter spacek, and the resulting
levels of generosity induced by the dynamics.
The proof of the proposition is given in Section <ref>.
§.§ Characterizing the optimality of k-IGT dynamics
§.§.§ Expected payoffs in repeated PD games
We characterize several game-theoretic properties
of thek-IGT dynamics.
First, for a pair of strategies σ_1, σ_2 ∈ {AllC, AllD, GTFT},
let f_{σ_1 | σ_2} denote the expected payoff
in an RPD game (over the randomness of the strategies and repeated rounds)
for a node with strategy σ_1 against an opponent with
strategy σ_2. In Section <ref>,
we define f more precisely,
and we compute f_{GTFT(g) | σ} for GTFT nodes
against opponents with strategy σ.
In particular, each f_{GTFT(g) | σ} requires specifying the Markov chain over the
repeated game states (which changes with σ),
and then tracking how the distribution over
the GTFT node's single-round payoffs evolves over rounds.
Bridging k-IGT and introspection dynamics
Under mild constraints on the reward vector v
and the maximum generosity parameter g_max,
we show that the transition rules
of the k-IGT dynamics are
locally optimal in the following sense:
under any transition rule of Definition <ref>,
the expected payoff of a GTFT node with parameter g
will never decrease had the node
used the updated parameter value
specified by the transition rules against
its previous opponent.
In particular, this bridges the relationship
between the k-IGT dynamics and
the classic concept of introspection dynamics
with local search
from evolutionary games <cit.>,
where an individual explores the local neighborhood
of its strategy space to
adopt a new strategy that would have performed better.
Formally, we prove:
propositionparams
Consider an RPD game setting consisting of
(a) a reward vector
v = [R, T, S, P]^⊤ with R+P ≤ T+S,
(b) a restart probability
δ > (T-R)/(R-S),
(c) a maximum generosity parameter
g_max < 1 - (T-R)/(δ(R - S)),
and (d) an initial cooperation
probability s_1 ∈ [0, 1).
Then for all g, g' ∈ [0, g_max]
such that g < g',
the following three statements hold:
(i) f_{g | g''} < f_{g' | g''} for all g'' ∈ [0, g_max]
(ii) f_{g | AllC} ≤ f_{g' | AllC}
(iii) f_{g | AllD} > f_{g' | AllD}.
Note that statement (ii) of the proposition
implies that only the expected payoff f
for GTFT nodes playing strategy g
against AllC is non-decreasing with g,
whereas statements (i) and (iii) give
strictly increasing inequalities with respect to g.
We remark that the k-IGT transitions could thus be adjusted
such that GTFT nodes only increase their parameter
following an interaction with a second GTFT node:
this would ensure strictly increasing
expected payoff relationships for each transition type,
but at the expense of lower average stationary generosity values
(in the sense of Proposition <ref>,
and depending on the relative fractions of the strategy types).
Observe also that
condition (a) in the proposition
is always satisfied by donation game
reward vectors.
§.§.§ Optimality of the average stationary
generosity in the mean-field setting
Using the formulations of the previous section,
we consider the expectation of f_{GTFT(g) | σ} when the opposing
strategy σ is drawn from
an (α, β) population
(and thus the randomness is over both the rounds of the RPD game,
and the interaction sampling in the population).
In particular, using the notion of the average generosity
from Section <ref>,
we consider a mean-field approximation
(similar in spirit to, e.g., <cit.>),
in which
all GTFT nodes have the average
generosity parameter value.
For this, recall that m = n(1-α-β).
We will write g in place of GTFT(g) in the subscripts of f,
and for any average generosity value g ∈ [0, g_max],
we define
F(g, α, β) := α · f_{g | AllC} + β · f_{g | AllD} + (m/n) · f_{g | g}.
Here, F(g, α, β) exactly captures the expected
RPD payoff of a GTFT node in an (α, β) population,
and in this mean-field approximation with
average generosity g: at its next interaction,
a GTFT node with parameter g
plays against an AllC or AllD node with
probability α or β, respectively,
and against a second GTFT node with the
average parameter g
with probability m/n.
Let g^⋆ := argmax_{g ∈ [0, g_max]} F(g, α, β)
denote the average generosity value
that maximizes this expected payoff.
Then letting ḡ_k denote the
average stationary generosity value
of the k-IGT dynamics from Proposition <ref>,
we characterize the convergence of
ḡ_k → g^⋆ with k for
the special class of donation game payoff matrices
in the following theorem:
theoremmeanfielderror
Consider an (α, β) population,
and a donation game RPD setting with
reward vector v = [b-c, -c, b, 0]^⊤,
a maximum generosity parameter g_max
satisfying Proposition <ref>,
and an initial cooperation probability s_1 = 1/2.
For any k ≥ 2, let ḡ_k
denote the average stationary generosity value
of the k-IGT dynamics
as in Proposition <ref>.
Define g^⋆ := argmax_{g ∈ [0, g_max]} F(g, α, β),
and let ϕ := β n/m.
Then for λ := (1-β)/β > 1
and
ϕ ≤ (b-c)(1-δ) / (2c(1-δ(1-g_max))^2),
we have
| g^⋆ - ḡ_k | ≤ β/((1-2β)(k-1)).
Roughly speaking, the theorem shows that
when the ratio ϕ of AllD nodes to GTFT nodes
is sufficiently small,
and assuming the k-IGT dynamics has converged
to its stationary distribution,
then the average generosity value ḡ_k
will be close to optimal
(i.e., close to the maximizing parameter value g^⋆),
and that this gap vanishes
at a rate of roughly O(1/k).
The proof of the theorem (developed in Section <ref>)
first uses convexity arguments to characterize
the maximizer of F(g, α, β)
with respect to ϕ,
and then uses the average stationary generosity
calculation from Proposition <ref>,
which translates into a convergence rate between
_k and g.
We note also that the
same convergence rate holds
for other constant settings of s_1,
but under slightly different
constants constraining ϕ.
Remarks on the formulation of F(g, α, β)
Note that in the formulation of
the expected payoff F(g, α, β)
from expression (<ref>), our
use of the term “mean-field” in specifying F
is done mainly in an informal sense.
In the related (but distinct) setting of
mean-field games <cit.>,
one approximates the global behavior of
n locally interacting nodes by assuming
that an individual node plays against the averaged
behavior of the population, i.e., by
assuming the opponent is a representative
node drawn from some approximate, global distribution over strategies.
Note that we could also imagine formulating
an alternative version of such an expected payoff
by considering the more granular distribution
of nodes over the parameters ^k
that is specified by the stationary distribution of the
k-IGT dynamics.
However, the resulting calculation
expressing this expected payoff is difficult to control
analytically,
which arises from the complex nature of the
function f(_1, _2) when both _1, _2
are strategies with non-equal parameters.
Nevertheless, we provide several numerical simulations
in Section <ref> suggesting that
the values of F using the mean-field assumptions
are similar to the corresponding values
of the more granular computation, even for small m.
§ DISCUSSION
In this work, we introduced a family of dynamics for
repeated prisoner's dilemma games in the population protocol
model, and we characterized their stationary and mixing
properties by analyzing a new class of high-dimensional
Ehrenfest processes.
Our work opens the door for several future directions:
first at a broad level, it would be interesting to study
dynamics in this setting for other common repeated games
from evolutionary dynamics, such as
stag hunt and hawk dove <cit.>,
and to investigate the resulting time-space tradeoffs.
At a more technical level, it remains open to derive
lower bounds that establish the optimal dependence on k in
the mixing times of the (k, a, b, m)-Ehrenfest process.
Moreover, studying the cutoff behavior of these processes
(and establishing the exact cutoff constants in terms of k, a, b)
is left as open.
In the following sections, we provide more technical
details on our results.
§ DETAILS ON HIGH-DIMENSIONAL, WEIGHTED EHRENFEST PROCESSES
In this section, we provide more
details on the (k, a, b, m)-Ehrenfest processes
introduced in Section <ref>.
In particular, in
Sections <ref>
and Sections <ref>
we derive the stationary distributions
for the process when k=2 and k=3, respectively.
We use the form of the PMFs of these distributions
to derive expressions for the PMFs
for larger (general) k, and
we verify their stationarity in the proof of
Theorem <ref> in
Section <ref>.
In Section <ref>, we
develop the proof of Theorem <ref>,
which gives a bound on the mixing time of the process,
which is based on the coupling introduced in
Section <ref>.
For convenience, we restate the definition of the process here:
*
§.§ Deriving the stationary distribution for k=2
In this section we derive the PMF of the
stationary distribution of the (2, a, b, m)-Ehrenfest process,
for which we use the following, more convenient notation:
Setup and notation
Let {^t} denote the process, and recall
each ^t ∈Δ^m_k. Let : Δ^m_2 → [0, 1]
denote its stationary distribution.
Based on the transitions from Definition <ref>,
it is easy to see that {^t} is irreducible
and aperiodic, and thus is the unique
stationary distribution of the process.
In order to specify the PMF of , we
restrict our view of the process {x^t}
to its first coordinate i ∈ := {0, 1, …, m}.
In particular, we define the function
π: → [0, 1] such that
π(i) = ((i, m-i)) for all i ∈.
In other words, we project the set Δ^m_2
onto the line = {0, …, m}.
We can then define the transition matix
P: ×→ [0, 1] over pairs of
points in that is induced by
the matrix for the process {^t}.
Thus for any x, y ∈, the
entries of P are given by:
P(x, y)
= if
y = [x+1]:
b m-xm
if
y = [x-1]:
a xm
if
y = x:
1 - b + (b-a)xm
otherwise:
0
.
Thus matrix P specifies the
transition probabilities over the number line
for the first coordinate of the process {^t}
(and this entirely specifies the process
given that each ^t ∈Δ^m_2).
Then in the following proposition,
we derive the PMF of π by solving the
system of equations arising from the detailed
balance equations. The PMF of is
then recovered immediately based on
the one-to-one relationship between π and .
Fix a, b > 0 with a + b ≤ 1.
Let P: ×→ [0, 1]
denote the transition matrix induced
by the (2, a, b, m)-Ehrenfest process
(as defined in expression (<ref>))
and let π: → [0, 1]
be the stationary distribution of P.
Letting λ := a/b, we have for all i ∈ {0, 1, …, m}:
π(i) = λ^(m-i)/(1+λ)^m · m!/(i!(m-i)!).
We solve for the π based on the
system of equations stemming from the detailed
balance equations
π(x) · P(x, y)
= π(y) · P(y, x)
,
which must hold for all x, y ∈.
Then for any i∈, we can recursively
substitute the equations along the path
i, i-1, …, 1, 0 to find
π(i)
= π(0) ·
P(0, 1) · P(1, 2) ⋯ P(i-1, i)
/
P(i, i-1)· P(i-1, i-2) ⋯ P(1, 0)
.
It then follows from the entries of P
defined in expression (<ref>) that
we can write
π(i)
= π(0)
·
bm · b(m-1) ⋯ b(m-i+1)
/
ai · a(i-1) ⋯ a
= π(0)
·(b/a)^i ·m!/(m-i)! · i! .
Given that ∑_i ∈π(i) = 1
(as π is a distribution),
and recalling that λ = a/b, we then have
1 = ∑_i=0^m
π(0)
·(1/λ)^i
·m!/(m-i)! · i! = π(0)
·(
1 + 1/λ)^m
= π(0)
·(1+λ/λ)^m
,
where the second equality is due to the binomial theorem.
It follows that
π (0) = λ^m / (1 + λ)^m,
and substituting this into
expression (<ref>) for each
i ∈ yields the statement of the proposition.
Given the relationship between π and ,
it follows that for any = (x_1, x_2) ∈Δ^m_k:
()
= λ^x_2/(1+λ)^m·mx_1 .
In other words, the stationary distribution
of the (2, a, b, m)-Ehrenfest process is
a binomial distribution with parameters
m and p = 1/(1 + λ).
§.§ Deriving the stationary distribution for k=3
We now derive the stationary distribution
for the (3, a, b, m)-Ehrenfest process.
Our approach is similar in spirit to
the k=2 case, but we require a more careful
technique in order to solve for the stationary PMF
in this higher-dimensional regime.
For this, we begin by introducing a more
convenient notation (similar to the notation
used for the k=2 case).
Setup and notation
Let {^t} denote the (3, a, b, m)-Ehrenfest process,
and recall that its stationary distribution is defined
over the space Δ^m_3.
It is again easy to see that the Markov chain {^t} is irreducible,
and thus is its unique stationary distribution.
Similar to the k=2, when deriving the PMF for ,
it is more convenient to work over the (equivalent) set
of points that are defined explicitly in a (k-1)-dimensional space.
:= {ij : 0 ≤ i, j ≤ m and i+j ≤ m
} ,
where we use the one-to-one mapping
ij↦ (i, j, m-i-j) ∈Δ^m_3
for all ij∈.
Roughly speaking, we can view
as simply the set Δ^m_3 embedded onto
the two-dimensional plane.
Then we define π : → [0, 1]
as the PMF over such that
for all (i, j, k) ∈Δ^m_3,
we have πij = ((i, j, k)).
Moreover, we let P: ×→ [0, 1] denote
the probability transition function over pairs of
points in that is induced by the transition matrix
: Δ^m_3 ×Δ^m_3 → [0, 1] of
the process {^t}. Then for any (x_1, x_2), (y_1, y_2) ∈,
the entries of P can be summarized as
P(
x_1x_2,
y_1y_2)
= if y_1y_2 = [x_1-1][x_2 + 1]:
ax_1m
if y_1y_2 = [x_1 + 1][x_2 - 1]:
bx_2m
if y_1y_2 = x_1[x_2 - 1]:
ax_2m
if y_1y_2 = x_1[x_2 + 1]:
b(1- x_1 + x_2m)
if y_1y_2 = x_1x_2:
1- b(1- x_1m)
- a x_1 + x_2m
otherwise:
0
.
For concreteness, Figure <ref>
shows an example of the space
and the transitions with non-zero probability under P
when k=3 and m=3.
We then prove the following proposition
specifying the PMF of π:
Fix a, b > 0 with a + b ≤ 1,
let P: ×→ [0, 1] denote
the transition matrix induced by the
(3, a, b, m)-Ehrenfest process (as defined
in expression (<ref>)),
and let π: → [0, 1] be stationary distribution
of P. Then letting λ := a/b, we have
πij = λ^2(m-i-j)λ^j/(1+ λ + λ^2)^m·mi, j, (m-i-j)
for all x = ij∈.
It suffices to solve for π from the system of equations
arising from the detailed balance equations:
πx_1y_1·x_1y_1x_2y_2 = πx_2y_2·x_2y_2x_1y_1 ,
which must hold for all pairs (x_1,y_1), (x_2, y_2) ∈.
Using the recursive structure of the transition
probabilities P defined in expression (<ref>),
we can express each πij in terms
of π0m for all ij∈
using the following formulations:
* For any j ∈{0, …, m}:
π0j = π0m·0m0m-1·0m-10m-2⋯0j+10j/0j0j+1·0j+10j+2⋯0m-10m .
* For any i ∈{1, …, m} and j ∈{0, …, m}:
πij = π0j+i·0j+i1j+i-1·1j+i-12j+i-2⋯i-1j+1ij/iji-1j+1·i-1j+1i-2j+2⋯1j+1-i0j+i ,
from which it follows that πij
can be expressed in terms of π0m
using the expression for π0i+j from (a).
Note that by viewing the transitions specified by P
as directed edges over the vertex set
(i.e., as in the example of Figure <ref>)
the formulation in (a) follows by
recursively substituting the expressions for
π0j from the detailed balance equation along
the path from 0j to 0m using
only vertical edges.
Similarly, the formulation in (b) follows by considering
such paths from πij to π0i+j
using only diagonal edges.
Then using the entries of P defined in (<ref>),
we find for part (a) that, for any j ∈{0, …, m}:
π0j = π0m·
am
·
a(m-1)
⋯
a(j+1)
/
b(m-j)
·
b(m-(j+1))
⋯
b
= π0m·(a/b)^m-j·m!/j! · (m-j)! .
Similarly, for part (b), we find for any i ∈{1, …, m}
and j ∈{0, …, m}:
πij = π0j+i·
b(j+i)
·
b(j+i-1)
⋯
b(j+1)
/
ai
·
a(i-1)
⋯
a
= π0j+i·(b/a)^i·(j+i)!/j! · i! .
Then substituting the expression for π0j+i
from (<ref>), we can further simplify and write
πij = π0m·(a/b)^m-(i+j)·m!/(j+i)! · (m-(i+j))!·(b/a)^i·(j+i)!/j! · i!
= π0m·(a/b)^m-2i-j·m!/(m-i-j)! · i! · j! .
Now using the fact that π is a distribution
and thus ∑_x ∈π(x) = 1,
and recalling that λ = (a/b), we can
use the expressions (<ref>)
and (<ref>) to write:
1
= π0m
+
∑_j=0^m-1(
π0j
+
∑_i=1: i+j ≤ m^m
πij)
= π0m
+
∑_j=0^m-1(
π0m·λ^m-j·m!j!· (m-j)!
+
∑_i=1: i+j ≤ m^m
π0m·λ^m-2i-j·m!(m-i-j)!· i! · j!)
= ∑_(i,j) ∈π0m·λ^m-2i-j·m!(m-i-j)! · i! · j! .
Now observe by the multinomial theorem that we can write
∑_(i,j) ∈ λ^m-2i-j·m!/(m-i-j)! · i! · j! = (
1 + λ + λ^2/λ)^m
.
Then it follows from expression (<ref>) that
π0m = λ^m/(1+ λ + λ^2)^m .
Finally, substituting this expression for π0m
back into equations (<ref>)
and (<ref>) then
specifies the mass πij for general ij∈,
which concludes the proof.
Using the relationship between π and mentioned
earlier, it follows immediately from
Proposition <ref>
that for the (3, a, b, m)-Ehrenfest process,
the PMF of is specified by
()
= λ^2x_3 + x_2/(1 + λ + λ^2)^m·mx_1, x_2, x_3
for all = (x_1, x_2, x_3) ∈Δ^m_k,
where λ = (a/b).
In other words, is a
multinomial distribution with parameters m and
(p_1, p_2, p_3), where
p_1 := 1/1+ λ + λ^2 ,
p_2 := λ/1+ λ + λ^2 ,
and
p_3 := λ^2/1+ λ + λ^2 .
§.§ Verifying the stationary distribution for general k
We use the stationary PMFs for the (2, a, b, m) and
(3, a, b, m)-Ehrenfest processes found in
Sections <ref>
and <ref> as
an Ansatz for specifying the PMF of the stationary
distribution for general k. We can then prove
stationarity by verifying this PMF satisfies the
the detailed balance equations.
In particular, we show that the PMF of
for the (k, a, b, m)-Ehrenfest process is of the form:
()
= λ^(k-1)x_k + (k-2)x_k-1+ … + x_2/
(1 + λ + λ^2 + … + λ^k-1)^m
·mx_1, x_2, …, x_k
for all = (x_1, …, x_k) ∈Δ^k_m,
and where λ = a/b.
This culiminates in Theorem <ref>
(introduced in Section <ref>),
which we restate here for convenience:
*
We verify that satisfies the detailed balance equations
π() · P(, )
= π() · P(, )
for all , ∈Δ^m_k.
Given the description of the non-zero transition probabilities
of from Definition <ref>
(in particular, defined in terms of the
variables p_^j, j+1 and p_^j+1, j
for any ∈Δ^m_k and j ∈ [k-1]),
it suffices by symmetry to
verify expression (<ref>) only for transitions between
pairs of states
(x_1, …, x_j, x_j+1, …, x_k)
and
(x_1, …, x_j - 1, x_j+1 + 1, …, x_k)
for j = 0, …, k-1. In other words,
letting := (x_1, …, x_k), we wish to verify that
( (x_1, …, x_j, x_j+1, …, x_k) )
·
p_^j,j+1 = ( (x_1, …, x_j - 1, x_j+1 + 1, …, x_k))
· p_^j+1, j .
For this, first recall in the expression of π from
the theorem statement that the multinomial coefficient is given by
mx_1, …, x_k = m!/x_1 ! ⋯ x_k! .
Thus we can cancel out all matching terms to show that
verifying expression (<ref>) reduces to verifying
λ^(j· x_j+1 + (j-1)· x_j)/
x_j ! · x_j+1 !
·
p_^j,j+1 = λ^(j·(x_j+1 + 1) + (j-1) · (x_j - 1))/
(x_j+1 + 1)! · (x_j - 1)!
·
p_^j+1, j .
For the left-hand side of (<ref>), we use the
definition of p_^j, j+1 and the fact that λ = a/b
to simplify and write
(LHS of (<ref>))
= λ^(j· x_j+1 + (j-1)· x_j)/
x_j ! · x_j+1 !
·a · x_j/m = a^(j · x_j+1 + (j-1)· x_j + 1)/
b^(j· x_j+1 + (j-1)· x_j)·1/m · (x_j - 1)! · (x_j+1)! .
Here, the final inequality uses the fact that
x_j / (x_j! · x_j+1!) = 1 / ((x_j - 1)!· x_j+1!).
For the right-hand side of (<ref>), we can
similarly simplify and write
(RHS of (<ref>))
= λ^(j·(x_j+1 + 1) + (j-1) · (x_j - 1))/
(x_j+1 + 1)! · (x_j - 1)!
·b · (x_j+1 + 1)/m
= λ^(j· x_j+1 + (j-1) · x_j +1)/
x_j+1! · (x_j - 1)!·b/m = a^(j· x_j+1 +(j-1)· x_j + 1)/
b^(j · x_j+1 + (j-1)· x_j)·1/m· x_j+1! · (x_j - 1)! .
Observing that the left and right-hand sides of
expression (<ref>) for any j = 0, …, k-1
thus establishes that π is the unique
stationary distribution for the process.
Now for each j ∈ [k] we can define
p_j
:= λ^(j-1)/∑_j=1^k λ^(j-1) .
Clearly ∑_i ∈ [k] p_i = 1, and we can rewrite
the PMF of as
() =
p_1^x_1⋯ p_k^x_k·mx_1, …, x_k ,
for any = (x_1, …, x_k) ∈Δ_k^m.
In other words, is a multinomial distribution
with parameters m and (p_1, …, p_k).
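The detailed balance computation above can also be verified numerically; the following sketch checks π(x)P(x, y) = π(y)P(y, x) on randomly sampled neighboring states (parameter values are illustrative):

import numpy as np
from math import factorial, log

def log_pmf(x, p):
    # log of the multinomial PMF with parameters m = sum(x) and probability vector p
    m = int(np.sum(x))
    out = log(factorial(m))
    for xi, pi in zip(x, p):
        out += int(xi) * log(pi) - log(factorial(int(xi)))
    return out

def check_detailed_balance(k=4, a=0.6, b=0.3, m=12, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    lam = a / b
    p = lam ** np.arange(k)
    p = p / p.sum()
    for _ in range(trials):
        x = rng.multinomial(m, np.ones(k) / k)        # an arbitrary state in Delta^m_k
        j = int(rng.integers(k - 1))
        if x[j] == 0:
            continue
        y = x.copy(); y[j] -= 1; y[j + 1] += 1
        lhs = log_pmf(x, p) + log(a * x[j] / m)       # pi(x) * P(x, y)
        rhs = log_pmf(y, p) + log(b * y[j + 1] / m)   # pi(y) * P(y, x)
        assert np.isclose(lhs, rhs)
    return True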
§.§ Bounds on mixing times
In this section, we develop the proof of
Theorem <ref>,
which bounds the mixing time of the
(k, a, b, m)-Ehrenfest process.
The theorem is restated for convenience:
*
Coupling setup
To prove the theorem, recall from the
overview in Section <ref>
that we introduce a coupling {(X_t, Y_t)}
over the space
Ω×Ω = {1, …, k}^m ×{1, …, k}^m,
where the transition probabilities over
the vectors of counts of each element
j ∈ [k] for each of {X_t} and {Y_t}
is a (k, a, b, m)-Ehrenfest process.
We more formally specify the details
of this coupling:
* At time t=0, assume
X_0 = x and Y_0 = y, for some x, y ∈Ω.
* At each time t ≥ 1, sample i ∈ [m] uniformly
at random.
* Letting X^i_t and Y^i_t denote the
i'th coordinates of X_t and Y_t, respectively, set:
(X^i_t+1, Y^i_t+1)
= ([X^i_t + 1], [Y^i_t + 1] )
with probability a
([X^i_t - 1], [Y^i_t - 1] )
with probability b
(X^i_t, Y^i_t)
otherwise
.
For j ∈{1, …, k},
let x^t_j and y^t_j denote the number of coordinates
in X_t (resp., Y_t), where X^i_t = j (resp., Y^i_t = j).
Then letting ^t = (x^t_1, … , x^t_k)
and ^t = (y^t_k, … , y^t_k),
it is easy to see that {^t} and {^t}
are both (k, a, b, m)-Ehrenfest processes
under the randomness of the coupling,
and thus the mixing time of {^t} (resp., {^t})
is equivalent to that of {X_t} (resp., {Y_t}).[
Note the difference in the time-indexing location
between the processes {X_t},
{Y_t}, and {^t}, {^t}.
]
Then initialized at X_0 = x and Y_0 = y for
x, y ∈Ω,
we let denote the coupling time
of the process, which is the first
time t ≥ 0 such that X^i_t = Y^i_t
for all coordinates i ∈ [m]. Formally:
= min{
t ≥ 0 : X_s = Y_s for all s ≥ t} ,
and we use the standard fact <cit.>
that
d(t)
≤ max_x, y ∈Ω ( > t ) .
Thus our goal is to derive tail bounds on ,
and for this, we express
in terms of its coordinate-wise
coalescing times: specifically we define
^i
= min{
t ≥ 0 : X^i_s = Y^i_s for all s ≥ t}
for each i ∈ [m],
and it follows that we can write
=
max_i ∈ [m] ^i.
Thus our strategy in proving
Theorem <ref> is to first
bound each ^i in expectation,
and to then use standard machinery to derive a
tail bound on .
We proceed now to develop these steps.
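Before turning to the analysis, the coupling just described is easy to simulate. The sketch below is ours (illustrative only; the function and variable names are not from the paper): at each step it draws one coordinate and one move, applies it to both chains, and records the first time the two configurations agree, giving an empirical stand-in for the coupling time bounded in the remainder of this section.

import random

def coupling_time(k, a, b, m, x0, y0, seed=0, max_steps=10**7):
    # Shared-randomness coupling of two copies of the coordinate process:
    # one coordinate i and one move (up / down / stay) are drawn per step
    # and applied to both chains, with clamping at 1 and k.
    rng = random.Random(seed)
    x, y = list(x0), list(y0)
    for t in range(1, max_steps + 1):
        i = rng.randrange(m)
        u = rng.random()
        if u < a:
            x[i] = min(x[i] + 1, k)
            y[i] = min(y[i] + 1, k)
        elif u < a + b:
            x[i] = max(x[i] - 1, 1)
            y[i] = max(y[i] - 1, 1)
        if x == y:
            return t
    return max_steps

if __name__ == "__main__":
    k, a, b, m = 4, 0.3, 0.2, 20
    worst = ([1] * m, [k] * m)   # a natural worst-case pair of starting states
    times = [coupling_time(k, a, b, m, *worst, seed=s) for s in range(20)]
    print("mean coupling time over 20 runs:", sum(times) / len(times))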
Bounding ^i in expectation
Observe from expression (<ref>)
that conditioned on the set of steps
during which coordinate i ∈ [m] is selected,
{X^i_t} and {Y^i_t} each
behave as a biased random walk (with reflecting barriers)
on {1, …, k}.
In particular, given the shared randomness
in {(X^i_t, Y^i_t)}, observe that the
distance |X^i_t - Y^i_t| is non-increasing with t,
and thus the two coordinate-wise random walks must
coalesce at either 1 or k.
Thus for each coordinate i ∈ [m],
we let ^i denote the number of times
coordinate i must be sampled in the coupling
of {(X_t, Y_t)} until X^i_t = Y^i_t.
Then letting S^i_j denote the waiting time between
when coordinate i is sampled for the (j-1)'th
and j'th times in the coupling (X_t, Y_t), it follows that
[
^i
]
= [
∑_j=1^^i S^i_j
]
= [^i] · m ,
where in the final equality we use the
fact that each S^i_j is an independent geometric
random variable with parameter 1/m.
By proving the following uniform bound on [^i],
we then easily obtain a bound on [^i]
as a consequence of expression (<ref>):
For any i ∈ [m], consider the coordinate-wise
coupling {(X^i_t, Y^i_t)} on {1, …, k}
with parameters a, b > 0 where a + b ≤ 1.
Then
[^i]
≤ min{k/|a-b|, k^2
} when a ≠ b
k^2
when a=b .
The proof of Lemma <ref> follows
by reducing the coalescing time of two
coupled, biased random walks on {1, …, k}
to the absorption time of a single, biased
random walk on {-k, …, k}.
For this, assume without loss of generality that
X^i_0 = z and Y^i_0 = w for 1 ≤ z < w ≤ k.
For simplicity, we assume all time indices henceforth
are conditioned on coordinate i being selected
in the parent, m-coordinate coupling.
Now we define a third random walk {Z_t} on {-k, …, k}
that increments and decrements with probability
a and b using the same shared randomness of (X^i_t, Y^i_t).
Specifically, we have:
( X^i_t+1, Y^i_t+1, Z_t+1)
= ([X^i_t + 1], [Y^i_t + 1], Z_t + 1 )
with probability a
([X^i_t - 1], [Y^i_t - 1], Z_t - 1)
with probability b
(X^i_t, Y^i_t, Z^i_t )
otherwise
.
Now initialize Z_0 = 0, and let
denote the first time Z_t is absorbed at either -k or k, i.e.:
= min{ t ≥ 0 : Z_t ∈{-k, k}} .
In the following proposition, we show that ^i ≤.
For any k ≥ 2, and any i ∈ [m], consider the
coupling {(X^i_t, Y^i_t, Z_t)} as
defined in expression (<ref>)
with parameters a, b > 0 and a + b ≤ 1.
Assume that X^i_0 = z < w = Y^i_0, for
some z, w ∈ [k] and that Z_0 = 0 < z.
Then ^i ≤.
Observe from the transition probabilities
of {X^i_t} and {Y^i_t} that
at time ^i, we must have
X^i_ = Y^i_∈{1, k}.
Notice also by the initialization choice of {Z_t}
that Z_t < X^i_t ≤ Y^i_t for all t ≥ 0.
Thus if Z_ = k,
then it must be the case that
≥^i, and that X^i_t = k
at time t = ^i.
On the other hand, given that the randomness
between the three processes is shared,
observe that if Z_t = j < 0 for some
t ≥ 0, then Y_t - X_t ≤ (w-z) - |j|.
This is due to the fact that starting from
Z_t = 0, every instance of Z_t decreasing to
a smaller value j for the first time
corresponds to the process X^i_t remaining at 1,
and the process Y^i_t decrementing by 1.
Thus if Z_ = -k, then we must have
Y^i_ - X^i_ ≤ (w-z) - k
≤ 0 ,
where the final inequality follows from the fact
that w-z ≤ k.
Since 1 ≤ X^i_t ≤ Y^i_t for all t by definition,
this implies that
Y^i_ = X^i_ = 1.
Thus if Z_ = -k, we again conclude
that > ^i.
Given Proposition <ref>, we now
proceed to bound [] for the process
{Z_t}. In turn, this yields the desired inequality
for [^i] from Lemma <ref>.
Fix a , b > 0 with a + b ≤ 1.
Let {Z_t} be the random walk
on {-k, …, k} that increments and decrements
at each step with probabilities a and b respectively.
Let Z_0 = 0, and let denote the
first time Z_t ∈{-k, k}.
Then
[ ]
≤ min{k/|a-b|, k^2
} when a ≠ b
k^2
when a=b .
We start with the case where a ≠ b, and we assume
without loss of generality that a > b.
We apply a standard martingale argument used to
derive expected stopping times for
gambler's ruin-type random walks <cit.>.
For completeness, we provide the full argument,
and to start we define the two processes:
U_t = Z_t - (a-b)t .
and
M_t = (b/a)^Z_t .
We can verify that both {U_t} and {M_t} are martingales
with respect to {Z_t} by computing:
[U_t+1 | {Z_t}]
=
a (Z_t + 1) + b(Z_t - 1) + (1-a-b)Z_t - (a-b)(t+1)
=
Z_t(a + b + (1-a-b)) + (a-b)(1 - (t+1))
=
Z_t - (a-b) t = U_t ,
and
[M_t+1 | {Z_t}]
=
a(b/a)^(Z_t + 1) +
b(b/a)^(Z_t - 1) +
(1-a-b)(b/a)^(Z_t)
=
a(b/a)(b/a)^Z_t
+ b (a/b)(b/a)^Z_t
+ (1-a-b)(b/a)^Z_t
=
(b/a)^Z_t·
(b + a + (1-a-b))
=
(b/a)^Z_t =
M_t .
Now observe that is a stopping
time for {Z_t}, and that [] is finite
given that the probability that Z_t is absorbed
at k in the next 2k steps is at least a^2k.
Moreover, the increments of both {U_t}
and {M_t} are bounded by absolute constants.
Thus we can apply the Optional Stopping Theorem
<cit.> to
both processes.
For the latter process, letting p^+ denote the
probability that Z_ = k and letting
λ := a/b > 1, this implies that
[ M_]
=
p^+ (1/λ)^k + (1-p^+)(1/λ)^-k = [M_0]
=
(1/λ)^(Z_0) = (1/λ)^0
= 1
.
Solving for p^+ and rearranging, we find
p^+
= (1- (1/λ)^-k)/((1/λ)^k - (1/λ)^-k) = (λ^k - 1)/(λ^k - (1/λ)^k) .
Applying the Optional Stopping Theorem
to the former process {U_t} shows
[ U_]
= [Z_ - (a-b)]
= [U_0]
= 0 ,
which implies that [] = [Z_] / (a-b).
Then using the fact that
[ Z_]
=
p^+ k - (1-p^+)k
=
k (2 p^+ - 1)
and the expression for p^+ from (<ref>),
we conclude that
[]
= [Z_] / (a-b) = k · (2 p^+ - 1)/(a-b) = (k/(a-b))·(
2(λ^k -1 )/(λ^k - (1/λ)^k)
- 1
)
.
In general, 2p^+ - 1 ≤ 1, which means
[] ≤ k/(a-b).
On the other hand, this yields a loose bound
when |a-b| = o(1/k).
In this case, it can be verified that
the right hand side of (<ref>)
is bounded by k^2 (for all k ≥ 2)
for any a > b. Thus in this regime, we conclude that
[^i] ≤min{k/(a-b), k^2 },
and it follows by symmetry that
[^i] ≤min{k/(b-a), k^2 },
when b > a.
For the case when a=b, we can apply a similar
argument to the martingale {G_t} where
G_t = (Z_t)^2 - t (i.e., the standard
analysis for an unbiased gambler's ruin random walk
<cit.>)
to show in this regime that [^i] = k^2,
which concludes the proof.
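The bound for the a ≠ b case is easy to check numerically. The sketch below (ours, illustrative only) estimates the absorption time of the lazy biased walk on {-k, …, k} by Monte Carlo and compares it with min{k/|a-b|, k^2}; for the a = b comparison we take a + b = 1, so that the walk moves at every step as in the displayed martingale argument.

import random

def absorption_time(k, a, b, seed):
    # Walk Z on {-k, ..., k} started at 0; up w.p. a, down w.p. b,
    # lazily stays otherwise; stops on first hitting -k or k.
    rng = random.Random(seed)
    z, t = 0, 0
    while abs(z) < k:
        u = rng.random()
        if u < a:
            z += 1
        elif u < a + b:
            z -= 1
        t += 1
    return t

if __name__ == "__main__":
    for (k, a, b) in [(5, 0.3, 0.2), (10, 0.3, 0.2), (5, 0.5, 0.5)]:
        est = sum(absorption_time(k, a, b, s) for s in range(2000)) / 2000
        bound = k ** 2 if a == b else min(k / abs(a - b), k ** 2)
        print(f"k={k}, a={a}, b={b}: estimated E[tau] = {est:.1f}, bound = {bound:.1f}")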
The proof of Lemma <ref>
then follows from Propositions <ref>
and <ref>,
and together with expression (<ref>)
this implies that
[ ^i ]
≤ min{k/|a-b|, k^2
}· m
when a ≠ b
k^2 · m
when a=b .
Tail bound on Recall that is defined as the coupling
time of the process {(X_t, Y_t)}, and we can write
= max_i ∈ [m]^i .
Using the bound on [^i] from
expression (<ref>),
we now prove the following bound on the tail of
. As a consequence of the
coupling property from expression <ref>,
the statement of Theorem <ref>
will immediately follow.
Fix k, m ≥ 2 and a, b > 0 with a + b ≤ 1.
Consider the resulting process {(X_t, Y_t)}
as defined in expression (<ref>),
initialized at (X_0, Y_0) = (x, y) for
x, y ∈{1, …, k}^m.
Then letting
Φ := min{k/|a-b|, k^2 }· m
when a ≠ b
k^2 · m
when a = b ,
it follows that
( > 2 Φ·log(4m) ) ≤1/4.
To start, we prove by induction that for
every i ∈ [m], and c ≥ 1:
(
^i >
c · 2 Φ)
≤ 1/2^c .
Consider the base case when c = 1.
It follows by Markov's inequality that
(
^i >
2 Φ)
≤ [ ^i ]/2 Φ ≤ 1/2 ,
where in the final inequality we
used the bound [^i] ≤Φ
from expression (<ref>).
Now assume that the claim holds for some c - 1 ≥ 1.
Then we can write
(
^i >
c · 2 Φ)
= (
^i >
c · 2 Φ | ^i > (c-1) · 2 Φ)
·(
^i > (c-1) · 2 Φ)
= (
^i > 2 Φ)
·(
^i > (c-1) · 2 Φ)
≤ 1/2·1/2^(c-1) = 1/2^c ,
where the equality in the second line follows
from the independence of each step
in the coupling, and where the final inequality
comes from applying the bound from
expression (<ref>)
and by the inductive hypothesis.
Thus we conclude that expression (<ref>)
holds in general for all c ≥ 1.
It follows by a union bound that
(
> c · 2 Φ)
= (
max_i ∈ [m] ^i > c · 2 Φ)
≤ ∑_i ∈ [m](
^i > c · 2 Φ)
≤ m/2^c .
Then setting c := log (4m) implies
( > 2 Φ·log (4m) ) ≤1/4,
which concludes the proof.
Given that the bound in Lemma <ref>
is uniform over all x, y ∈{1, …, k}, it follows
from expression (<ref>) that
d( 2 Φ·log (4m))
≤ 1/4 ,
where we use the definition of Φ from the lemma.
By definition, this means
= O( Φ·log m ),
which concludes the proof of Theorem <ref>.
§ DETAILS ON THE AVERAGE STATIONARY GENEROSITY OF K-IGT DYNAMICS
In this section, we provide the
proof of Proposition <ref>,
which derives the average generosity parameter value
of k-IGT dynamics under its stationary distribution.
We restate the proposition for convenience.
*
Fix k ≥ 2, and recall from Definition <ref>
that the k-IGT dynamics uses the set of generosity
parameters ^k = {g_1,…, g_k},
where each g_j = ((j-1)/(k-1)) ·ĝ.
Then letting λ := (1-β)/β,
Theorem <ref> implies
(using properties of multinomial distributions) that
[ π_j ]
=
m ·λ^j-1/∑_i ∈ [k]λ^i-1
for all coordinates j ∈ [k].
It follows that the average stationary generosity _k
is given by
_k
= 1/m∑_j∈ [k]
g_j ·[π_j]
= ∑_j∈ [k]ĝ·((j-1)/(k-1))
·λ^(j-1)/∑_i ∈ [k]λ^(i-1) .
Now observe that when β = 1/2 and
λ = 1, we have
λ^j-1/∑_i∈[k]λ^i-1 = 1/k
for all j ∈ [k], and thus it follows that _k = ĝ/2.
For the case when β≠ 1/2, we can simplify and write:
_k
= ∑_j∈ [k]ĝ·((j-1)/(k-1))
·λ^(j-1)/∑_i ∈ [k]λ^(i-1) = ( ĝ/((k-1)∑_i=1^k λ^(i-1)))
·∑_j=1^k-1
j λ^j
= ( ĝ/((k-1)∑_i=1^k λ^(i-1)))
·∑_j=1^k-1(
∑_i=1^j λ^j
)
= ( ĝ/((k-1)∑_i=1^k λ^(i-1)))
·∑_i=1^k-1(
∑_j=i^k-1λ^j
)
= ( ĝ/((k-1)∑_i=1^k λ^(i-1)))
·∑_j=1^k-1λ^j ·(λ^(k-j) - 1)/(λ - 1)
= ( ĝ/((k-1)(λ - 1)∑_i=1^k λ^(i-1)))
·∑_j=1^k-1 (λ^k - λ^j)
= ( λĝ/((k-1)(λ - 1)∑_i=1^k λ^(i-1)))
·(
λ^(k-1)(k-1) - (λ^(k-1)-1)/(λ - 1)) .
Then using the fact that
∑_i∈[k]λ^(i-1) = (λ^k-1)/(λ-1),
we can further simplify and write
_k
= ĝ·(
λ/((k-1)(λ^k - 1)))
(
λ^(k-1) (k-1) - (λ^(k-1)-1)/(λ - 1))
= ĝ·(
λ^k/(λ^k - 1) -
(1/(k-1))
(λ/(λ-1))
((λ^(k-1)-1)/(λ^k -1))
) ,
which concludes the proof.
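The closed form can be cross-checked against the defining sum. The sketch below (ours; the symbol names are chosen for readability) computes the average stationary generosity both directly from the stationary weights p_j ∝ λ^(j-1) and from the closed-form expression above, for a few illustrative (k, β) pairs and a hypothetical ĝ = 1/4; for λ = 1 the formula degenerates to ĝ/2 as noted above.

def avg_generosity_direct(k, beta, g_hat):
    # Average generosity under the stationary distribution:
    # E[pi_j]/m = p_j with p_j proportional to lambda^(j-1).
    lam = (1 - beta) / beta
    weights = [lam ** (j - 1) for j in range(1, k + 1)]
    z = sum(weights)
    return sum((g_hat * (j - 1) / (k - 1)) * (weights[j - 1] / z)
               for j in range(1, k + 1))

def avg_generosity_closed_form(k, beta, g_hat):
    # Closed form derived above; the lambda = 1 case is g_hat / 2.
    lam = (1 - beta) / beta
    if abs(lam - 1.0) < 1e-12:
        return g_hat / 2
    return g_hat * (lam ** k / (lam ** k - 1)
                    - (1 / (k - 1)) * (lam / (lam - 1))
                    * ((lam ** (k - 1) - 1) / (lam ** k - 1)))

if __name__ == "__main__":
    for (k, beta) in [(2, 0.2), (6, 0.2), (6, 0.4), (10, 0.1)]:
        d = avg_generosity_direct(k, beta, g_hat=0.25)
        c = avg_generosity_closed_form(k, beta, g_hat=0.25)
        print(k, beta, round(d, 6), round(c, 6))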
§ DETAILS ON CHARACTERIZING THE OPTIMALITY OF
K-IGT DYNAMICS
§.§ Computing expected payoffs for nodes
Recall from Section <ref>
that for two strategies _1, _2 ∈{, , },
we define f(_1 ; _2) as the expected payoff of a
node playing strategy _1 against an opponent
playing strategy _2 during a single RPD game.
We proceed to make this definition precise:
§.§.§ Defining the expected payoff function
We first recall the necessary components which
were introduced in Sections <ref>:
* RPD games have four game states
= (CC, CD, DC, DD).
Each state is an ordered pair
specifying the action of a row player
with strategy _1 and a column player
with strategy _2 during a given round.
* In RPD games, a strategy is comprised of an initial distribution
over the actions (C, D), and a (randomized)
transition rule for choosing an action in
the next round (conditioned on playing an additional
round with probability δ).
* For the pair of strategies (_1, _2),
let be the row-stochastic matrix specifying
the transition probabilities over
conditioned on playing an additional round of the game.
We assume that the rows and columns of are indexed
by the four states in , and thus
_1, 1 denotes the (conditional) probability
of transitioning from state CC to CC,
_1, 2 denotes the (conditional) probability
of transitioning from state CC to CD, etc.
* Let _1 denote the initial
distribution over the game states
determined by the initial action distributions
of the two strategies _1 and _2.
For i ≥ 2,
let _i ∈Δ_4 denote the distribution
over game states
conditioned on (i) having already played i-1
rounds of the game, and on (ii) playing
an additional round with probability δ.
It follows that
_2 = _1
and _i = _i-1 = _1 ^i-1 for all i ≥ 3 .
* Given the single-round PD payoff matrix
C D
C R S
D T P where T > R > P > S ,
let :̌= [R, S, T, P]^⊤ be the vector of
single-round (row player) payoffs.
Then in the repeated PD game setting with restart probability δ,
the pair of strategies (_1, _2) specify a Markov
chain {_i} over the joint space
:= (, ),
where denotes the termination state of
the repeated game (that is reached with probability
(1-δ) at the end of each round).
In particular, using the components above,
we can define each _i ∈Δ_5 by:
_1
= [ _1 0 ]
_2
= [ δ_2 (1-δ) ]
= [ δ_1 (1-δ) ]
and _i
= [ δ^i-1_i-1 (1-δ^i-1) ]
= [ _1 (δ)^i-1 (1-δ^i-1) ]
for all i ≥ 3 .
Now let := [ 0 ] denote the
vector of single-round (row player) payoffs over the
repeated game states
(where players receive a payoff of 0 when the
game terminates and enters state ).
For all i ≥ 1, we define
r_i as the payoff of the row player
(with strategy _1) in round i.
Then given the pair of strategies (_1, S_2)
the expected reward at round i ≥ 1
(over the randomness of the
strategies and the repeated game probability)
is given by
[r_i]
= ⟨, _i
⟩ = ⟨,̌δ^i-1_i-1⟩ = ⟨,̌_1 (δ)^i-1⟩ ,
where in the second equality we use
the fact that that last component of is 0
and the definition of _i
from expression (<ref>).
Then we can formally define the expected payoff _1_2
for the node with strategy _1 over the entire RPD game as:
_1_2 := ∑_i = 1^∞ [r_i]
= ∑_i = 1^∞ ⟨,̌_1 (δ)^i-1⟩ = ⟨,̌_1 (∑_i=1^∞ (δ)^i-1)
⟩ .
Using this definition, in the following subsections we
derive the expected payoffs for nodes
against each of the strategy types , ,
and .
In each case, we define the matrix
and initial distribution _1 specified by the
pair of strategies, and we then
compute the value of f using expression (<ref>).
We summarize the expected payoff expressions
in Section <ref>.
§.§.§ Expected payoff for against
For a fixed generosity parameter g,
we compute g.
Recall that in the first round of an RPD game,
nodes with strategy g play C with
probability s_1, and D with probability (1-s_1).
It follows that the initial distribution _1
over the game states = (CC, CD, DC, DD)
for the strategy pair (g, ) is given by
_1 = [s_1, 0, (1-s_1), 0]^⊤ .
Moreover, the transition matrix over
(conditioned on playing an additional round) is specified by
= [ 1 0 0 0; g 0 (1-g) 0; 1 0 0 0; g 0 (1-g) 0 ] .
which comes from the fact that the node with strategy
plays C at each round.
Recalling that =̌ [R, S, T, P]^⊤, it follows that
_1 (δ)^i-1 = [ δ^i-1, 0, 0, 0 ]^⊤
and thus ⟨,̌_1 (δ)^i-1⟩ =
R ·δ^i-1
for all i ≥ 2.
Then using the definition of f
in expression (<ref>),
we have
g = ∑_i=1^∞⟨,̌_1 (δ)^i-1⟩
= ⟨,̌_1⟩
+
∑_i=2^∞⟨,̌_1 (δ)^i-1⟩
=
s_1 R + (1-s_1)T + δ· R/(1-δ) =
(1-s_1)(T-R) + R/(1-δ) ,
where in the penultimate equality we use the
fact that ∑_i=2^∞δ^i-1 = δ/(1-δ).
§.§.§ Expected payoff for against
Using the same approach as in the previous section,
we now compute g
for a fixed generosity parameter g.
Given that nodes with strategy play D
at all rounds i ≥ 1, we have
_1 = [0, s_1, 0, (1-s_1)]^⊤ .
The transition matrix over
(conditioned on playing an additional round) in this
case is specified by
= [ 0 1 0 0; 0 g 0 (1-g); 0 1 0 0; 0 g 0 (1-g) ] .
which again follows directly from the definitions
of the two strategies.
Then for all i ≥ 2, we have
_1 (δ)^i-1 = δ^i-1· [ 0, g, 0, (1-g) ]^⊤
and ⟨,̌_1 (δ)^i-1⟩ = δ^i-1·(gS + (1-g)P )
.
Again using the definition of f
from expression (<ref>), we can compute
g = ∑_i=1^∞⟨,̌_1 (δ)^i-1⟩
=
s_1 S + (1-s_1)P +
(g(S-P) + P) ·δ/1-δ .
§.§.§ Expected payoff for against
Given two generosity parameters g and g',
we now derive gg',
which requires more work compared to the previous
two cases. To start, observe that
_1 = [ s_1^2, s_1(1-s_1), s_1(1-s_1), (1-s_1)^2 ]^⊤ .
which follows from the assumption that
nodes with any strategy play the same
initial action parameter s_1.
Then by definition of the two strategies,
the transition matrix over
(conditioned on playing an additional round) is defined by
= [ 1 0 0 0; g 0 (1-g) 0; g' 1-g' 0 0; g g' (1-g')g g'(1-g) (1-g)(1-g') ] .
Now recall from the definition of f in
expression <ref> that we can write
gg' = ⟨,̌_1 (∑_i=1^∞ (δ)^i-1)
⟩ .
Recall that for a matrix whose
eigenvalues are all strictly less than 1 in
absolute value, we have the identity
∑_i=1^∞^i-1 =
(-)^-1 .
It follows that
∑_i=1^∞
(δ)^i-1 =
( - δ)^-1 ,
and we let := (-δ)^-1
with entries a_ij for i, j ∈ [4].
One can verify that the entries of are
given by the following:
_1, 1
=
1/1 - δ _2, 1 =
-δ^2 g g' + δ^2 g' + δ g/(1 - δ)(1 - δ^2(1 - g)(1 - g'))
_1, 2 = 0
_2, 2 = 1/1 - δ^2 (1 - g)(1 - g')
_1, 3 = 0
_2, 3 = δ - δ g/1 - δ^2 (1 - g)(1 - g')
_1, 4 = 0
_2, 4 = 0
_3, 1
=
-δ^2 g g' + δ^2 g + δ g'/(1 - δ)(1 - δ^2(1 - g)(1 - g')) _4, 1 = δ^2(g g' (δ (1-g)(1-g') + 1) + g^2(1 - g') + g'^2 (1 - g))/
(1-δ)(1 - δ(1-g)(1-g'))(1 - δ^2(1-g)(1-g'))
_3, 2 = δ - δ g'/1 - δ^2 (1 - g)(1 - g') _4, 2 = δ(δ g'(1-g)(1-g') + g(1-g'))/(1 - δ(1-g)(1-g'))(1 - δ^2(1-g)(1-g'))
_3, 3 = 1/1 - δ^2 (1 - g)(1 - g') _4, 3 = δ(δ g(1-g)(1-g') + g'(1-g))/(1 - δ(1-g)(1-g'))(1 - δ^2(1-g)(1-g'))
_3, 4 = 0
_4, 4 = 1/1 - δ (1 - g)(1 - g') .
Then by the definition in expression (<ref>)
and using the matrix , we can write
gg' = ⟨,̌_1 ⟩ = ⟨,̌_1 ⟩ .
Recalling that =̌ [R, S, T, P]^⊤,
and using the definition of _1 from
expression (<ref>) and the
entries of above, it is straightforward to verify that
gg' =
s_1(T + s_1(R - T)) + (1 - s_1)(P + s_1(S - P))
- (1 - s_1)(R - T) δ^2(1 - g)(1 - g') + δ (1 - g)/1 - δ^2(1 - g)(1 - g')
- (1 - s_1)(R - S) δ^2(1 - g)(1 - g') + δ(1 - g')/1 - δ^2(1 - g)(1 - g')
+ (1 - s_1)^2(R - S - T + P)
δ(1 - g)(1 - g')(1 + δ (1 - g)(1 - g'))/
1 - δ^2(1 - g)^2(1 - g')^2
+ Rδ/1 - δ .
§.§.§ Summary of expected payoffs
We summarize the expected payoffs for nodes with
strategy g derived in the previous
three sections. We state these payoffs
for general single-round PD payoff matrices
(where =̌ [R, S, T, P]^⊤), and
for the special donation game instance
(where =̌ [b-c, -c, b, 0]^⊤ for b > c ≥ 0)
with an initial cooperation probability s_1 = 1/2.
General RPD payoffs
Summarizing expressions (<ref>),
(<ref>), and (<ref>), we have:
g =
(1-s_1)(T-R) + R/1-δ
g =
s_1 S + (1-s_1)P +
(g(S-P) + P) ·δ/1-δ .
gg' =
s_1(T + s_1(R - T)) + (1 - s_1)(P + s_1(S - P))
- (1 - s_1)(R - T) δ^2(1 - g)(1 - g') + δ (1 - g)/1 - δ^2(1 - g)(1 - g')
- (1 - s_1)(R - S) δ^2(1 - g)(1 - g') + δ(1 - g')/1 - δ^2(1 - g)(1 - g')
+ (1 - s_1)^2(R - S - T + P)
δ(1 - g)(1 - g')(1 + δ (1 - g)(1 - g'))/
1 - δ^2(1 - g)^2(1 - g')^2
+ Rδ/1 - δ .
Repeated donation game payoffs
In this setting, we assume
=̌ [b-c, -c, b, 0]^⊤ for b > c ≥ 0,
and that s_1 = 1/2.
Then expressions(<ref>),
(<ref>), and (<ref>)
simplify to:
g = c/2 + (b-c)/(1-δ)
g =
- c ·(1/2 + δ g/(1-δ))
gg' = (b-c)/(1-δ)
+ (c δ (1 - g) - b δ (1 - g') + c - b)/
(2 · (1 - δ^2(1 - g)(1 - g'))) .
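For concreteness, these three donation-game payoffs can be evaluated numerically. In the sketch below (ours), ALLC, ALLD and GTFT(g) are informal labels for the always-cooperate, always-defect and generous strategies compared above (the paper's own strategy symbols are macros that do not survive extraction); b, c, δ and the g values are illustrative.

def payoff_vs_allc(g, b, c, delta):
    # Expected RPD payoff of a generous row player against ALLC
    # (donation game, s1 = 1/2); note it does not depend on g.
    return c / 2 + (b - c) / (1 - delta)

def payoff_vs_alld(g, b, c, delta):
    # Against ALLD.
    return -c * (0.5 + delta * g / (1 - delta))

def payoff_vs_gtft(g, g_prime, b, c, delta):
    # Against another generous player with parameter g'.
    num = c * delta * (1 - g) - b * delta * (1 - g_prime) + c - b
    den = 2 * (1 - delta ** 2 * (1 - g) * (1 - g_prime))
    return (b - c) / (1 - delta) + num / den

if __name__ == "__main__":
    b, c, delta = 3.0, 2.0, 0.9
    for g in (0.0, 0.1, 0.25):
        print(f"g={g}: vs ALLC {payoff_vs_allc(g, b, c, delta):.2f}, "
              f"vs ALLD {payoff_vs_alld(g, b, c, delta):.2f}, "
              f"vs GTFT(g) {payoff_vs_gtft(g, g, b, c, delta):.2f}")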
§.§ Proof of Proposition <ref>
Using the calculations from Section <ref>
for the expected payoffs of nodes, we can now prove
Proposition <ref>, which is restated for convenience:
*
Statements (ii) and (iii) of follow directly from the definitions of
g and g
in expressions (<ref>)
and (<ref>), respectively.
For the former, observe that
expression (<ref>) has no dependence on g,
which proves statement (ii).
For the latter, observe that (<ref>)
is decreasing with g,
which proves statement (iii).
For statement (i), fix g”∈ [0, g], and consider
the definition of
gg” from
expression (<ref>).
Differentiating this expression with respect to g then finds
d/d g gg” = - (1 - s_1)(R - T) -δ^2(1 - g”) - δ/
(1 - δ^2(1 - g”)(1 - g))^2
- (1 - s_1)(R - S) -δ^2(1-g”) -δ^3(1-g”)^2/
(1 - δ^2(1 - g)(1 - g”))^2
+ (1 - s_1)^2(R - S - T + P) -δ(1 - g”)/
(1 - δ(1 - g)(1 - g”))^2 .
Then it is straightforward to check that when
R+P ≤ T+S, when δ≥ (T-R)/(R-S) and
g < 1 - (T-R)/(δ (R-S)),
and when s_1 ∈ (0, 1],
(all of which are assumptions of the proposition),
then expression (<ref>) is
strictly positive for all g, g”∈ [0, g].
This implies under the mentioned constraints
that for any g, g”∈ [0, g], the
function gg”
is strictly increasing in g, which proves statement (i).
§.§ Proof of Theorem <ref>
Recall from Section <ref>
that we define the
mean-field expected payoff
F(g, α, β) by
F(g, α, β)
:= α·g
+ β·g
+ mn·gg ,
where for ∈{, , } we write
g as shorthand for
g.
In this section, we develop the proof of
Theorem <ref>, which
is restated here:
*
Intuitively, the theorem shows that
under the stationary distribution of
the k-IGT dynamics, the average generosity value
of nodes converges to
the optimal generosity parameter
(i.e., the parameter maximizing F(g, α, β))
at a rate of roughly O(1/k), so long
as the fraction of nodes playing
(relative to the number of nodes playing )
is sufficiently small.
The proof of Theorem <ref>
is developed in two parts:
first, using the calculations for the expected g payoffs
f_g(·) from Section <ref>,
we can compute the value g^⋆
that maximizes F(g, α, β) from
expression (<ref>).
In particular, we characterize this
optimal generosity value with respect to various
regimes of ϕ := β n/m, which
is the ratio between the number of
nodes playing and in the population.
Then, given the result of Proposition <ref>,
which shows how the average stationary generosity g
of the k-IGT dynamics scales with k,
we can characterize the range of ϕ
for which → g^⋆
and obtain a quantitative convergence rate with
respect to k.
We proceed to formalize these two steps needed
to prove Theorem <ref>.
Note that as a consequence of the first step of our approach,
we also characterize the regimes of ϕ under which the
average stationary generosity of our dynamics
does not converge to g^⋆.
We leave as an open question whether
there exists a dynamics for this setting
such that its average stationary generostiy
value converges to g^⋆ under
all regimes of ϕ, and if so, at what rate.
§.§.§ Finding the generosity parameter
maximizing F(g, α, β)
Recall that Theorem <ref> is
restricted to the donation game payoff matrix setting
and with initial cooperation probability s_1 = 1/2.
Thus we can use expressions (<ref>),
(<ref>),
and (<ref>)
from Section <ref> to write
F(g, α, β)
= α·g
+ β·g
+ mn·gg
= α·(
c/2 + b-c/1-δ)
-
β c ·(
1/2 + δ g/1-δ)
+
m/n·(
b-c/1-δ
- (b-c)(1+δ (1-g))/
2(1-δ^2 (1-g)^2)) .
Then the first and second derivatives of
F with respect to g are given by
d/d g
F(g, α, β)
= -β c δ/(1-δ)
+m · (b-c) δ/
2n(1-δ(1-g))^2
d^2/d g^2
F(g, α, β)
=
-δ^2(b-c) m/n(1-δ(1-g))^3 .
Now observe that for any g ∈ [0, ĝ],
the second derivative of F(g, α, β) with respect to g
is always negative, and thus F is concave
over the domain g ∈ [0, ĝ].
Moreover, the first derivative of F(g, α, β) with respect
to g has a root at
g̃ := √(m (1-δ) (b-c)/(2 β n c δ^2))
- (1-δ)/δ .
Thus we can characterize the function maximizer
g^⋆ := argmax_g ∈ [0, ĝ] F(g, α, β)
in the three cases. For this, recall from the theorem
statement that ϕ = β n/m. Then:
* When
ϕ ≤ (b-c)(1-δ)/(2c(1-δ(1-ĝ))^2) ,
then d F/d g≥ 0 for all g ∈ [0, ĝ].
Thus F is non-decreasing over this domain,
and g^⋆ = ĝ.
* When
ϕ≥ (b-c)/(2c (1-δ)) ,
then d F/d g≤ 0 for all g ∈ [0, ĝ].
Thus F is non-increasing over this domain,
and g^⋆ = 0.
* Otherwise, when
(b-c)(1-δ)/(2c(1-δ(1-ĝ))^2) < ϕ < (b-c)/(2c (1-δ)) ,
then 0 < g̃ < ĝ, and thus
g^⋆ = g̃.
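This case analysis translates directly into a small routine. The sketch below is ours (optimal_generosity and g_tilde are names we introduce); it returns g^⋆ as a function of ϕ = βn/m using the two thresholds above and the interior root of dF/dg, and the parameter values in the driver are illustrative only.

import math

def optimal_generosity(phi, b, c, delta, g_hat):
    # Maximizer of the concave mean-field payoff F over [0, g_hat],
    # following the three-case characterization above; phi = beta * n / m.
    low = (b - c) * (1 - delta) / (2 * c * (1 - delta * (1 - g_hat)) ** 2)
    high = (b - c) / (2 * c * (1 - delta))
    if phi <= low:
        return g_hat
    if phi >= high:
        return 0.0
    # interior root of dF/dg, written in terms of phi
    g_tilde = (math.sqrt((1 - delta) * (b - c) / (2 * phi * c * delta ** 2))
               - (1 - delta) / delta)
    return g_tilde

if __name__ == "__main__":
    b, c, delta, g_hat = 3.0, 2.0, 0.9, 0.25
    for phi in (0.1, 0.3, 2.0):
        print(phi, round(optimal_generosity(phi, b, c, delta, g_hat), 4))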
§.§.§ Bounding the rate |g^⋆ - _k| for small ϕ
When ϕ is bounded according to
expression (<ref>), then the arguments
of the preceding section show
g^⋆ = ĝ.
On the other hand, recall from Proposition <ref>
that the average stationary generosity _k
of the k-IGT dynamics in this regime is given by
_k
= ĝ·(
λ^k/(λ^k - 1) -
(1/(k-1))
(λ/(λ-1))
((λ^(k-1)-1)/(λ^k -1))
) ,
where λ := (1-β)/β > 1
by assumption of the theorem statement.
Since _k is an average over values between [0, ĝ],
it follows that _k ≤ĝ. Moreover, we can further
bound _k from expression (<ref>) by
_k
≥ ĝ·(
1 -
(1/(k-1))
(λ/(λ - 1))
(λ^(k-1)/λ^k)
)
= ĝ·(
1 -
1/((λ - 1)(k-1))) ,
which holds given that λ > 1.
It then follows that for this regime of ϕ,
| g^⋆ - _k |
= | ĝ - _k |
≤ ĝ·(
1/((λ - 1)(k-1)))
≤ β/((1-2β)(k-1)) ,
where the last inequality comes from the
fact that ĝ ≤ 1 and λ = (1-β)/β.
This concludes the proof of Theorem <ref>.
§.§.§ Remarks on Theorem <ref>
We make several remarks on this result:
Example settings of the constraint on ϕ
We give an example to illustrate
the constraint on ϕ leading to the O(1/k)
convergence rate from the theorem.
Consider the donation game reward vector =̌ [b-c, -c, b, 0]^⊤ where
b = 3, c=2, and a restart probability δ = 0.9.
Assume that the maximum generosity parameter ĝ = 1/4 < 0.7/2.7,
and thus conditions (a) through (c) of Proposition <ref>
are satisfied.
Then the conclusion of Theorem <ref> holds
so long as λ = 1-β/β > 1 and
ϕ = β n/m < 40/169≈ 0.237.
Thus when the fraction of nodes is sufficiently bounded
with respect to the number of nodes,
the convergence guarantee of the theorem statement holds.
Comparison with the more granular expected payoff function
Recall from the discussion in Section <ref>
that our formulation of the expected payoff function F(g, α, β)
in expression (<ref>) considers a mean-field setting,
by assuming all nodes have an identical average
parameter value. On the other hand,
we could imagine re-formulating the expected payoff function F
by assuming a distribution over the space of
generosity parameter count vectors,
and taking an expectation with respect to
over the functions f (i.e., without
assuming that all nodes in the population play
the same parameter value).
However, as mentioned in Section <ref>,
controlling this value analytically is very challenging
due to the complexity of the function gg'
(i.e., as in expression (<ref>))
when the generosity parameters are non-equal.
Nevertheless, we show numerically that, under the stationary
distribution of the k-IGT dynamics,
the difference between the expected payoff function F
(as in expression (<ref>))
and the function value computed in the more granular manner
(as described above) is small, even for small values of m.
In particular, for k=2 and k=6, for a range
of (α, β) population settings, we compute the
expected payoff function exactly
with respect to the stationary distribution
of the k-IGT dynamics from Theorem <ref>.
These values are plotted in Figure <ref>
and compared with the corresponding
values of F from expression (<ref>)
in the mean-field setting.
In both examples, we consider m = 20 nodes,
a donation game reward vector where b = 3 and c = 2,
a restart probability δ = 0.9,
and a maximum generosity parameter ĝ = 1/4.
We notice in both examples that the difference
in function values for all combinations of (α, β) is small, and this suggests that our
optimality result in the mean-field setting from Theorem <ref>
should likely translate to the distributional setting
with respect to the stationary distribution.
alpha |
http://arxiv.org/abs/2307.03971v1 | 20230708130330 | What is the meaning of proofs? A Fregean distinction in proof-theoretic semantics | [
"Sara Ayhan"
] | cs.LO | [
"cs.LO",
"math.LO",
"03F03 (Primary), 03F07 (Secondary)"
] |
A Fregean distinction in proof-theoretic semantics
Sara Ayhan, Institute of Philosophy I, Ruhr University Bochum, Bochum, Germany
[email protected]
What is the meaning of proofs?
Sara Ayhan[I would like to thank several people for supporting me in improving this paper essentially, among them Luca Tranchini for his thorough feedback and vital input on an earlier version of this paper and also two anonymous referees for their very constructive and helpful reports. I am especially grateful to Heinrich Wansing for the numerous and encouraging occasions to discuss this paper extensively and for his valuable comments.]
This is a post-peer-review, pre-copyedit version of an article published in the Journal of Philosophical Logic.
The final authenticated version will be available online at: DOI: 10.1007/s10992-020-09577-2
The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives and its response: the rules of inference that govern the use of the connective.
However, what if we go a step further and ask about the meaning of a proof as a whole?
In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs.
Two questions are central here.
First of all, if we have two (syntactically) different derivations, does this always lead to a difference, firstly, in sense, and secondly, in denotation?
The other question is about the relation between different kinds of proof systems (here: natural deduction vs. sequent calculi) with respect to this distinction.
Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given?
In our framework it will be possible to identify denotation as well as sense of proofs not only within one proof system but also between different kinds of proof systems.
Thus, we give an account to distinguish a mere syntactic divergence from a divergence in meaning and a divergence in meaning from a divergence of proof objects analogous to Frege's distinction for singular terms and sentences.
§ INTRODUCTION
In proof-theoretic semantics (PTS) the meaning of the logical constants is taken to be given by the rules of inference that govern their use.
As a proof is constituted by applications of rules of inference, it seems reasonable to ask what the meaning of proofs as a whole would consist of on this account.
What we are particularly interested in is a Fregean distinction between sense and denotation in the context of proofs.[We assume at least a basic familiarity with this idea, laid out in Frege's famous paper “Über Sinn und Bedeutung”, cf. <cit.> for an English translation.]
This account builds up on <cit.>, where such a distinction is proposed and used in a proof-theoretic explanation of paradoxes.
The notion of denotation is nothing new in the context of proofs.
It is common in the literature on proof theory and PTS (e.g. <cit.>, <cit.>, <cit.>) to distinguish between derivations, as linguistic objects, and proofs, as abstract (in the intuitionistic tradition: mental) entities.
Proofs are then said to be represented or denoted by derivations, i.e. the abstract proof object is the denotation of a derivation.
The notion of sense, on the other hand, has been more or less neglected.
Tranchini <cit.>, therefore, made a proposal that for a derivation to have sense means to be made up of applications of correct inference rules.
While this is an interesting approach to consider, Tranchini only determines whether a proof has sense or not but does not go further into what the sense of a proof exactly consists of, so there might be further questions worth pursuing.
We will spell out an account of a distinction between sense and denotation of proofs, which can be considered a full-fledged analogy to Frege's distinction concerning singular terms and sentences.[There is some literature also in the field of proof theory concerned with this Fregean distinction, however, to our knowledge, apart from <cit.> this is not concerned with the sense of derivations but with the sense of sentences: cf. P. Martin-Löf (2001). The Sense/Reference Distinction in Constructive Semantics. Transcription of a lecture given at a conference on Frege organised by G. Sundholm at Leiden, 25 August 2001, transcription by B. Jespersen, 9 August 2002: https://www.academia.edu/25695205/The_Sense_Reference_Distinction_in_Constructive_Semantics, or <cit.>.]
Another question concerns the relation of different kinds of proof systems (intuitionistic natural deduction (ND) and sequent calculus (SC) systems will be considered) with respect to such a distinction.
If we have two syntactically different derivations with the same denotation in different proof systems, do they always also differ in sense or can sense be shared over different systems?
§ CONNECTING STRUCTURE AND MEANING
The basic point of departure is the simple observation that there can be different ways leading from the same premises to the same conclusion, either in different proof systems or also within one system.
The focus in this matter so far has been on normal vs. non-normal derivations in ND and correspondingly on derivations containing cut vs. cut-free derivations in SC.
However, there can also simply be a change of the order of rule applications that can lead to syntactically different derivations from the same premises to the same conclusion.
Does this lead to a different denotation or should we say that it is only the sense that differs in such cases, while the underlying proof stays the same?
§.§ Normal form and the denotation of derivations
One and the same proof may be linguistically represented by different derivations.
We will follow the general opinion in taking proofs to be the denotation - the semantic value - of (valid) derivations.
In ND a derivation in normal form is the most direct form of representation of its denotation, i.e. the represented proof object.
For our purposes we will consider a derivation to be in normal form iff neither β- nor η-conversions (cf. rules below) can be applied to it.
A derivation in normal form in ND corresponds to a derivation in cut-free form in SC.
In intuitionistic logic derivations in non-normal form in ND (resp. with cut in SC) can be reduced to ones in normal form (resp. cut-free form).
These are then thought to represent the same underlying proof, just one more indirectly than the other, because, as Prawitz <cit.> says, they represent the same idea this proof is based on.
In order to make sense and denotation transparent, our approach will be to encode the derivations with λ-terms.
As is well known, by the Curry-Howard-isomorphism there is a correspondence between the intuitionistic ND calculus and the simply typed λ-calculus and we can formulate the following ND-rules annotated with λ-terms together with the usual β- and η-conversions for the terms.
The β-conversions correspond to the well-known reduction procedures, which can be formulated for every connective in ND <cit.>, while the η-conversions are usually taken to correspond to proof expansions <cit.>.
We use p, q, r,... for arbitrary atomic formulas, A, B, C,... for arbitrary formulas, and Γ, Δ,... for sets of formulas.
Γ, A stands for Γ∪{A}.
For variables in terms, x, y, z, ... are used, and r, s, t, ... for arbitrary terms.
Term-annotated ND-rules:
[⊃I]λx.t:A ⊃B*t:BΓ,[x:A]
[⊃E]App(s, t):B*s: A ⊃BΓ *t:AΔ
[∧I]⟨s, t⟩: A ∧B*s:AΓ *t:BΔ
[∧E_1]fst(t):A*t:A ∧BΓ
[∧E_2]snd(t):B*t:A ∧BΓ
[∨I_1]s:A ∨B*s:AΓ
[∨I_2]s:A ∨B*s:BΓ
[∨E] r {x.s | y.t}:C *r: A ∨BΓ *s:CΔ, [x:A] *t: CΘ, [y:B]
[E]abort(t):A*t:Γ
β-conversions:
App(λx.t, s)
⇝t[s/x]
fst(⟨s, t ⟩)
⇝s
snd(⟨s, t ⟩)
⇝t
r {x.s | y.t}
⇝s[r/x]
r {x.s | y.t}
⇝t[r/y]
η-conversions:
λ x.App(t, x) ⇝ t (if x not free in t)
⟨fst(t), snd(t) ⟩⇝t
r {t.t | s.s}
⇝r
We read x : A as “x is a proof of A".
t[t'/x] means that in term t every free occurrence of x is substituted with t'.
The usual capture-avoiding requirements for variable substitution are to be observed and α-equivalence of terms is assumed.
A term that cannot be converted by either β- or η-conversion is in normal form.
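To make the conversions concrete, here is a minimal Python sketch (ours, not part of the paper) of the term language and of β/η-normalization for the ⊃/∧ fragment; the disjunction conversions and capture-avoiding substitution are omitted, since the closed example terms considered later do not need them. Encoding terms as nested tuples is an assumption of this sketch only.

# Terms: ('var', x), ('lam', x, body), ('app', s, t), ('pair', s, t),
# ('fst', t), ('snd', t), ('case', r, x, s, y, t), ('abort', t)

def subst(t, x, s):
    # Naive substitution t[s/x]; sufficient for the closed examples below.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    if tag in ('app', 'pair'):
        return (tag, subst(t[1], x, s), subst(t[2], x, s))
    if tag in ('fst', 'snd', 'abort'):
        return (tag, subst(t[1], x, s))
    if tag == 'case':
        _, r, y, u, z, v = t
        return ('case', subst(r, x, s),
                y, u if y == x else subst(u, x, s),
                z, v if z == x else subst(v, x, s))
    return t

def step(t):
    # One beta/eta contraction at the root, if any (None otherwise).
    if t[0] == 'app' and t[1][0] == 'lam':                 # App(lambda x.t, s) ~> t[s/x]
        return subst(t[1][2], t[1][1], t[2])
    if t[0] == 'fst' and t[1][0] == 'pair':                # fst(<s, t>) ~> s
        return t[1][1]
    if t[0] == 'snd' and t[1][0] == 'pair':                # snd(<s, t>) ~> t
        return t[1][2]
    if (t[0] == 'pair' and t[1][0] == 'fst' and t[2][0] == 'snd'
            and t[1][1] == t[2][1]):                       # <fst(t), snd(t)> ~> t
        return t[1][1]
    return None

def normalize(t):
    # Repeatedly contract redexes, recursing into subterms.
    r = step(t)
    if r is not None:
        return normalize(r)
    if t[0] == 'lam':
        return ('lam', t[1], normalize(t[2]))
    if t[0] in ('app', 'pair'):
        u = (t[0], normalize(t[1]), normalize(t[2]))
        return u if step(u) is None else normalize(u)
    if t[0] in ('fst', 'snd', 'abort'):
        u = (t[0], normalize(t[1]))
        return u if step(u) is None else normalize(u)
    return t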
Since there is a correspondence between intuitionistic SC and intuitionistic ND, for every derivation in ND there must be a derivation in SC named by the same λ-term.
This correspondence is of course not one-to-one, but many-to-one, i.e. for each proof in ND there are at least potentially different derivations in SC.[On the complications of such a correspondence and also on giving a term-annotated version of SC cf. e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Term-annotated sequent calculi can be found i.a. in <cit.> or <cit.>, from which our presentation is only a notational variant.]
The following are our respective SC-rules, where we use the propositional fragment of an intuitionistic SC with independent contexts <cit.>.
The reduction procedures remain the same as above in ND; β-reduction corresponds to the procedures needed to establish cut-elimination, while η-conversion corresponds to what may be called “identicals-elimination" <cit.> or “identity atomization" <cit.>[Showing that it is possible to get rid of axiomatic sequents with complex formulas and derive them from atomic axiomatic sequents. This is also part of cut-elimination but in principle those are separate procedures <cit.>.]:
Term-annotated G0ip:
Logical axiom:
[Rf]x : A ⊢x : A
Logical rules:
[∧R]Γ, Δ⊢⟨s, t⟩: A ∧BΓ⊢s: A Δ⊢t: B
[∧L]Γ, z: A ∧B ⊢s[[fst(z)/x]snd(z)/y] : CΓ, x: A, y : B ⊢s : C
[∨R_1]Γ⊢s :A ∨BΓ⊢s:A
[∨R_2]Γ⊢s:A ∨BΓ⊢s:B
[∨L]Γ, Δ, z:A ∨B ⊢ {x.s | y.t} : CΓ, x:A ⊢s:C Δ, y:B ⊢t:C
[⊃R]Γ⊢λx.t:A ⊃BΓ, x:A ⊢t:B
[⊃L]Γ, Δ, x:A ⊃B ⊢s[App(x, t)/y]:CΓ⊢t: A Δ, y:B ⊢s:C
[L]x: ⊢abort(x): C
Structural rules:
Weakening:
[W]Γ, x:A ⊢t:CΓ⊢t:C
Contraction:
[C]Γ, x : A ⊢t[x/y] : CΓ, x : A, y : A ⊢t : C
The rule of cut
[cut]Γ, Δ⊢s[t/x] : CΓ⊢t : D Δ, x : D ⊢s : C
is admissible in G0ip.
In the left operational rules as well as in the weakening rule we have the case that variables occur beneath the line that are not explicitly mentioned above the line.
In these cases the variables must be either fresh or - together with the same type assignment - already occurring in the context Γ, Δ, etc.
Same variables can only (but need not) be chosen for the same type, i.e., if a new type occurs in a proof, then a fresh variable must be chosen.
If we would allow to chose the same variable for different types, i.e. for example to let x:A and x:B occur in the same derivation this would amount to assuming that arbitrarily different formulas have the same proof, which is not desirable.
§.§ Identity of proofs and equivalence of derivations
Figuring prominently in the literature on identity of proofs is a conjecture by Prawitz <cit.> that two derivations represent the same proof iff they are equivalent.[Prawitz gives credit for this conjecture to Martin-Löf. Cf. also Martin-Löf <cit.> on this issue, in his terminology “definitional equality".]
This shifts the question of course to asking when two derivations can be considered equivalent.
Using the equational theory of the λ-calculus is one way to provide an answer here: terms on the right and the left hand side of the β- and η-conversions are considered denotationally equal <cit.>.
Hence, two derivations can be considered equivalent iff they are β-η-equal (cf. <cit.>, <cit.>, <cit.>).[There is some discussion about whether η-conversions are indeed identity-preserving. Martin-Löf <cit.> does not think so, for example. Prawitz <cit.> is not clearly decided but writes in the context of identity of proofs it would seem “unlikely that any interesting property of proofs is sensitive to differences created by an expansion". Widebäck <cit.>, relating to results in the literature on the typed λ-calculus like <cit.> and <cit.>, argues for β-η-equality to give the right account of identity of proofs and Girard <cit.> does the same, although he mentions, too, that η-equations “have never been given adequate status" compared to the β-equations.]
The denotation is then seen to be referred to by the term that annotates the formula or sequent to be proven.
We will call this the `end-term' henceforth so that we can cover and compare both ND and SC at once.
So if we have two derivations with essentially different end-terms (in the sense that they are not belonging to the same equivalence class induced by β-η-conversion), we would say that they denote essentially different proofs.
On the other hand, for two ND-derivations, where one reduces to the other (or both reduce to the same), e.g. via normalization, we have corresponding λ-terms, one β-reducible to the other (or both β-reducible to the same term).
In this case we would say that they refer to the same proof.
Prawitz <cit.> stresses that this seems evident since two derivations reducing to identical normal derivations must be seen as equivalent.
Note that we can also have the case that two derivations of the same formula, which would look identical in a non-term-annotated version, here for example of ND, are distinguished on the grounds of our term annotation, like the following two derivations:
2
ND1p ⊃ (p ⊃ (p ∧ p))
ND2p ⊃ (p ⊃ (p ∧ p))
[⊃I^2]λy.λx.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
[⊃I^1]λx.⟨x, y ⟩: p ⊃(p ∧p)
[∧I]⟨x, y ⟩: p ∧p[x : p]^1 [y : p]^2
[⊃I^2]λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
[⊃I^1]λy.⟨x, y ⟩: p ⊃(p ∧p)
[∧I]⟨x, y ⟩: p ∧p[x : p]^2 [y : p]^1
The reason for this is that it is possible to generalize these derivations in different directions, which is made explicit by the variables.
Hence, the first one can be generalized to a derivation of B ⊃ (A ⊃ (A ∧ B)), while the second one generalizes to A ⊃ (B ⊃ (A ∧ B)).[For a more detailed examination of generalization cf. <cit.> or <cit.>.]
So, encoding derivations with λ-terms seems like a suitable method to clarify the underlying structure of proofs.
There is one kind of conversion left, though, that needs consideration, namely what we will call permutative conversions, or also γ-conversions.[It goes under various other names, as well, like permutation/permuting conversions or commuting/commutative conversions. Some also prefer “reductions" but we will go with the - to us seemingly - more neutral “conversions". The term γ-conversions appears in <cit.>. Cf. about these conversions in general e.g. <cit.>: 251-259, <cit.>: Ch. 10, <cit.>, <cit.>.]
They become relevant here because we have disjunction as part of our logical vocabulary.
Prawitz <cit.> was the first to introduce these conversions.
In the conjunction-implication-fragment of intuitionistic propositional logic derivations in normal form satisfy the subformula property, i.e. in a normal derivation 𝒟 of A from Γ each formula is either a subformula of A or of some formula in Γ.
However, with the disjunction elimination rule this property is messed up, since we get to derive a formula C from A ∨ B which is not necessarily related to A or B.
That is why, in order to recover the subformula property, permutation conversions are introduced, which can be presented in their most general form in the following way:
D[∨E]C *A ∨BΓ *CΔ, A *CΘ, B
⇝
[∨E]D *A ∨BΓ D*CΔ, A D*CΘ, B
Whether or not these are supposed to be taken into the same league as β- and η-conversions in matters of identity preservation of proofs is an even bigger dispute than the one mentioned concerning η-conversions.
Prawitz <cit.> says that while there can be no doubt about the `proper reductions' having no influence on the identity of the proof, “[t]here may be some doubts concerning the permutative ∨E-[...]reductions in this connection" but does not go into that matter any further.
Since he needs these reductions to prove his normalization theorem, it seems that he would be inclined not to have too many doubts about identity preservation under the permutative conversions.
Girard <cit.>, on the other hand, does not seem to be convinced, as he says - considering an example of permutation conversion - that we are forced to identify “a priori different deductions" in these cases.
Even though he accepts these conversions for technical reasons, he does not seem to be willing to really identify the underlying proof objects.
Restall[Restall, G. (2017). Proof Terms for Classical Derivations. Article in progress: https://consequently.org/papers/proof-terms.pdf], however, analyzing derivations by assigning to them what he calls “proof terms" rather than λ-terms, considers the derivations above as merely distinct in representation but not in the underlying proof, which on his account is the same for both.
What is more, he does so not only for technical but rather philosophical reasons, since he claims the flow of information from premises to conclusion to be essentially the same.
Lindley <cit.> and Tranchini <cit.> both make a point about the connection between reductions and expansions (although they speak of certain kinds of “generalized" expansions) on the one hand and (“generalized") permutative conversions on the other, claiming that performing a (generalized) expansion on the left hand side of the conversion above followed by a reduction (and possibly α-conversion) just yields the right hand side.
To conclude, if we only consider the ⊃-∧-fragment of intuitionistic propositional logic, β-η-equality is enough, but if we consider a richer vocabulary, it seems to us at least that there are substantial reasons to include permutative conversions in our equational theory.[The consequence for this paper would be of course to add “γ-conversions" to the list of relevant conversions in our definitions about normal forms, identity of denotation, etc.]
We do not aim to make a final judgment on this issue here.
Rather, when we have laid out our distinction about sense and denotation of proofs below, we will consider the matter again and show why it makes no essential difference for our purposes whether we include permutative conversions or not.
§ THE SENSE OF DERIVATIONS
Let us spell out at this point what exactly we will consider as the sense and also again the denotation of a derivation in our approach:
Definition of denotation:
The denotation of a derivation in a system with λ-term assignment is referred to by the end-term of the derivation.
Identity of denotation holds modulo belonging to the same equivalence class induced by the set of α-, β- and η-conversions of λ-terms, i.e. derivations that are denoted by terms belonging to the same equivalence class induced by these conversions are identical, they refer to the same proof object.[We use the more accurate formulation of “belonging to the same equivalence class" here instead of the formulation we used before of two terms “having the same normal form". The reason for this is that while these two properties coincide for most standard cases, they do not necessarily concur when it comes to Lindley's “general permutative conversions" or also to SC in general because in these cases the confluence property is not guaranteed. We want to thank one of the anonymous referees for indicating this important point.]
Definition of sense:
The sense of a derivation in a system with λ-term assignment consists of the set[One could also consider the question whether multi-sets are an even better choice here, which would of course yield a much stronger differentiation of senses. The reason why we consider sets instead of multi-sets is that to us the distinctions brought about by multi-sets, by e.g. a variable occurrence more or less, do not seem to go hand in hand with substantial differences in how inferences are built up.] of λ-terms that occur within the derivation.
Only a derivation made up of applications of correct inference rules, i.e. rules that have reduction procedures, can have sense.
§.§ Change of sense due to reducibility
Concerning a distinction between sense and denotation in the context of proofs, the rare cases where this is mentioned at all deal with derivations one of which is reducible to the other or with λ-terms which are β-convertible to the same term in normal form (cf. <cit.>, <cit.>, Restall 2017, p. 6).
Since Tranchini is the only one to spell out the part about sense in detail, we will briefly summarize his considerations.
As mentioned above, in his account, for a derivation to have sense means that it is made up of applications of correct inference rules.
The question to be asked then is of course what makes up correct inference rules?
Tranchini's answer is that inference rules are correct if they have reduction procedures available, i.e. a procedure to eliminate any maximal formula resulting from an application of an introduction rule immediately followed by an elimination rule of the same connective.
From a PTS point of view, applying reduction procedures can be seen as a way of interpreting the derivation because it aims to bring the derivation to a normal form, i.e. the form in which the derivation represents the proof it denotes most directly <cit.>.[Tranchini does not restrict his examination to derivations that normalize, though, but to the contrary, uses it to analyze non-normalizable derivations, like paradoxical ones.]
So the reduction procedures are the instructions telling us how to identify the denotation of the derivation, which for Tranchini means that they give rise to the sense of the derivation.
If we have two derivations denoting the same proof, for example, one in normal form and the other in a form that can be reduced to the former, we could say in Fregean terminology that they have the same denotation but differ in their sense because they denote the proof in different ways, one directly, the other indirectly.
So, we can take as an example the following two derivations, one in normal and one in non-normal form:
NDp ⊃ p
=1.2em
[r]⊃I
[x : p]
λx.x: p ⊃p
NDnon-normal p ⊃ p
=1.2em
[r]∧E
[r]∧I
[r]⊃I
[x : p]
λx.x: p ⊃p
[r]⊃I
[y : q]
λy.y: q ⊃q
⟨λx.x, λy.y ⟩: (p ⊃p) ∧(q ⊃q)
fst(⟨λx.x, λy.y ⟩): p ⊃p
The latter obviously uses an unnecessary detour via the maximal formula (p ⊃ p) ∧ (q ⊃ q), which is introduced by conjunction introduction and then immediately eliminated again, thus, producing different and more complex terms than the former derivation.
The derivation can be easily reduced to the former, though, which can be also seen by β-reducing the term denoting the formula to be proven:
fst(⟨λx.x, λy.y ⟩)
⇝λx.x
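In the toy tuple encoding sketched earlier, this reduction can be replayed mechanically (assuming the normalize function from that sketch is in scope):

# fst(<lambda x.x, lambda y.y>) normalizes to lambda x.x
detour = ('fst', ('pair', ('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y'))))
assert normalize(detour) == ('lam', 'x', ('var', 'x'))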
We can also give an example analogous to the one above, where a non-normal term (highlighted in bold) in SC is created by using the cut rule:[Note however, that the connection between the application of cut and the resulting non-normal term is necessary but not sufficient, i.e. there can be applications of cut not creating a non-normal term. A non-normal term is produced if both occurrences of the cut formula in the premises are principal.]
SC⊢ (p ∧ p) ⊃ (p ∨ p)
=1.2em
[r]⊃R
[r]∨R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y) : (p ∧p) ⊃(p ∨p)
SCcut⊢ (p ∧ p) ⊃ (p ∨ p)
=1.2em
[r]⊃R
[r]∨R
[r]cut
[r]C
[r]∧R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
x : p, z : p ⊢z : p
y : p ∧p ⊢snd(y) : p
y : p ∧p, y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p
y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
y : p ∧p ⊢fst⟨fst(y), snd(y)⟩: p
y : p ∧p ⊢fst⟨fst(y), snd(y)⟩ : p ∨p
⊢λy.fst⟨fst(y), snd(y)⟩ : (p ∧p) ⊃(p ∨p)
λy.fst⟨fst(y), snd(y)⟩
⇝λy.fst(y)
In this case again the two derivations are essentially the same because the latter can be reduced to the former by eliminating the application of the cut rule.
Again, the proof object they represent is thus the same, only the way of making the inference, represented by the different terms occurring within the derivation, differs, i.e. the sense is different.
§.§ Change of sense due to rule permutations
So far we only considered the case in which there is an identity of denotation but a difference in sense of derivations due to one being represented by a λ-term in non-normal form reducible to one in normal form.
However, we want to show that this is not the only case where we can make such a distinction.
This is also the reason why our approach differs from Tranchini's (who works solely in an ND system) in how we grasp the notion of sense of a derivation.
Following Tranchini, the derivation having sense at all depends on there being reduction procedures available for the rules that are applied in it.
Since we are also interested in a comparison of sense-and-denotation relations between ND and SC systems, our approach requires that there are reduction procedures available for the created terms.
Thereby we will be able to cover both systems at once.
Encoding the proof systems with λ-terms also makes the connection between changing the order of the rule applications and the sense-and-denotation distinction transparent, which is the other case we want to cover.
In ND with disjunction rules it is possible to have rule permutations producing derivations with end-terms identifiable by means of the permutative conversions.
In SC, however, there are more cases of rule permutations possible.
When the left disjunction rule is involved, this also leads to different - though γ-equal - terms; with the left conjunction or implication rule the end-term remains completely unchanged.
Consider e.g. the following three derivations in SC of the same sequent ⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)):
SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∧L
[r]W
[r]∨R
[r]Rf
q ⊢q
q ⊢p ∨q
q, r ⊢p ∨q
q ∧r ⊢p ∨q
[r]∧L
[r]W
[r]∨R
[r]Rf
r ⊢r
r ⊢p ∨r
q, r ⊢p ∨r
q ∧r ⊢p ∨r
q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r)
q ∧r ⊢(p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
p, p ⊢(p ∨q) ∧(p ∨r)
p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∨R
[r]∧L
[r]W
[r]Rf
q ⊢q
q, r ⊢q
q ∧r ⊢q
q ∧r ⊢p ∨q
[r]∨R
[r]∧L
[r]W
[r]Rf
r ⊢r
q, r ⊢r
q ∧r ⊢r
q ∧r ⊢p ∨r
q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r)
q ∧r ⊢(p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
p, p ⊢(p ∨q) ∧(p ∨r)
p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
q ⊢q
q ⊢p ∨q
q, r ⊢p ∨q
q ∧r ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
(q ∧r) ∨p ⊢p ∨q
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
r ⊢r
r ⊢p ∨r
q, r ⊢p ∨r
q ∧r ⊢p ∨r
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
(q ∧r) ∨p ⊢p ∨r
(q ∧r) ∨p, (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
The difference between SC1 and SC2 (highlighted in bold) is that the order of applying the right disjunction rule and the left conjunction rule is permuted.
The difference between SC1 and SC3 (highlighted with underlining) is that the order of applying the right conjunction rule and the left disjunction rule is permuted.
The order of applying the right disjunction rule and the left conjunction rule stays fixed this time.
Encoded with λ-terms, though, we see that in the first case, comparing SC1 and SC2, the permutation of rule applications produces exactly the same end-term.
Both derivations have the same end-term, namely:
λ u. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩}
SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∧L
[r]W
[r]∨R
[r]Rf
y : q ⊢y : q
y : q ⊢y : p ∨q
y : q, z : r ⊢y : p ∨q
v : q ∧r ⊢fst(v) : p ∨q
[r]∧L
[r]W
[r]∨R
[r]Rf
z : r ⊢z : r
z : r ⊢z : p ∨r
y : q, z : r ⊢z : p ∨r
v : q ∧r ⊢snd(v): p ∨r
v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r)
⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∨R
[r]∧L
[r]W
[r]Rf
y : q ⊢y : q
y : q, z : r ⊢y : q
v : q ∧r ⊢fst(v) : q
v : q ∧r ⊢fst(v) : p ∨q
[r]∨R
[r]∧L
[r]W
[r]Rf
z : r ⊢z : r
y : q, z : r ⊢z : r
v : q ∧r ⊢snd(v) : r
v : q ∧r ⊢snd(v): p ∨r
v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r)
⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
Considering the second comparison between SC1 and SC3 the situation is different: here the permutation of rule applications leads to a different end-term.
In the end-term for SC1 and SC2 the pairing operation is embedded within the case expression, whereas in the end-term for SC3 the case expression is embedded within the pairing:
λ u.⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩
SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
y : q ⊢y : q
y : q ⊢y : p ∨q
y : q, z : r ⊢y : p ∨q
v : q ∧r ⊢fst(v) : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
u : (q ∧r) ∨p ⊢ {v.fst(v) | x.x} : p ∨q
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
z : r ⊢z : r
z : r ⊢z : p ∨r
y : q, z : r ⊢z : p ∨r
v : q ∧r ⊢snd(v) : p ∨r
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
u : (q ∧r) ∨p ⊢ {v.snd(v) | x.x}: p ∨r
u : (q ∧r) ∨p, u : (q ∧r) ∨p ⊢⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: (p ∨q) ∧(p ∨r)
⊢λu.⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
When we take a look at how the term-annotated rules must be designed in order to have a correspondence to the respective rules in ND, we see why some permutations of rule applications lead to different end-terms, while others do not; and why SC is in general more flexible in this respect than ND.
In SC the left conjunction rule as well as the left implication rule are substitution operations, i.e. they can change their place in the order of rule applications without affecting the basic term structure, because they merely substitute terms within the inner term structure.[For ⊃L the only exception is when an application of this rule is permuted with an application of ∨L, which creates a different, though γ-convertible term.]
In ND, on the other hand, there are no substitution operations used in the term assignment, i.e. for each rule application a new basic term structure is created.
How is this related to the distinction between sense and denotation?
In cases like SC1 vs. SC2 the way the inference is given differs, which can also be seen in the terms annotating the formulas occurring within the derivations: with otherwise identical terms in the two derivations, y and z occur only in SC1, while fst(v) and snd(v) occur only in SC2.
However, the resulting end-term stays the same, thus, we would describe the difference between these derivations as a difference in sense but not in denotation.
In other cases, when disjunction elimination or the left disjunction rule is involved, permutation of rule applications can lead to a different end-term, as we see above in SC1 vs. SC3.
Whether this corresponds to a difference in denotation depends on whether we accept γ-conversions to be identity-preserving.
What all cases have in common, though, is that rule permutation always leads to a difference in sense of the given derivations because the sets of terms occurring within the derivations differ from each other.
§.§ Philosophical motivation
Let us have a look at how the Fregean conception of sense is received in the literature in order to show the philosophical motivation for adopting such a definition of sense for derivations.
According to Dummett <cit.>, Fregean sense is to be considered as a procedure to determine its denotation.[This idea of sense as procedures also occurs in more recent publications like <cit.> or <cit.>.]
Girard <cit.>, in a passage about sense and denotation and the relation between proofs and programs, mentions that the sense is determined by a “sequence of instructions". If we view terms in this context as representing programs, and “the purpose of a program [...] to calculate [...] its denotation" (ibid., p. 17), then it seems plausible to view the terms occurring within the derivation, which decorate the intermediate steps in the construction of the complex end-term annotating the conclusion, as the sense of that derivation.
Tranchini holds the reduction procedures to be the sense because these `instructions' lead to the term in normal form.
However, in our framework, because we do not only consider normal vs. non-normal cases, it seems more plausible to look at the exact terms occurring within the derivations and to view them as representing the steps in the process of construction: they encode how the derivation is built up and lead us to the denotation, the end-term.
For us, containing only terms for which reduction procedures are available is therefore merely a necessary requirement for a derivation to have sense; it does not by itself make up that sense.
In the case of rule permutation we can then say that the proof is essentially the same but the way it is given to us, the way of inference, differs: i.e. the sense differs.
This can be read off from the set of terms that occur within the derivation: they end up building the same end-term, but the way it is built differs, the procedures to determine the denotation differ.
Thus, this allows us to compare differences in sense within one proof system as well as over different proof systems.
Troelstra and Schwichtenberg <cit.>, for example, give an example of two derivations in SC producing the same end-term in different ways, to show that from the variables and the end-term alone we cannot read off how the derivation is built up:[For simplicity we omit the weakening steps that would, strictly speaking, have to precede the applications of the ∧L-rule.]
SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q))
[r]⊃R
[r]⊃R
[r]∧L
[r]∧L
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : q ⊢y : q
x : p, y : q ⊢⟨x, y ⟩: p ∧q
x : p, z : q ∧r ⊢⟨x, fst(z) ⟩: p ∧q
u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q
u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q)
⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q))
SC2⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q))
[r]⊃R
[r]⊃R
[r]∧L
[r]∧L
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : q ⊢y : q
x : p, y : q ⊢⟨x, y ⟩: p ∧q
u : s ∧p, y: q ⊢⟨snd(u), y ⟩: p ∧q
u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q
u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q)
⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q))
The senses of these derivations would be the following:
Sense of SC1:
{x, y, z, u, ⟨ x, y ⟩, ⟨ x, fst(z) ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩,
λ u.λ z.⟨ snd(u), fst(z) ⟩}
Sense of SC2:
{x, y, z, u, ⟨ x, y ⟩, ⟨ snd(u), y ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩,
λ u.λ z.⟨ snd(u), fst(z) ⟩}
The two sets only differ with regard to the underlined terms, otherwise they are identical.
Thus, they only differ in the order in which the two left conjunction rules are applied.
For the resulting end-term this is inessential, but we can see that when taking the sense, and not only the end-terms, i.e. the denotation, into account, it is indeed possible to read off the structure of the derivations.
As noted above (examples on p. 6), the term annotation of the calculi makes this structure of derivations explicit so that we can differentiate between derivations which would otherwise look identical.
As several authors point out, this is a desirable feature if one is not only interested in mere provability but wants to study the structure of the derivations in question (cf. <cit.>, <cit.>) and also, for simplicity, if one wants to compare proof systems of ND and SC with each other <cit.>.
Since we are interested in both of these points, it seems the right choice for our purposes to consider the annotated versions of the calculi and that is also why these annotated versions are indeed needed for our notions of sense and denotation.
Of course, one could argue that the underlying structure is still the same in the non-annotated versions and can be made explicit by other means, too, like showing the different generalizations of the derivations, but still, we do not see how in these calculi our notions could be easily applied.
Another issue that needs to be considered is the one of identity of senses, i.e. synonymy.
Therefore, we want to extend our definition of sense given above with an addition:
If a sense-representing set can be obtained from another by uniformly replacing (respecting the usual capture-avoiding conventions) any occurrence of a variable, bound or free, by another variable of the same type, they express the same sense.
What we ensure with this point is just that it does not (and should not) matter which variables one chooses for which proposition as long as one does it consistently.
So, it does not make a difference whether we have
2
ND1p ⊃ (q ⊃ p)
[r]⊃I
[r]⊃I
[x : p]
λz.x: q ⊃p
λx. λz.x: p ⊃(q⊃p)
Sense1: {x, λ z.x, λ x. λ z.x}
or
2
ND2p ⊃ (q ⊃ p)
[r]⊃I
[r]⊃I
[y : p]
λz.y: q ⊃p
λy. λz.y: p ⊃(q⊃p)
Sense2: {y, λ z.y, λ y. λ z.y}
Sense1 and Sense2 represent the same sense.
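To make this renaming criterion concrete, the following small Python sketch (an illustrative aid of ours, not part of the calculi above) checks whether two sense-representing sets of string-encoded terms coincide up to a uniform renaming of variables; it assumes single-letter variables and no other ASCII letters in the terms, so it covers examples like Sense1 and Sense2 but not terms containing fst or snd.

import re
from itertools import permutations

def variables(terms):
    # variables are assumed to be single ASCII letters
    return sorted(set(re.findall(r"[a-z]", " ".join(terms))))

def rename(term, mapping):
    return re.sub(r"[a-z]", lambda m: mapping[m.group(0)], term)

def same_sense(sense1, sense2):
    # identical up to a uniform renaming of variables (types are ignored here)
    v1, v2 = variables(sense1), variables(sense2)
    if len(v1) != len(v2) or len(sense1) != len(sense2):
        return False
    return any({rename(t, dict(zip(v1, p))) for t in sense1} == set(sense2)
               for p in permutations(v2))

sense1 = {"x", "λz.x", "λx.λz.x"}
sense2 = {"y", "λz.y", "λy.λz.y"}
print(same_sense(sense1, sense2))  # True: ND1 and ND2 express the same sense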
Or, to give another example (pointed out by one of the anonymous referees) where free variables occur within the derivation but do not appear in the end-term: if one replaced all occurrences of the free variable y by the variable w in derivation SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q)) (cf. above), this would make no difference to the sense according to our definition, since the one sense-representing set would be obtained from the other by replacing y with w.
This also fits the Fregean criterion of two sentences' identical sense, as Sundholm <cit.> depicts it within a broader analysis: two propositions express the same sense if it is not possible to hold different epistemic attitudes towards them, i.e. “if one holds the one true, one also must hold the other one true, and vice versa".
Whereas, if we have two sentences which only differ in two singular terms, referring to the same object but differing in sense, we can easily hold the one sentence to be true, while thinking the other is false, if we do not know that they are referring to the same object.
With proofs it is the same: looking at ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) we may not know whether the derivation is valid or not; we do know, however, that if one is a valid derivation then so is the other.
With derivations differing in sense this is not so straightforward.
For Frege this point of considering cases where intensionality is directed towards sentences was crucial to develop his notion of sense, so the question arises how we can explain cases of intensionality directed towards proofs with our notions of sense and denotation.
Let us suppose we have two denotationally-identical proofs which are represented by two different derivations 𝒟 and 𝒟'.
In this case it could happen that a (rational) person believes that derivation 𝒟 is valid but does not believe that derivation 𝒟' is valid.
How can we account for that?
One explanation would be of course to point to the difference in linguistic representation.
After all, it can just be the case that one way of writing down a proof is more accessible to the person than another (they may not be familiar with a certain proof system, for example).
This would amount to letting the linguistic representation, the signs, collapse with the sense of a derivation.
However, then we would have no means to distinguish this case from cases in which we want to argue that it is not justified for a rational person to have different propositional attitudes towards propositions which are about derivations differing insignificantly from each other, like in the cases of ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) above.
For Frege <cit.> the referent of an expression in an intensional context is not its customary referent, i.e. the object it refers to or the truth value in the case of sentences, but its customary sense.
Here the situation is the same: What is referred to in such a setting, when speaking about the attitudes of a person towards propositions about derivations, is not the proof objects (which are identical in our situation) but their senses, which are in this context represented by the sets of terms encoding the steps of construction.
It seems plausible then to say that when the construction steps differ in two derivations, a person can have different attitudes towards propositions about them, because the different construction steps may lead to this person grasping the one derivation, while not understanding the other.
§ ANALOGY TO FREGE'S CASES
Let us finally compare how our conception of sense and denotation in the context of proofs fits the distinction Frege came up with for singular terms and sentences.
We can have the following two cases with Frege's distinction: firstly (cf. <cit.>), there can be different signs corresponding to exactly one sense (and then of course also only one denotation).
In the case of singular terms an example would be “Gottlob's brother” and “the brother of Gottlob".
The sense, the way the denoted individual object is given to us, is the same because there is only a minor grammatical difference between the two expressions.
More frequently, this occurs in comparing different languages, though, taking singular terms which express exactly the same sense only using different words, like “the capital of France" and “die Hauptstadt Frankreichs".
In the case of sentences an example would be changing from an active to a passive construction without changing the emphasis of the sentence; an example from Frege is the following: “M gave document A to N", “Document A was given to N by M" <cit.>.
In the case of proofs, finally, an example would be the following case:
ND(p∨ p) ⊃ (p∧ p)
[r]⊃I^3
[r]∧I
[r]∨E^1
[y : p ∨p]^3
[x : p]^1
[x : p]^1
{x.x | x.x} : p
[r]∨E^2
[y : p ∨p]^3
[x : p]^2
[x : p]^2
{x.x | x.x} : p
⟨ {x.x | x.x}, {x.x | x.x}⟩: p ∧p
λy.⟨ {x.x | x.x}, {x.x | x.x}⟩ : (p ∨p) ⊃(p ∧p)
SC⊢ (p∨ p) ⊃ (p ∧ p)
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]Rf
x : p ⊢x : p
[r]Rf
x : p ⊢x : p
y : p ∨p ⊢ {x.x | x.x} : p
[r]∨L
[r]Rf
x : p ⊢x : p
[r]Rf
x : p ⊢x : p
y : p ∨p ⊢ {x.x | x.x}: p
y : p ∨p , y : p ∨p ⊢⟨ {x.x | x.x}, {x.x | x.x} ⟩: p ∧p
y : p ∨p ⊢⟨ {x.x | x.x}, {x.x | x.x}⟩: p ∧p
⊢λy.⟨ {x.x | x.x}, {x.x | x.x}⟩: (p ∨p) ⊃(p ∧p)
Sense:
{x, y, {x.x | x.x}, ⟨ {x.x | x.x}, {x.x | x.x}⟩,
λ y.⟨ {x.x | x.x}, {x.x | x.x}⟩}
Or to give another example:
NDp ⊃ (p ⊃ (p ∧ p))
[r]⊃I^2
[r]⊃I^1
[r]∧I
[x : p]^2
[y : p]^1
⟨x, y ⟩: p ∧p
λy.⟨x, y ⟩: p ⊃(p ∧p)
λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
SC⊢ p ⊃ (p ⊃ (p ∧ p))
[r]⊃R
[r]⊃R
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : p ⊢y : p
x : p, y : p ⊢⟨x, y ⟩: p ∧p
x : p ⊢λy.⟨x, y ⟩: p ⊃(p ∧p)
⊢λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
Sense: {x, y, ⟨ x, y ⟩, λ y.⟨ x, y ⟩, λ x.λ y.⟨ x, y ⟩}
In these cases derivations can consist of different signs, namely by having one representation in SC and one in ND, which do not differ in sense nor in denotation, since they both contain exactly the same terms and produce the same end-term.
This comparison between different proof systems seems to fit nicely with Frege's <cit.> comment on “the same sense ha[ving] different expressions in different languages".
However, as we have seen above with the examples ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p), this case can also occur within the same proof system.
One could wonder whether there should not be a differentiation between the senses of the derivations in the first example since it seems that different rules are applied: in SC⊢ (p∨ p) ⊃ (p ∧ p) we have an application of contraction, which we do not have in ND(p∨ p) ⊃ (p∧ p).
This would also question whether our definition of sense distinguishes and identifies the right amount of cases.
We do believe that this is the case, though, because in the first example, where there is an application of the contraction rule in SC, there is also a multiple assumption discharge in the ND-derivation, which is generally seen as the corresponding procedure, just as cases of vacuous discharge of assumptions in ND correspond to the application of weakening in SC.
So just as different languages naturally do not use exactly the same expressions, here too the rules differ from ND to SC; but since the corresponding procedures are used, one can argue that the sense does not differ for that reason.
Another case that can occur according to Frege (ibid.) is that we have one denotation, i.e. one object a sign refers to, but different senses.
An example for this would be his famous “morning star" and “evening star" comparison, where both expressions refer to the same object, the planet Venus, but the denoted object is given differently.
On the sentence level this would amount to exchanging singular terms in a sentence by ones which have the same denotation: “The morning star is the planet Venus" and “The evening star is the planet Venus".
The denotation of the sentence - with Frege: its truth value - thus stays the same, only the sense of it differs, the information is conveyed differently to us.
For our proof cases we can say that this case is given when we have syntactically different derivations, be it in one or in different proof systems, which have end-terms belonging to the same equivalence class induced by the set of α-, β- and η-conversions.
Thus, examples would be corresponding proofs in ND and SC, which share the same end-term, but contain different terms occurring within the derivations.
The reason for this seems to be that SC often requires more variables than ND.
If we compare derivations within ND, one definite case in which we have the same denotation but a different sense is between equivalent but syntactically distinct derivations, e.g. non-normal and normal derivations, one reducible to the other.
Another case up for debate would be the one with rule permutations due to disjunction elimination.
Within SC we can have two cases: one due to rule permutation, one due to applications of cut.
For the first case, where the inference could be given in a different way, although ending on the same term, we gave examples above (cf. p. 12 and 14f.).
However, it is worth mentioning that our distinction still captures the usual distinction, the second case, where it is said that two derivations, one containing cut and the other one in cut-free form (as a result of cut-elimination applied to the former), have the same denotation but differ in sense:
SC⊢ (p∧ p) ⊃ (p∨ p)
[r]⊃R
[r]∨R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p∧p ⊢fst(y) : p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y) : (p ∧p) ⊃(p ∨p)
Sense: {z, x, y, fst(y), fst(y), λ y.fst(y)}
SCcut⊢ (p∧ p) ⊃ (p∨ p)
[r]⊃R
[r]cut
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
[r]∨R
[r]Rf
z : p ⊢z : p
z : p ⊢z: p ∨p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y): (p ∧p) ⊃(p ∨p)
Sense: {z, x, y, fst(y), z, fst(y), λ y.fst(y)}
As mentioned above (fn 14), cut does not need to create a non-normal term, as is the case here, but still any application of cut will necessarily change the sense of a derivation as opposed to its cut-free form.
Finally, cases that need to be avoided in a formal language according to Frege <cit.> would be to have one sign, corresponding to different senses, or on the other hand, one sense corresponding to different denotations.
As he mentions, these cases of course occur in natural languages but should not happen in formal ones, so it should also not be possible in our present context, for sure.
Fortunately, this cannot happen in the context of our annotated proof systems, either, since the signs (taken to be the derivation as it is written down) always express at most one sense in our annotated system, and likewise the sense always yields a unique denotation since the end-term is part of the sense-denoting set.[Another question would be whether there can be signs without any sense at all. Frege <cit.> dismisses this case, as well, with a remark that we need at least the requirement that our expressions are “grammatically well-formed". Tranchini <cit.> gives a good analogy pointing to the notorious connective playing this role in the case of proofs.]
§ CONCLUSION
The context in which Frege considered sense and denotation was the context of identity.
Likewise, we argued in this paper, if we use term-annotated calculi, we can also say something about proof identity: identity of proofs over different calculi or within the same calculus consists in having end-terms that belong to the same equivalence class induced by the set of α-, β- and η-conversions.
In ND this can happen when we have the same proof in normal and non-normal form, in SC this can happen when we have the same proof using cut and in cut-free form but also when there are forms of rule permutations where an application of the ∧L-rule or the ⊃L-rule switches place with another rule.
Including disjunction in our language creates for both calculi the additional question of whether rule permutations including disjunction elimination (resp. the left disjunction rule) lead to a different proof, or whether these proofs should be identified.
We are more interested in sense, however, and here we can conclude that what in all these cases changes is the sense of the derivation in question.
Finally, considering the question of identity of sense, i.e. synonymy, and trying to follow Frege's conception on this matter, too, we can say the following: if two derivations are supposed to be identical in sense, this means that the way the inference is given is essentially the same, so the set of terms building up the end-term must be the same.
The end-term itself does not necessarily tell us anything about the structure of the proof.
Sense, on the other hand, is more fine-grained in that the set of terms occurring within the derivation reflects how the derivation is built up.
Especially in SC, where we can have different orders of rule applications leading up to the same end-term, the sense gives us means to distinguish on a more fine-grained level.
BarendregtGhilezan Barendregt, H., & Ghilezan, S. (2000). Lambda terms for natural deduction, sequent calculus and cut elimination. Journal of Functional Programming, 10(1), 121–134.
Groote De Groote, P. (1999). On the Strong Normalisation of Natural Deduction with Permutation-Conversions. In P. Narendran, & M. Rusinowitch (Eds), Rewriting Techniques and Applications: RTA 1999 (pp. 45–59). Berlin/Heidelberg: Springer.
Dosen2003 Došen, K. (2003). Identity of Proofs Based on Normalization and Generality. Bulletin of Symbolic Logic, 9, 477–503.
Dosen2008 Došen, K. (2008). Cut Elimination in Categories. Springer.
Dummett Dummett, M. (1973). Frege: Philosophy of Language. New York: Harper & Row.
DJM Duží, M., Jespersen, B., & Materna, P. (2010). Procedural Semantics for
Hyperintensional Logic: Foundations and Applications of Transparent Intensional Logic. Springer.
Francez Francez, N. (2017). On harmony and permuting conversions. Journal of Applied Logic, 21, 14–23.
Frege1 Frege, G. (1948) [1892]. Sense and Reference. The Philosophical Review, 57(3), 209–230.
Frege2 Frege, G. (1979). Posthumous Writings. Oxford: Basil Blackwell.
Friedman Friedman, H. (1975). Equality between functionals. In R. Parikh (Ed.), Logic colloquium: Lecture notes in mathematics 453 (pp. 23–37). Berlin/Heidelberg: Springer.
Girard Girard, J.-Y. (1989). Proofs and Types. Cambridge: Cambridge University Press.
Hacking Hacking, I. (1979). What is Logic? The Journal of Philosophy, 76(6), 285–319.
Herbelin Herbelin, H. (1994). A Lambda-calculus Structure Isomorphic to Gentzen-style Sequent Calculus Structure. Computer Science Logic, 61–75.
Kreisel Kreisel, G. (1971). A survey of proof theory II. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 109–170). Amsterdam: North-Holland.
Lindley Lindley, S. (2007). Extensional Rewriting with Sums. In S. Ronchi Della Rocca (Ed.), Typed Lambda Calculi and Applications: TLCA 2007 (pp. 255–271). Berlin/Heidelberg: Springer.
M-L Martin-Löf, P. (1975). About Models for Intuitionistic Type Theories and the Notion of Definitional Equality. In S. Kanger (Ed.), Proceedings of the Third Scandinavian Logic Symposium (pp. 81–109). Amsterdam: North-Holland.
Muskens Muskens, R. (2005). Sense and the Computation of Reference. Linguistics and Philosophy, 28(4), 473–504.
NegrivonPlato Negri, S., & von Plato, J. (2001). Structural Proof Theory. Cambridge/New York: Cambridge University Press.
Pfenning Pfenning, F. (2000). Structural Cut Elimination: I. Intuitionistic and Classical Logic. Information and Computation, 157, 84–141.
Pottinger Pottinger, G. (1977). Normalization as a homomorphic image of cut-elimination. Annals of Mathematical Logic, 12, 323–357.
Prawitz1965 Prawitz, D. (1965). Natural Deduction. Stockholm: Almqvist & Wiksell.
Prawitz1971 Prawitz, D. (1971). Ideas and results in proof theory. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 235–307). Amsterdam: North-Holland.
SU Sørensen, M., & Urzyczyn, P. (2006). Lectures on the Curry-Howard Isomorphism. Amsterdam: Elsevier Science.
Statman Statman, R. (1983). λ-definable functionals and βη conversion. Archiv für Mathematische Logik, 23, 21–26.
Sundholm Sundholm, G. (1994). Proof-Theoretical Semantics and Fregean Identity Criteria for Propositions. The Monist, 77(3), 294–314.
Tranchini2016 Tranchini, L. (2016). Proof-theoretic semantics, paradoxes and the distinction between sense and denotation. Journal of Logic and Computation, 26(2), 495–512.
Tranchini2018 Tranchini, L. (2018). Stabilizing Quantum Disjunction. Journal of Philosophical Logic, 47, 1029–1047.
TS Troelstra, A., & Schwichtenberg, H. (2000). Basic Proof Theory. 2nd ed., Cambridge: Cambridge University Press.
Urban Urban, C. (2014). Revisiting Zucker's Work on the Correspondence Between Cut-Elimination and Normalisation. In L. Pereira, E. Haeusler, & V. de Paiva (Eds), Advances in Natural Deduction: A Celebration of Dag Prawitz's Work (pp. 31–50). Dordrecht: Springer.
Wideback Widebäck, F. (2001). Identity of Proofs. Stockholm: Almquist & Wiksell International.
Zucker Zucker, J. (1974). The correspondence between cut-elimination and normalization. Annals of Mathematical Logic, 7, 1–112.
|
http://arxiv.org/abs/2307.04010v1 | 20230708164045 | Understanding the Efficacy of U-Net & Vision Transformer for Groundwater Numerical Modelling | [
"Maria Luisa Taccari",
"Oded Ovadia",
"He Wang",
"Adar Kahana",
"Xiaohui Chen",
"Peter K. Jimack"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cs.CE",
"cs.LG"
] |
School of Civil Engineering, University of Leeds, Leeds, UK. Email: [email protected].
Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel.
School of Computing, University of Leeds, Leeds, UK.
Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel.
School of Civil Engineering, University of Leeds, Leeds, UK.
School of Computing, University of Leeds, Leeds, UK.
This paper presents a comprehensive comparison of various machine learning models, namely U-Net <cit.>, U-Net integrated with Vision Transformers (ViT) <cit.>, and Fourier Neural Operator (FNO) <cit.>, for time-dependent forward modelling in groundwater systems. Through testing on synthetic datasets, it is demonstrated that U-Net and U-Net + ViT models outperform FNO in accuracy and efficiency, especially in sparse data scenarios. These findings underscore the potential of U-Net-based models for groundwater modelling in real-world applications where data scarcity is prevalent.
§ INTRODUCTION
Groundwater numerical models, such as MODFLOW <cit.>, are crucial for water resource management, although they are computationally demanding. To alleviate this, surrogate modelling through data-driven methods offers efficient approximations of these complex numerical techniques.
Neural Operators <cit.>, particularly the Fourier Neural Operator (FNO) <cit.>, have been at the forefront of recent advances, having shown potential to approximate arbitrary continuous functions.
However, the computational demand of FNO is particularly high during the training phase, while these neural operators require architectural enhancements to deliver promising results in subsurface problems <cit.>. This is evident in the work of Wen et al. <cit.>, where the integration of FNO with a U-Net architecture showed improved accuracy, speed, and data efficiency in multiphase flow problems. However, Gupta and Brandstetter's work <cit.>, showing that U-Net outperforms FNOs across various fluid mechanics problems, raises the question of whether neural operators are necessary when the vanilla U-Net architecture already exhibits remarkable performance.
Recently, transformers <cit.> have seen considerable success in various fields, including physical systems <cit.>, for which the datasets are typically smaller compared to other domains. Only one study explores the use of transformers in groundwater modeling <cit.>, demonstrating that the models were outperformed by both GRU and LSTM models to predict groundwater levels across various stations in France with meteorological and hydrological data.
Finally, the integration of U-Net with Transformers, as exemplified in studies like TransUNet <cit.> and ViTO <cit.>, has demonstrated their utility across a broad range of applications, particularly in the field of medical image segmentation and operator learning for inverse PDE problems. Yet, the applicability of these combinations in addressing time-dependent forward problems, real-world data scenarios, and in situations with sparse data, remain areas yet to be fully explored.
Several studies, such as the one by Brakenhoff et al. <cit.>, primarily focus on individual time series when analysing the impact of various hydrological stressors, including pumping rates, precipitation excess, and river stage variations, on groundwater levels of individual monitoring wells. While this approach provides valuable insights, it does not account for spatial correlations, thereby limiting its use to existing time series or monitoring wells. Similarly, previous comparisons have been predominantly limited to specific models like LSTM, CNNs and NARX in the context of groundwater level forecasting <cit.>, leaving room for broader explorations.
In this paper, we present a comprehensive comparison among models—specifically U-Net, U-Net integrated with Vision Transformers (U-Net+ViT), and Fourier Neural Operator (FNO)—for their efficacy in modeling time-dependent forward and inverse problems in groundwater systems. We test our model extensively on synthetic datasets, simulating conditions from the Overbetuwe region in the Netherlands, including sparse data scenarios. We show that both U-Net and U-Net+ViT are particularly well-suited to these important sparse data scenarios, with the addition of the Transformer providing enhanced predictive capability in many cases.
§ METHODOLOGY
§.§ Example of study site and data
This subsection provides context and rationale for our study via an example case study based upon the polder region of Overbetuwe in the Netherlands (Figure <ref>). This region showcases the characteristic Dutch system of water management where the area is divided into several polders in a mix of agriculture, nature, and urban environments. Alongside its sparse data and heterogeneous soil, these unique characteristics underscore the inherent complexities of water management in similar settings, making this dataset a suitable choice for our research. The subsoil is primarily composed of clay and sandy clay, with soil properties being determined via borehole and cone penetration tests. The study area features numerous observation wells for monitoring groundwater heads while well fields (indicated as groundwater usage facilities in the figure) are utilized for the extraction of drinking water. The work of Brakenhoff et al. <cit.> considers a dataset consisting of 250 head time series, with daily recordings starting from the year 1990 and drawdown attributed to the extraction from up to four well fields.
For the purposes of this study, we employ synthetic data to validate the proposed methodology, with the intention to subsequently apply the validated method to the real-world data of the Overbetuwe region. Figure <ref> represents a sample of the high-fidelity labeled dataset, which is constructed using the U.S. Geological Survey (USGS) finite-difference flow model, MODFLOW. The model is composed of a single-layer representation of a confined aquifer with a 128×128 grid.
The aquifer's heterogeneity is reflected through varying horizontal hydraulic conductivity within the bounds k ∈[0.1, 0.5] m/d. The hydraulic conductivity fields in our study are created using random fields which are then thresholded to delineate different classes.
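For illustration, a thresholded random conductivity field of this kind can be generated as in the following sketch; the correlation length, the number of classes and the smoothing-based construction are our assumptions, since the text above only specifies thresholded random fields with values in [0.1, 0.5] m/d.

import numpy as np
from scipy.ndimage import gaussian_filter

def conductivity_field(n=128, classes=(0.1, 0.3, 0.5), corr=8.0, seed=0):
    # smoothed white noise, thresholded at quantiles into conductivity classes
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal((n, n)), sigma=corr)
    edges = np.quantile(field, np.linspace(0.0, 1.0, len(classes) + 1)[1:-1])
    return np.asarray(classes)[np.digitize(field, edges)]

k = conductivity_field()  # 128 x 128 array with values 0.1, 0.3 or 0.5 m/d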
A maximum of ten pumping wells are extracting water with variable rates in the range Q ∈[0, 30] m^3/d over a simulation period of T = 10 days. The pumping wells are located in random locations which vary for each sample. The boundary conditions are delineated as Dirichlet, with the head equal to zero, mimicking a polder encircled by ditches where a stable water level is maintained through a comprehensive network of pumping stations.
The datasets consist of N_train = 5000 training instances and N_test = 1000 testing instances. To mirror the inherent sparsity of real-world data, a data selection strategy is adopted for the test dataset. The locations of the boreholes for estimating the hydraulic conductivity are chosen following a radial distribution pattern, and a helical pattern is used for the wells monitoring hydraulic head (Figure <ref>).
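One possible way to place the monitoring wells along such a helical pattern on the 128×128 grid is sketched below; the number of wells and the number of turns are illustrative assumptions.

import numpy as np

def helical_indices(n_wells=30, grid=128, turns=3.0):
    # grid indices along a spiral centred in the domain
    t = np.linspace(0.05, 1.0, n_wells)
    r = 0.45 * grid * t
    phi = 2.0 * np.pi * turns * t
    ix = np.clip(np.round(grid / 2 + r * np.cos(phi)).astype(int), 0, grid - 1)
    iy = np.clip(np.round(grid / 2 + r * np.sin(phi)).astype(int), 0, grid - 1)
    return ix, iy

rows, cols = helical_indices()  # positions at which the hydraulic head is observed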
§.§ Architectures
The architectures of the three models under comparison in this study encompass the U-Net structure, a U-Net with attention mechanism in the bottleneck, and the Fourier neural operator (FNO).
The U-Net architecture is designed with an encoder-decoder structure where the decoder receives the upsampled feature map, which is then concatenated with the corresponding feature map from the encoder through a skip connection. Detailed diagrams of the U-Net encoder and decoder can be found in Figures <ref> and <ref> in Appendix A. The encoder consists of three bottleneck blocks, where each block utilizes three layers of Conv2d, Instance Normalization, and GELU activation to extract spatial features. These blocks increase the number of channels by a factor of 2 and perform downsampling with a stride of 2. The decoder is composed of a series of upsampling blocks, where each block consists of a bilinear upsampling operation (Upsample) followed by a double convolution operation. Each convolution within the decoder is followed by Instance Normalization and a GELU activation function. The bottleneck consists of a single convolutional layer. In the time-dependent scenario, the time series data of the historical pumping rates is processed through two feed-forward neural network (FNN) layers prior to being concatenated to the input for the latent space representation (Figure <ref>).
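The encoder and decoder blocks described above can be sketched in PyTorch as follows; kernel sizes and the position of the strided (downsampling) convolution within each block are assumptions, as they are not specified in the text.

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    # three Conv2d + InstanceNorm2d + GELU layers; doubles channels, downsamples by 2
    def __init__(self, c_in, c_out):
        super().__init__()
        layers = []
        for i in range(3):
            layers += [
                nn.Conv2d(c_in if i == 0 else c_out, c_out, kernel_size=3,
                          stride=2 if i == 0 else 1, padding=1),  # assumed: stride-2 in first conv
                nn.InstanceNorm2d(c_out),
                nn.GELU(),
            ]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class DecoderBlock(nn.Module):
    # bilinear upsampling, concatenation with the encoder skip, then a double convolution
    def __init__(self, c_in, c_skip, c_out):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(c_in + c_skip, c_out, kernel_size=3, padding=1),
            nn.InstanceNorm2d(c_out), nn.GELU(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
            nn.InstanceNorm2d(c_out), nn.GELU(),
        )

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))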
The second model, here called UNet+ViT, employs the Vision Transformer (ViT) <cit.>, in the latent space representation of the U-Net, as per implementation of TransUNet <cit.> and ViTO <cit.>. The input is tokenized into a sequence of flattened 2D patches, each of size 1×1. Positional information is retained by employing trainable convolutional projection to learn and add specific position embeddings to the patch embeddings. The structure of the Transformer includes L blocks, with each block comprising Multi-Head Attention (MSA) and FNN. This configuration involves the use of 2 blocks, each with 2 Multihead Self-Attentions, and a FNN composed of 128 neurons. For a more detailed visualization of the Vision Transformer, attention block, and multihead attention, please refer to Appendix A, Figure <ref>.
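A minimal sketch of such a ViT bottleneck (1×1 patch tokens, 2 attention blocks with 2 heads and a 128-neuron FNN) is given below; the convolutional positional projection is an assumption about the exact implementation, and the channel width must be divisible by the number of heads.

import torch
import torch.nn as nn

class ViTBottleneck(nn.Module):
    # tokenises the latent feature map into 1x1 patches and applies a small Transformer
    def __init__(self, channels, depth=2, heads=2, ffn_dim=128):
        super().__init__()
        # learned positional information via a trainable convolutional projection (assumed)
        self.pos = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                           dim_feedforward=ffn_dim,
                                           activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = (x + self.pos(x)).flatten(2).transpose(1, 2)    # (B, H*W, C): 1x1 patches
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)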
The Fourier neural operator (FNO) <cit.> model leverages the fast Fourier Transform to parameterize the integral kernel directly in the Fourier space. The implementation of FNO for the 2D Darcy Flow problem as presented in <cit.> is followed in this study. The total amount of parameters of FNO corresponds to 2.38 million, that is 15 times more than UNet+ViT (151k) and 17 times more than UNet (137k).
§ RESULTS
§.§ Forward problem with sparse observations
This section presents the prediction of the hydraulic head at sparse monitoring wells after a constant 10-day pumping period under two different training conditions. We employ distinct sampling strategies for both input and output data in our methodology. Our training data is sampled from a regular quadratic grid, while for testing we have explored other arrangements, such as radial and helical, to understand their potential impact on the prediction performance.
In the first scenario, training is conducted using sparse data, with a spacing of 20 grid points for the input hydraulic conductivity field and a spacing of 8 for the output hydraulic head. Testing is then carried out on sparse data points, following the radial and helical patterns delineated in subsection <ref>. The resulting root mean square error (RMSE) is found to be 5.2 × 10^-2, 3.5 × 10^-2 and 8.1 × 10^-2 for the vanilla U-Net, the UNet+ViT models and FNO respectively. These results underline the superior performance of the UNet+ViT model in handling sparse data, exhibiting a lower RMSE compared to both the vanilla U-Net and the FNO models.
In contrast, when training is performed using the entire field and testing on the same sparse dataset, the error increases to 3.9 × 10^-1 for FNO, 3.8 × 10^-1 for the U-Net and 3.6 × 10^-1 for the UNet+ViT model. This outcome is anticipated, since the training set exhibits sparsity in the first scenario but not in the latter. Additionally, Figure <ref> displays the prediction over the entire domain, resulting in a lower RMSE of 1.0 × 10^-2 for FNO, 1.7 × 10^-2 and 1.9 × 10^-2 for the vanilla U-Net and UNet+ViT models, respectively. The FNO model, while superior when dealing with full data, exhibits the highest predictive error under sparse data observations. These results highlight the practical advantages of the U-Net and especially the UNet+ViT model in real-world scenarios for which data sparsity is common.
It should be noted that traditional simpler neural networks and other machine learning techniques may not provide adequate solutions for this specific problem. This assertion is backed by a comparison of the results from a fully connected neural network, a linear regression model and a random forest, detailed in Appendix <ref>. Despite the substantial number of trainable parameters, reaching 51.17 million, inherent to the fully connected neural network and the application of linear regression and random forest, these methods significantly underperform compared to the U-Net, the UNet+ViT models, and FNO.
§.§ Identification of pumping wells
In this section, we focus on an inverse problem: specifically the identification of pumping wells. This task requires determining the locations and rates of pumping wells based on the observed hydraulic heads. Throughout these experiments, we employ a single hydraulic conductivity field, which, while spatially varying, remains identical across all samples within the dataset.
In evaluating the performance of our models, we use both RMSE and accuracy. The RMSE calculates the average difference between the true and the predicted value for each pump location in the test dataset, giving a quantitative measure of the prediction error. Complementing this, the accuracy was determined by counting the proportion of correct pump predictions, where a prediction is considered correct if the predicted and actual pump locations align. This gives a sense of how often the model correctly identifies the location of pumps.
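The two evaluation measures can be computed as in the following sketch; the thresholding rule used to extract pump locations from a predicted field is an assumption.

import numpy as np

def rmse(pred, true):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(true)) ** 2)))

def pump_location_accuracy(pred_fields, true_fields, threshold=0.5):
    # a prediction counts as correct when the thresholded pump locations coincide exactly
    correct = 0
    for p, t in zip(pred_fields, true_fields):
        pred_loc = set(map(tuple, np.argwhere(p > threshold)))
        true_loc = set(map(tuple, np.argwhere(t > threshold)))
        correct += int(pred_loc == true_loc)
    return correct / len(true_fields)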
The U-Net model performs best, achieving an RMSE of 5.6 × 10^-2. Interestingly, the integration of the Vision Transformer with the U-Net model does not confer any additional precision in this scenario, yielding a similar RMSE of 6.1 × 10^-2. The FNO model exhibits a higher RMSE of 1.1 × 10^-1, indicating a somewhat lower accuracy in identifying the pumping well locations.
To visually illustrate these results, Figure <ref> presents a test sample using the U-Net + ViT model. It demonstrates an accuracy of 93% in locating the pumps, calculated across the entire test dataset. The figure visualizes the model's ability to accurately identify the positions and the pumping rate of the wells. In comparison, the FNO model achieved a notably lower detection accuracy of 79% in the same task.
§.§ Example results for time series data
This section unveils the results achieved from the analysis of time series data, starting with a simplified scenario, for which the inputs are the varying hydraulic conductivity field and the pumping rate of a single pump which varies over a 10-day simulation period. Results are evaluated in terms of root mean square error (RMSE) with a focus on the comparison of different configurations of the U-Net architecture with transformers.
Figure <ref> presents a comparison of results over 5 time frames for the U-Net with the Vision Transformer under autoregressive testing conditions.
The RMSE for each method was calculated to quantify the models' performance. The U-Net architecture alone yielded an RMSE of 1.79 × 10^-2. When supplemented with a Vision Transformer, consisting of 2 attention blocks and 2 heads, the performance improves, registering an RMSE of 1.67 × 10^-2. However, increasing the complexity of the Vision Transformer to 8 blocks and 8 heads did not further improve the performance, instead, it led to a slight degradation in the RMSE (1.77 × 10^-2). Adding an Axial Transformer <cit.> to the U-Net architecture also did not enhance the performance, yielding an RMSE of 1.83 × 10^-2.
These results suggest that while adding a Vision Transformer to the U-Net architecture leads to performance improvement, increasing the complexity of the latent space does not necessarily do so.
§ CONCLUSION
This paper explores and evaluates the capabilities of different machine learning models, with a particular focus on U-Net, U-Net integrated with Vision Transformers (ViT), and Fourier Neural Operator (FNO), in the context of predicting hydraulic head in groundwater studies.
Our analysis and testing, conducted on synthetic datasets designed to simulate the conditions from the Overbetuwe region in the Netherlands and including scenarios with sparse data, firmly establish that both U-Net and U-Net + ViT models are particularly adept at dealing with such tasks. Importantly, these models are also preferred due to their fewer requisite parameters.
Specifically, in the case of sparse observation scenarios, the vanilla U-Net and the U-Net + ViT models outperformed the FNO model. In particular, the performance of the UNet+ViT model was superior when handling sparse data, highlighting the potential of the model in real-world applications, where data scarcity is a common issue. The U-Net model demonstrated optimal performance in identifying pumping wells. Interestingly, the integration of the Vision Transformer with the U-Net model did not confer any additional accuracy in this scenario. As for the analysis of time series data, supplementing the U-Net architecture with a Vision Transformer improved the model performance, recording an RMSE of 1.67 × 10^-2 compared to the 1.79 × 10^-2 of the vanilla U-Net. However, increasing the complexity of the Vision Transformer did not further enhance the model performance, indicating that a more complex architecture does not necessarily yield better results.
Future research will involve applying this validated methodology to real-world data, beginning with the Overbetuwe region in the Netherlands. This will offer an opportunity to further validate and refine the model, accounting for the sparsity and uncertainties inherent in real-world data.
§ BROADER IMPACT
The implications of this research span a wide range of potential societal impacts, with a primary focus on improving the efficiency and reliability of groundwater level forecasting. Given that groundwater is a crucial resource for approximately 2.5 billion people worldwide, fulfilling their daily water needs, and a significant source of global irrigation water, the importance of reliable forecasts cannot be overstated. Our work, through enhancing the performance of groundwater numerical models, offers an opportunity to revolutionize the management and distribution of this vital resource. By providing more accurate and data-efficient predictions, we can aid in the formulation of informed and sustainable water management strategies. This is particularly crucial considering the pressing challenges of population growth and climate change.
§ ACKNOWLEDGEMENTS
This work was carried out with support of the Leeds-York-Hull Natural Environment Research Council (NERC) Doctoral Training Partnership (DTP) Panorama under grant NE/S007458/1.
Our sincere appreciation is extended to Professor Karniadakis of Brown University. The financial assistance provided by the Leeds Institute of Fluid Dynamics and Deltares, which made possible the research visit to Brown University, is also gratefully acknowledged. Lastly, we would like to express our gratitude to the reviewers. Their critiques and suggestions have greatly enhanced the overall clarity of our work.
brakenhoff Brakenhoff, D. A., Vonk, M. A., Collenteur, R. A., Van Baar, M., & Bakker, M. (2022). Application of Time Series Analysis to Estimate Drawdown From Multiple Well Fields. Frontiers in Earth Science, 10.
modflow Hughes, J. D., Russcher, M. J., Langevin, C. D., Morway, E. D., & McDonald, R. R. (2022). The MODFLOW Application Programming Interface for simulation control and software interoperability. Environmental Modelling & Software, 148.
gupta2022multispatiotemporalscale Gupta, J. K., & Brandstetter, J. (2022). Towards Multi-spatiotemporal-scale Generalized PDE Modeling. arXiv preprint arXiv:2209.15616.
li2020fourier Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895.
vito Ovadia, O., Kahana, A., Stinis, P., Turkel, E., & Karniadakis, G. E. (2023). ViTO: Vision Transformer-Operator. arXiv preprint arXiv:2303.08891.
transunet Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., ... & Zhou, Y. (2021). TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv preprint arXiv:2102.04306.
WEN2022104180 Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO—An enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163.
DINO_loket DINO loket. (2023). Retrieved from https://www.dinoloket.nl/en/subsurface-data
li2023transformer Li, Z., Meidani, K., & Farimani, A. B. (2023). Transformer for Partial Differential Equations' Operator Learning. arXiv preprint arXiv:2205.13671.
cao2021choose Cao, S. (2021). Choose a Transformer: Fourier or Galerkin. arXiv preprint arXiv:2105.14995.
dosovitskiy2021image Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929.
ronneberger2015unet Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv preprint arXiv:1505.04597.
wen2022ufno Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO – An enhanced Fourier neural operator-based deep-learning model for multiphase flow. arXiv preprint arXiv:2109.03697.
francestudy Mellouli, N., Rabah, M. L., & Farah, I. R. (2022). Transformers-based time series forecasting for piezometric level prediction. In 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS).
Wunsch_comparison Wunsch, A., Liesch, T., & Broda, S. (2021). Groundwater level forecasting with artificial neural networks: a comparison of long short-term memory (LSTM), convolutional neural networks (CNNs), and non-linear autoregressive networks with exogenous input (NARX). Hydrology and Earth System Sciences, 25(3), 1671-1687.
jiang2023fouriermionet Jiang, Z., Zhu, M., Li, D., Li, Q., Yuan, Y. O., & Lu, L. (2023). Fourier-MIONet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration. arXiv preprint arXiv:2303.04778.
ho2019axial Ho, J., Kalchbrenner, N., Weissenborn, D., & Salimans, T. (2019). Axial Attention in Multidimensional Transformers. arXiv preprint arXiv:1912.12180.
seidman2022nomad Seidman, J. H., Kissas, G., Perdikaris, P., & Pappas, G. J. (2022). NOMAD: Nonlinear Manifold Decoders for Operator Learning. arXiv preprint arXiv:2206.03551.
deeponet Lu, L., Jin, P., Pang, G., Zhang, Z., & Karniadakis, G. E. (2021). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3), 218-229.
vaswani2017attention Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention Is All You Need. arXiv preprint arXiv:1706.03762.
§ APPENDIX A
This appendix provides detailed diagrams of the model structures.
§ APPENDIX B
This appendix sets out to examine whether simpler machine learning models, specifically a fully connected neural network, a linear regression model, and a Random Forest model, can achieve the same level of accuracy as more advanced models like the U-Net, the UNet+ViT models, and FNO in predicting groundwater levels.
The particular Random Forest model tested here used 30 estimators. The fully connected neural network, employed for this comparison, comprises three hidden layers, each containing 1000 nodes and using ReLU activation functions. The model holds an impressive count of 51.17 million trainable parameters.
Unfortunately, none of the models was able to predict accurately the groundwater levels neither capturing the location of the wells. Specifically, the fully connected neural network and the linear regression model yielded high RMSEs of 1.17 × 10^-1 and 1.24 × 10^-1, respectively. The Random Forest model fared slightly better, achieving a lower RMSE of 1.02 × 10^-1, but it still fell short of the U-Net, the UNet+ViT models, and FNO.
Figure <ref> visually contrasts the predictions of these simpler models gainst the ground truth. Their significant underperformance becomes evident when compared to more sophisticated models. For a comparison of these results with accurate outcomes produced by the UNet+ViT model, the reader is directed to Figure <ref>.
|
http://arxiv.org/abs/2307.04909v1 | 20230710212643 | Planar Curve Registration using Bayesian Inversion | [
"Andreas Bock",
"Colin J. Cotter",
"Robert C. Kirby"
] | cs.CV | [
"cs.CV",
"cs.NA",
"math.NA"
] |
Andreas Bock (corresponding author), [email protected], Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 324, Kongens Lyngby, 2800, Denmark
Colin J. Cotter, [email protected], Department of Mathematics, Imperial College London, 180 Queen's Gate, South Kensington, London, SW72RH, United Kingdom
Robert C. Kirby, [email protected], Department of Mathematics, Baylor University, 1410 S.4th Street, Sid Richardson Science Building, Waco, 76706, Texas, United States of America
We study parameterisation-independent closed planar curve matching
as a Bayesian inverse problem. The motion of the curve is modelled
via a curve on the diffeomorphism group acting on the ambient space,
leading to a large deformation diffeomorphic metric mapping
(LDDMM) functional penalising the kinetic energy of the deformation.
We solve Hamilton's equations for the curve matching problem using
the Wu-Xu element [S. Wu, J. Xu, Nonconforming finite element spaces
for 2m^th order partial differential equations on ℝ^n simplicial
grids when m = n + 1, Mathematics of Computation 88 (316) (2019)
531–551] which provides
mesh-independent Lipschitz constants for the forward motion of the
curve, and solve the inverse problem for the momentum using Bayesian
inversion. Since this element is not affine-equivalent
we provide a pullback theory which expedites the implementation
and efficiency of the forward map. We adopt ensemble Kalman inversion using
a negative Sobolev norm mismatch penalty to measure the discrepancy
between the target and the ensemble mean shape. We provide
several numerical examples to validate the approach.
Closed curve matching Nonconforming finite element method Bayesian inverse problem
87.57.N
65M60
65P10
65M32
§ INTRODUCTION
Closed curve matching is a central problem in shape analysis where the goal is
to bring into alignment two closed curves in ℝ^2 called the template
and the target <cit.>. For unparameterised curves,
the shape space for these objects is
Q = ∖ <cit.>.
This quotient space disassociates the curve from arbitrary reparameterisation
since they do not affect the range of the curves in question.
This gives rise to studying the commuting left and right actions of two Lie groups,
G=Diff_+(ℝ^2) and H=Diff_+(S^1) as in <cit.>:
GQ = Emb(S^1, G.ℝ^2), HQ = Emb(H.S^1,ℝ^2).
In the context of developing algorithms for planar curve matching, these group
actions must be explicitly discretised. In this paper we equip our shape space with the
so-called outer metric inherited by G which acts on the ambient space.
This is in contrast to inner metrics intrinsically defined on the embedded shape
<cit.>, see <cit.> for a comparison. To treat
the parameterisation, one can parameterise elements of H using its Lie
algebra and exploit its vector space structure. In this paper we consider a mismatch
penalty that eliminates the need to treat H explicitly. Instead we note that two
closed curves c_1 and c_2 are similar when the difference between the indicator
function 1 evaluated on their interiors is small. For some linear differential
operator 𝒞 we therefore define the mismatch, or misfit, between
them as:
𝔈(c_1, c_2) = ‖ 1_c_1 - 1_c_2‖_𝒞^2,
where ‖ f ‖_𝒞^2 = ⟨𝒞^-1f,𝒞^-1f⟩_L^2
over some computational domain described later. For the outer metric we take the LDDMM
approach <cit.> and consider a one-parameter family of velocities
t↦ u_t encoding the motion of the ambient space (and therefore the
shape) which simultaneously provides a distance measure.
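As a concrete illustration of the mismatch 𝔈 above, the sketch below evaluates a negative-Sobolev-type discrepancy between two rasterised indicator functions on a periodic grid, using the Fourier symbol of (I - αΔ)^{-1} as a stand-in for 𝒞^{-1}; the choice of operator, grid and quadrature is ours and is not the discretisation used in the paper.

import numpy as np

def mismatch(ind1, ind2, alpha=1.0):
    # ||1_{c1} - 1_{c2}||_C^2 with C^{-1} approximated by (I - alpha*Laplacian)^{-1}
    n = ind1.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    symbol = 1.0 / (1.0 + alpha * (kx**2 + ky**2))   # Fourier symbol of C^{-1}
    z_hat = np.fft.fft2(ind1 - ind2) * symbol
    z = np.fft.ifft2(z_hat).real                     # z = C^{-1}(1_{c1} - 1_{c2})
    return float(np.mean(z * z))                     # grid approximation of <z, z>_{L^2}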
We discretise the velocity field using finite elements, specifically the Wu-Xu element
<cit.>. This element provides a nonconforming discretisation for
sixth order operators; sixth order is necessary for the diffeomorphism to be sufficiently
smooth for the computations that we undertake. The implementation of this element in
Firedrake <cit.> is made possible by applying the theory of
<cit.> and techniques for code generation in <cit.>. Given certain assumptions on the structure of our
problem we can identify this entire family of velocities with a single
initial momentum defined as a function over the template. We eliminate its
evolution equation by using the analytical solution, and restrict the initial
conditions to only generate geodesics in the space of unparameterised curves.
This results in a forward map, taking as input the momentum and
providing the diffeomorphism whose action maps the template to
the target curve. After obtaining a finite element discretisation
of this map we apply massively parallel and derivative-free ensemble Kalman
inversion which we use to invert the forward map for the initial momentum
determining the geodesic motion of the curve.
§.§ Previous work
Diffeomorphic registration has enjoyed a rich literature since the
seminal works <cit.>.
For curves specifically, <cit.>
present the first algorithms for modelling curve matching via gradient
descent methods. <cit.> represents curves as measures onto
which a Hilbert structure is endowed, and computations of both the outer
metric and the curves are done via radial reproducing kernels producing
C^∞ velocities. In particular, curves were represented as geometric
currents.
<cit.> studies such a varifold-based loss function
for elastic metrics, see also <cit.>
for numerical frameworks for H^2 metrics. <cit.>
contains a review of methods related to elastic curves.
In this paper we are concerned with higher-order metrics using
finite elements. While there is typically a loss of regularity incurred
by these methods, they offer more computationally efficient methods than
e.g. kernel methods. Finite elements also benefit from spatial
adaptivity allowing for local refinement e.g. close to embedded curves.
Closest to our approach in terms of discretisation are
<cit.> where a
particle-mesh method is employed for curve matching where the
curve was discretised into a finite set of particles, acted on by an
outer metric. However, we consider instead an outer metric finite element
discretisation (as opposed to the intrinsic metric in
<cit.>). <cit.> presents an adaptive
Eulerian FEM discretisation of the velocity field for LDDMM using C^1
cubic Hermite elements and compares the deformations generated using
C^∞ fields to assess the effect of the loss of regularity.
Smooth mesh deformations are also of interest in shape optimisation
where the aim is to transform a mesh such that some functional is minimised.
Finite element methods are also adopted here, with deformation fields being
discretised using B-splines <cit.>, harmonic polynomials
or Lagrange finite elements depending on the desired resolution or order
<cit.>. Using the finite element space introduced
in <cit.> we can guarantee that the Lipschitz
norm remains bounded under mesh refinement without resorting to spline
or kernel discretisations. As mentioned, we use Firedrake
<cit.> for all our numerical experiments, see
also <cit.> for an extension of this package for
shape optimisation.
Our formulation eliminates the need to integrate the momentum equation
via its analytical solution thereby improving on the typically larger cost
of Hamiltonian shooting based methods <cit.>
compared to an LDDMM formulation <cit.>. We
only need to solve an elliptic equation to obtain the velocity
and use a simple variational Euler scheme to
evolve the diffeomorphism. Traditional approaches in numerical shape
analysis often apply a shooting procedures to determine the initial
momentum transporting the image or landmarks to the desiderata, see
e.g. <cit.>. Bayesian approaches
have been employed before in the context of shape analysis, see e.g.
<cit.> where function space Markov Chain Monte Carlo
is used to characterise the posterior density of momenta generating a
given shape. Similar to our approach is <cit.>
in which ensemble Kalman inversion
<cit.> is applied to
recover the momentum for landmark matching.
§.§ Organisation
Section <ref> contains an introduction to
diffeomorphic curve matching and the associated Hamiltonian systems.
We also discuss the application of the finite element approach
using the Wu-Xu element from <cit.> and the discretisation
of the velocity equation. Section <ref> contains
the transformation theory for the Wu-Xu element, and Section
<ref> contains details of the discretisation of the
Hamiltonian equations. Next, Section <ref> discusses the
Bayesian inverse problem, and Section <ref> contains
numerical results. Section <ref> contains a summary.
§ DIFFEOMORPHIC REGISTRATION
Let Ω be a
connected convex subset of ℝ^d, d=2, with polygonal boundary ∂Ω.
We study maps q∈ Q=H^1(S^1,ℝ^2) from a template curve Γ_0 to
a target curve Γ_1 whose motion is restricted by the differential
equation:
q̇_t = u_t∘ q_t ,
where u_t, t ∈ [0,1] is a family of time-dependent vector fields on
Ω with some prescribed spatial smoothness. A geodesic path between
two such parameterised curves Γ_0 and Γ_1 is defined as a path
minimising the associated kinetic energy in u:
(1/2)∫_0^1 ‖ u_t‖^2 dt,
where ‖·‖ dominates the Lipschitz norm.
In fact, since u_t is supported on Ω it generates a curve of diffeomorphisms
<cit.> of the entire ambient space via:
φ̇_t = u_t∘φ_t, φ_0 = ,
whose motion restricted to the curve q_0 ∘ S^1 equals q_t ∘ S^1
at time t ∈ [0,1]. As the kinetic energy measures distances between two
curves via a velocity defined over the entire domain Ω, we refer
to this associated distance measure as an outer metric on the shape
space.
§.§ Hamiltonian system
Here we take a Hamiltonian approach <cit.> and introduce
the momentum p_t∈ T^*Q occupying the linear cotangent space, which we assume has enough
regularity so that it has a Fréchet-Riesz representer in L^2(S^1) (also
denoted p_t, with some abuse of notation). We extremise the following the
functional:
S = ∫_0^1 ( (1/2)‖ u_t‖^2 + ⟨ p_t, q̇_t- u_t∘ q_t⟩ ) dt,
where ⟨ h, g ⟩= ∫_S^1 h· g. Taking variations i.e.
δ S = 0 leads to Hamilton's equations for curve matching for
t∈ [0,1]:
∫_0^1⟨δ p, q̇_t - u_t∘ q_t⟩ dt = 0,
∀δ p∈ L^2(S^1),
∫_0^1⟨ṗ_t - ∇ u_t∘ q_t p_t, δ q⟩ dt = 0,
∀δ q ∈ Q,
δ‖ u_t‖^2/δ u - ⟨ p_t, δ u∘ q_t⟩ = 0.
where δ p, δ u and δ q are space-time test functions.
The following theorem shows that we can solve (<ref>)
analytically:
The solution p_t to (<ref>) is at all times t≥ 0 given by p_t = ∇φ_t∘ q_0 p_0.
See <ref>.
To generate parameterisation-independent geodesics as in
<cit.> we replace the initial condition q_0 by
q_0∘η, where η∈Diff_+(S^1) in the case of planar curves
is an arbitrary reparameterisation. As a result of this quotient representation
of curves modulo Diff_+(S^1) we minimise over all η leading to the
horizontality condition on the momentum. This means that the
momentum p_0 has no tangential component and can therefore
be described by a one-dimensional signal, p̃_0: S^1↦ℝ:
p_0 = 𝐧_q_0p̃_0
where 𝐧_q_0:S^1→ℝ^2 is the outward normal
of the template. Thus, along with Theorem <ref> we have the
following characterisation,
p_t = ∇φ_t∘ q_0 𝐧_q_0p̃_0.
This generates trajectories of geodesics between unparameterised
curves. The entire geodesic motion of the curve can therefore be determined
by a one-dimensional signal along the initial curve q_0.
To summarise this section we are concerned with integration of the
following reduced Hamiltonian system for t∈ [0,1]:
δ‖ u_t‖^2/δ u = ⟨∇φ_t∘ q_0 𝐧_q_0p̃_0, δ u∘ q_t⟩,
q̇_t = u_t∘ q_t,
with q_0 and p̃_0 fixed and boundary conditions u_t|_∂Ω=0
for all t∈ [0,1]. Next we discuss a discretisation of (<ref>).
§.§ Outer metric via finite elements
From Picard–Lindelöf analysis it is clear that the Banach space
ordinary differential equation (ODE) (<ref>) requires
a pointwise Lipschitz condition on u_t. As such, u_t must occupy at least
W^1,∞ when q_0 ∈ L^∞(S^1), see <cit.> (see also Corollary 7 in this reference for other host
spaces). Dupuis <cit.> establishes
sufficient conditions accomplishing the same in a Hilbertian setting. The
Hilbertian setting is better suited to finite element methods.
This is in contrast with W^1,∞, which is only a Banach space and, to the
best of the authors' ability, is not easy to approximate
numerically[<cit.> approximates W^1,∞ fields by means of a fixed
point linearisation of the nonlinear ∞-harmonic equation
<cit.>.]. We therefore require a norm ‖·‖
such that a solution to (<ref>) ensures that
this condition is met, which in turn implies global existence and uniqueness of
(<ref>) by the references above. For d=2, 3,
H_0^3(Ω) is contained in
𝖢^1(Ω̅) and so is Lipschitz on the interior <cit.>. As such, we want to describe a discretisation of
(<ref>) ensuring a type of H^3 regularity, as the
following theorem shows.
Let O be a convex bounded Lipschitz domain in ℝ^d with polygonal
boundary and O_h a shape-regular, quasi-uniform triangulation thereof
<cit.> for some mesh size h>0. Suppose further that
u is continuous on O̅, u|_K ∈ H^3(K)^d for K∈ O_h and that there exists an operator B
inducing the norm ‖ u‖_B^2 = ∑_K∈ O_h‖ u‖_B(K)^2, where we define
‖ u‖_B(K)^2 = ∫_K Bu· u dx such that ‖ u‖_H^3(K)^d≲‖ u‖_B(K). Then u∈ W^1,∞(O)^d.
The embedding theorem for homogeneous Sobolev spaces (i.e. with zero traces)
into the space 𝖢^j(O̅) are well-known. However, since the trace
γ_K u of u on ∂ K, K∈ O_h may not be zero. By <cit.>, H^3(K) ↪𝖢_B^1(K), where:
𝖢_B^1(K) = { u ∈𝖢^1(K) | D^ u is
bounded on K, ||≤ 1}.
This means any H^3(K) function has a continuous representative with almost
everywhere bounded first derivatives on K. Since
u∈𝖢^0(O̅), u is a continuous function with its first
derivative a.e. bounded, implying a Lipschitz condition. To summarise:
‖ u ‖_W^1,∞(K)^d^2 ≲ ‖ u ‖_H^3(K)^d^2 ≲ ‖ u ‖_B(K)^2
Summing over the elements K∈ O_h:
‖ u ‖_W^1,∞(O)^d^2 ≲ ‖ u ‖_B^2,
where we have used that u is a continuous function with essentially bounded
gradient.
In light of this theorem we approximate the space of velocity fields by a
nonconforming finite element space (see e.g. <cit.>).
This way we can guarantee the necessary Lipschitz properties of our functions
without having to impose higher-order global continuity of the finite-dimensional solution spaces.
In Section <ref> we use the H^3-nonconforming finite element
space presented in <cit.> in a discretisation
of (<ref>). We choose the operator B=( - αΔ)^2m
for a given positive constant α leading to the following bilinear form:
a_Ω(u, v) = ∑_i=1^d∫_Ω∑_j=0^m α^j mj D^j u^i·
D^j v^i x = ∫_Ω Bu· v x,
where x· y is the Euclidean inner product, D^0 = id, and
D^j =∇ D^j-1 j is odd,
∇· D^j-1 j is even.
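As a concrete illustration, the per-component form above can be transcribed in a UFL-style syntax as follows. This is only a hedged sketch: the `grad`, `div`, `inner` and `dx` callables are assumed to be supplied by the finite element library (e.g. Firedrake), and the function is not the implementation used for the experiments below.

```python
from math import comb

def a_form(u, v, alpha, m, grad, div, inner, dx):
    """Schematic UFL-style form sum_{j=0}^m alpha^j C(m, j) <D^j u, D^j v> dx,
    with D^0 = id, D^j = grad(D^{j-1}) for odd j and div(D^{j-1}) for even j."""
    form = inner(u, v) * dx
    Du, Dv = u, v
    for j in range(1, m + 1):
        Du = grad(Du) if j % 2 == 1 else div(Du)
        Dv = grad(Dv) if j % 2 == 1 else div(Dv)
        form = form + (alpha ** j) * comb(m, j) * inner(Du, Dv) * dx
    return form
```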
§ A PULLBACK THEORY FOR THE WU-XU ELEMENT
The Wu-Xu element provides an opportunity to tackle this problem in a (nonconforming) H^3 setting, but it presents challenges for implementation. Although we can construct its basis on a reference element, say, using the FIAT package <cit.>, the Wu-Xu elements do not form an affine equivalent family <cit.> under pullback.
Consequently, we apply the theory developed in <cit.>, which gives a generalization of techniques developed for the C^1 conforming Argyris element <cit.>.
To fix ideas, fix a reference triangle K̂ with vertices {v̂_i }_i=1^3.
For any nondegenerate triangle K with vertices {v_i }_i=1^3, we
let F: K →K̂ denote the affine mapping sending
each v_i to the corresponding v̂_i and J its
Jacobian matrix.
We adopt the ordering convention used in <cit.>, where edge
e_i of any triangle connects the vertices other than
i. We take the unit tangent t_i = [ t_i^x t_i^y ]^T to point from the vertex of lower number to the
higher one. The normal to edge i is defined by counterclockwise
rotation of the tangent, so that n_i = R t_i, where
R = [ [ 0 1; -1 0 ]].
The normals,
tangents, and edge midpoints for the reference element K
will include hats: _i, _i, and
_i. The pull-back of any function f defined
on K is given by
F^*(f) = f∘ F,
and the push-forward of functionals n acting on functions defined over
K is
F_*(n) = n ∘ F^*,
so that
F_*(n)(f) = (n ∘ F^*)(f) = n(f∘ F)
Finite element implementation requires local shape functions {ψ_i^K }_i=1^N that are restrictions of the global basis to cell K.
These are taken dual to a set of nodes or degrees of freedom { n_i^K }_i=1^N in the sense that
n_i^K(ψ_j^K) = δ_ij.
In practice, one typically computes the basis {ψ̂_i }_i=1^N dual to some nodes {n̂_i}_i=1^N over the reference element K̂. For affine equivalent families (like the Lagrange basis), the physical basis functions are the pullbacks of reference element shape functions, so that
ψ_i^K = F^*(ψ_i).
Equivalently, the nodes are preserved under push-forward, with
F_*(n^K_i) = n̂_i.
We may express these relations in a kind of vector-notation.
If Ψ̂ is a vector whose entries are ψ̂_i, then in the affine equivalent case, F^*(Ψ̂) contains the basis on cell K, and also F_*(𝒩) = 𝒩̂.
For non-equivalent families, these relations fail, but we can hope to construct a matrix M such that
Ψ = M F^*(Ψ̂)
contains the correct vector of basis functions on K.
The matrix M will depend on the particular geometry of each cell, but if it is sparse this amounts to a considerable savings over directly constructing the basis on each triangle.
Our theory in <cit.> proceeds by transforming the actions of the functionals on the finite element space.
The finite element functionals are defined on some infinite-dimensional space (e.g. twice-continuously differentiable functions), and we let π denote the restriction of functionals to the finite-element space and π̂ the corresponding restriction on the reference element. Then, we look for a matrix V such that
V F_*(π𝒩) = π𝒩,
and can prove <cit.> that
M = V^T.
For any triangle K and integer k≥ 0, we let 𝒫^k(K) denote the
space of polynomials of degree no greater than k over K. Letting
λ_i be the barycentric coordinates for K (equivalently, the
Lagrange basis for 𝒫^1(K)), we let
b_K = λ_1 λ_2 λ_3 be the standard cubic bubble
function over K. We also need notation for the linear functionals defining
degrees of freedom. We let δ_v denote pointwise evaluation of some
(continuous) function at a point v:
δ_v(p) = p(v).
We let δ^s_v denote the derivative in some
direction s at a point v:
δ^s_v(p) = s^T ∇ p(v)
Repeated superscripts will indicate higher derivatives.
We use block notation for gradients and sets of
second-order derivatives, such as
∇_v = [ δ_v^x δ_v^y ]^T
for the gradient in Cartesian coordinates at a point v, and
△_v =
[ δ_v^xx δ_v^xy δ_v^yy ]^T
for the unique components of the Hessian matrix. We will use
superscripts in the block notation to indicate the derivatives taken
in other directions than the Cartesian ones, such as
∇^nt containing the derivatives with respect to a
normal vector n and tangent vector t for some part of the
boundary. Similarly, △^nt will contain the
second partials in each direction and the mixed partial in both directions.
The Wu-Xu elements also utilise integral moments of normal derivatives, and we shall also need averages of tangential and mixed derivatives over edges to perform the transformations. Given any directional vector s, we define the moment of the derivative in the direction s over edge e by:
μ^s_e(f) = ∫_e s·∇ f ds,
Similarly, we let μ^s_1s_2_e denote the functionals computing moments of second (possibly mixed) directional derivatives over an edge. Now, we define the pair of H^3 nonconforming triangles considered in <cit.>.
Note that there are two spaces given: a space compatible with sixth-order problems,
and a robust space that is stable for second, fourth and sixth-order problems.
We define the function space W(K) over some triangle K by
W(K) = 𝒫^3 + b_K 𝒫^1,
and the function space for the robust element will be
W_r(K) = 𝒫^3 + b_K 𝒫^1 + b_K^2 𝒫^1,
where 𝒫^k is the standard space of polynomials of degree k.
Note that we have dim W(K) = 12 and dim W_r(K) = 15 since b_K ∈𝒫^3 ∩ b_K 𝒫^1.
The degrees of freedom for the two elements are quite similar.
We can parametrise W(K) by
𝒩 =
[ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 μ^_1_1__1 μ^_2_2__2 μ^_3_3__3 ]^T.
That is, the degrees of freedom consist of point values and gradients at each vertex, together with moments of the second normal derivative along edges. For the robust element, we also use the moments of the first normal derivatives, so that
𝒩_r =
[ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 μ^_1__1 μ^_2__2 μ^_3__3 μ^_1_1__1 μ^_2_2__2 μ^_3_3__3 ]^T.
Wu and Xu actually define the degrees of freedom as averages of these moments over the relevant facets, although this does not affect unisolvence or other essential properties.
For the reference element, it will be helpful to use their original definition.
For some edge ê of K̂, define
μ̂^s_ê(f) = (1/|ê|)∫_ê s·∇̂ f dŝ,
and similarly define moments of second directional derivatives over reference element edges.
The reference element nodes for W(K̂) will be taken as
𝒩 =
[ δ__1 ∇̂^T__1 δ__2 ∇̂^T__2 δ__3 ∇̂^T__3 μ̂^_1_1__1 μ̂^_2_2__2 μ̂^_3_3__3 ]^T,
and for W_r(K̂) we will use
𝒩 =
[ δ__1 ∇̂^T__1 δ__2 ∇̂^T__2 δ__3 ∇̂^T__3 μ̂^_1__1 μ̂^_2__2 μ̂^_3__3 μ̂^_1_1__1 μ̂^_2_2__2 μ̂^_3_3__3 ]^T
Note that this redefinition has no effect in the case of an equilateral reference triangle with unit edge length.
For the more common case of a right isosceles reference triangle, however, this will eliminate the need for logic indicating to which reference element edges the edges of each triangle correspond.
The derivative degrees of freedom in both Wu-Xu elements are not preserved under push-forward, and since we have only normal derivatives on the edges, we cannot immediately obtain the correct nodes by taking linear combinations.
Consequently, we must develop a compatible nodal completion <cit.>.
For the Wu-Xu elements, this contains all the original degrees of freedom plus the integrals of tangential and mixed normal/tangential derivatives.
Such a completion is shown for the standard Wu-Xu element in Figure <ref>. A completion for the robust element includes the first normal moments and tangential moments as well, as shown in Figure <ref>.
We define
ℳ_1,i =
[ μ^n_i_e_i μ^t_i_e_i ]^T
to be the vector of the moments of the normal and tangential
derivatives on a particular edge.
We also let ℳ̂_1, i contain the corresponding reference element nodes.
We only need ℳ_1,i and ℳ̂_1, i for the robust element. Both elements require
ℳ_2,i =
[ μ^n_in_i_e_i μ^n_it_i_e_i μ^t_it_i_e_i ]^T
containing the unique second derivative moments on each edge.
We similarly define ℳ̂_2, i to contain the reference element integral averages.
The compatible nodal completion for (K, W(K), 𝒩) is
𝒩^C =
[ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 ℳ_2,1^T ℳ_2,2^T ℳ_2,3^T ]^T,
with the hatted equivalents comprising 𝒩̂^C on the reference cell.
The completed set of nodes for the robust element is
𝒩_r^C =
[ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 ℳ_1,1^T ℳ_1,2^T ℳ_1,3^T ℳ_2,1^T ℳ_2,2^T ℳ_2,3^T ]^T,
Now, the matrix V from (<ref>) will be obtained in factored form
V = E V^c D,
where each matrix plays a particular role. D is a rectangular matrix expressing the completed nodes in terms of the given physical nodes. V^c is a block diagonal matrix relating the push-forward of the reference nodal completion to the physical nodal completion, and E is a Boolean matrix selecting actual finite element nodes from the completion. For the Wu-Xu element, D is 18 × 12, V^c is 18 × 18, and E is 12 × 18. For the robust element, D is 24 × 15, V^c is 24 × 24, and E is 15 × 24.
Now, we define the matrix D, which expresses the members of 𝒩^C as linear combinations of the members of 𝒩.
Clearly, the rows corresponding to members of 𝒩^C also appearing in 𝒩
will just have a single nonzero in the appropriate column.
For the Wu-Xu element, the remaining nodes are all integrals of quantities over
edges, and we can use the Fundamental Theorem of Calculus to perform this task.
Let e be an edge running from vertex v_a to v_b with unit tangent and normal t and n,
respectively. We have
μ^t_e(f) = ∫_e t^T ∇ f ds
= f(v_b)-f(v_a) = δ_v_b(f) - δ_v_a(f).
In a similar way, the moments of the second tangential and mixed
derivatives on e can be expressed as differences between
components of the gradients at endpoints by:
μ^tt_e(f) = t^T (∇_v_b f -
∇_v_a f),
μ^tn_e(f) = n^T (∇_v_b f -
∇_v_a f),
and we have that 𝒩^C = D 𝒩, or
[ δ__1; δ^__1; δ^__1; δ__2; δ^__2; δ^__2; δ__3; δ^__3; δ^__3; μ^_1_1__1; μ^_1_1__1; μ^_1_1__1; μ^_2_2__2; μ^_2_2__2; μ^_2_2__2; μ^_3_3__3; μ^_3_3__3; μ^_3_3__3 ]
=
[ 1 0 0 0 0 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0 0 0 0; 0 0 0 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 0 0 1 0 0; 0 0 0 0 -n_1,x -n_1,y 0 n_1,x n_1,y 0 0 0; 0 0 0 0 -t_1,x -t_1,y 0 t_1,x t_1,y 0 0 0; 0 0 0 0 0 0 0 0 0 0 1 0; 0 -n_2,x -n_2,y 0 0 0 0 n_2,x n_2,y 0 0 0; 0 -t_2,x -t_2,y 0 0 0 0 t_2,x t_2,y 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 1; 0 -n_3,x -n_3,y 0 n_3,x n_3,y 0 0 0 0 0 0; 0 -t_3,x -t_3,y 0 t_3,x t_3,y 0 0 0 0 0 0; ][ δ__1; δ^__1; δ^__1; δ__2; δ^__2; δ^__2; δ__3; δ^__3; δ^__3; μ^_1_1__1; μ^_2_2__2; μ^_3_3__3; ].
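Since D depends only on the edge normals and tangents, it can be assembled directly; the following NumPy sketch is purely illustrative, and the node orderings are assumed to match the display above.

```python
import numpy as np

def build_D(normals, tangents):
    """normals, tangents: (3, 2) arrays with the unit normal/tangent of edges
    e_1, e_2, e_3 (edge i joins the two vertices other than i, oriented from
    the lower- to the higher-numbered vertex).  Columns follow the ordering
    of N, rows the ordering of N^C in the display above."""
    D = np.zeros((18, 12))
    D[:10, :10] = np.eye(10)        # vertex values/gradients and mu^nn on e_1
    D[12, 10] = 1.0                 # mu^nn on e_2
    D[15, 11] = 1.0                 # mu^nn on e_3
    edges = {0: (1, 2), 1: (0, 2), 2: (0, 1)}        # 0-based vertex pairs (a, b)
    rows = {0: (10, 11), 1: (13, 14), 2: (16, 17)}   # (mu^nt, mu^tt) rows per edge
    for i, (a, b) in edges.items():
        for row, s in zip(rows[i], (normals[i], tangents[i])):
            # fundamental theorem of calculus: s . (grad f(v_b) - grad f(v_a))
            D[row, 3 * a + 1:3 * a + 3] = -s
            D[row, 3 * b + 1:3 * b + 3] = s
    return D
```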
The matrix V^C is obtained by relating the push-forwards of the nodal completion to their reference counterparts.
We can convert between the Cartesian and other orthogonal coordinate
systems (e.g. normal/tangential) representations as follows. Given a
pair of orthogonal unit vectors n and t, we can define an
orthogonal matrix G by:
G = [ n t ]^T.
In particular, we will use G_i to contain the normal and tangential
vectors to edge i of triangle K and Ĝ_i those for
triangle K̂. The multivariate chain rule readily shows that
∇_x = G^T ∇^nt_x.
Similarly, letting n = [ n_x n_y ]^T and
t = [ t_x t_y ]^T, we define the
matrix Γ by
Γ = [ n_x^2 2 n_x t_x t_x^2; n_x n_y n_x t_y + n_y t_x t_x t_y; n_y^2 2 n_y t_y t_y^2 ],
and the chain rule gives
△_x = Γ△_x^nt.
Although G is an orthogonal matrix, Γ is not.
A similar calculation also gives:
△^nt_x = Γ^-1△_x,
where
Γ^-1 = [ n_x^2 2 n_x n_y n_y^2; n_x t_x n_x t_y + n_y t_x n_y t_y; t_x^2 2 t_x t_y t_y^2 ].
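As a quick numerical sanity check of the two displays above, one can build Γ from an arbitrary orthonormal pair (n, t) and verify that the stated closed form of Γ^{-1} is indeed its inverse; the snippet below is only a verification aid, not part of the method.

```python
import numpy as np

def gamma(n, t):
    nx, ny = n; tx, ty = t
    return np.array([[nx*nx, 2*nx*tx, tx*tx],
                     [nx*ny, nx*ty + ny*tx, tx*ty],
                     [ny*ny, 2*ny*ty, ty*ty]])

def gamma_inv(n, t):
    nx, ny = n; tx, ty = t
    return np.array([[nx*nx, 2*nx*ny, ny*ny],
                     [nx*tx, nx*ty + ny*tx, ny*ty],
                     [tx*tx, 2*tx*ty, ty*ty]])

angle = 0.7                     # any angle gives an orthonormal pair (n, t)
n = np.array([np.cos(angle), np.sin(angle)])
t = np.array([-np.sin(angle), np.cos(angle)])
assert np.allclose(gamma(n, t) @ gamma_inv(n, t), np.eye(3))
```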
We will also need to transform derivatives under pull-back. Using the
chain rule,
∇ (ψ̂∘ F) = J^T ∇̂ψ̂∘ F.
Combining this with (<ref>) lets us relate the normal and
tangential derivatives in physical space to the normal and tangential
derivatives in reference space.
∇^nt_x = G J^T Ĝ^T ∇̂^n̂t̂_x̂.
We can perform a similar calculation for second derivatives. With
the entries of the Jacobian matrix as:
J = [ ∂ x/∂x̂ ∂ x/∂ŷ; ∂ y/∂x̂ ∂ y/∂ŷ ],
we define the matrix
Θ
= [ ( ∂x̂/∂ x)^2 2 (∂x̂/∂ x)(∂ŷ/∂ x) ( ∂ŷ/∂ x)^2; (∂x̂/∂ y)(∂x̂/∂ x) (∂x̂/∂ y)(∂ŷ/∂ x)
+ (∂x̂/∂ x)(∂ŷ/∂ y) (∂ŷ/∂ x)(∂ŷ/∂ y); (∂x̂/∂ y)^2 2 (∂x̂/∂ y)(∂ŷ/∂ y) ( ∂ŷ/∂ y)^2 ],
so that, for x and x̂ related by F,
△_x = Θ△̂_x̂.
The inverse of Θ follows by reversing
the roles of reference and physical variables:
Θ^-1
= [ ( ∂ x/∂x̂)^2 2 (∂ x/∂x̂)(∂ y/∂x̂) ( ∂ y/∂x̂)^2; (∂ x/∂ŷ)(∂ x/∂x̂) (∂ x/∂ŷ)(∂ y/∂x̂)
+ (∂ x/∂x̂)(∂ y/∂ŷ) (∂ y/∂x̂)(∂ y/∂ŷ); (∂ x/∂ŷ)^2 2 (∂ x/∂ŷ)(∂ y/∂ŷ) ( ∂ y/∂ŷ)^2 ]
We can also relate the second-order derivatives in normal/tangential
coordinates under pullback by
△^_ = ΓΘΓ̂^-1△_^.
From here, we will let G_i and Ĝ_i denote the matrices
containing normal and tangent vectors to edge _i of a generic
triangle T and the reference triangle T̂, respectively, with
similar convention for the other geometric quantities
Γ and Θ. For any vector s,
edge e, and smooth function f = f̂ ∘ F, we have
∫_e s^T ∇ f ds =
∫_e s^T J^T (∇̂f̂ ∘ F) ds
= ∫_ê s^T J^T ∇̂f̂ J_e,ê dŝ,
where the Jacobian J_e,ê is just the ratio of the
length of e to that of the corresponding reference element edge
ê. Applying this to the normal and tangential moments and
using (<ref>), we have that:
ℳ_1,i
= |e_i| G_i J^T Ĝ_i^-1 ℳ̂_1, i,
where the factor of |ê_i| in the denominator of the
Jacobian is merged with the reference element moments to produce
ℳ̂_1,i. Hence, the slight modification of
reference element nodes avoids extra data structures or logic in
identifying reference element edge numbers. Then, we can use (<ref>)
to express each ℳ_2, i in terms of the reference element nodes
ℳ_2,i = |e_i| Γ_i ΘΓ̂_i^-1 ℳ̂_2,i.
We define vectors
B^1,i = (1/|e_i|) Ĝ_i J^-T G_i^T, B^2,i = (1/|e_i|) Γ̂_i Θ^-1Γ_i^-1,
and hence V^C is the block-diagonal matrix
V^C = [ 1 ; J^-T ; 1 ; J^-T ; 1 ; J^-T ; B^2,1 ; B^2,2 ; B^2,3 ],
with zeros of the appropriate shapes in the off-diagonal blocks.
The extraction matrix E is just the 12 × 18 Boolean matrix selecting the members of N from N^C:
E =
[ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0; ]
Multiplying EV^CD out and defining
β^i,x = n^i_x B^2,i_12 + t^i_x B^2,i_13,
β^i,y = n^i_y B^2,i_12 + t^i_y B^2,i_13,
we obtain for V
V =
[ 1 0 0 0 0 0 0 0 0 0 0 0; 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0 0 0 0; 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0; 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0; 0 0 0 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0; 0 0 0 0 -β^1,x -β^1,y 0 β^1, x β^1,y B^2,1_11 0 0; 0 -β^2, x -β^2,y 0 0 0 0 β^2,x β^2,y 0 B^2,2_11 0; 0 -β^3,x -β^3,y 0 β^3,x β^3,y 0 0 0 0 0 B^2,3_11; ].
The same considerations lead to a similar derivation of E, V^c, and D for the robust element, resulting in
V = [ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0 0 0 0 0 0 0; 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0 0 0 0; 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0; 0 0 0 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0; 0 0 0 -B^1,1_12 0 0 B^1,1_12 0 0 B^1,1_11 0 0 0 0 0; -B^1,2_12 0 0 0 0 0 B^1,2_12 0 0 0 B^1,2_11 0 0 0 0; -B^1,3_12 0 0 B^1,3_12 0 0 0 0 0 0 0 B^1,3_11 0 0 0; 0 0 0 0 -β^1,x -β^1,y 0 β^1,x β^1,y 0 0 0 B^2,1_11 0 0; 0 -β^2,x -β^2,y 0 0 0 0 β^2,x β^2,y 0 0 0 0 B^2,2_11 0; 0 -β^3,x -β^3,y 0 β^3,x β^3,y 0 0 0 0 0 0 0 0 B^2,3_11; ]
for V, where β is as defined in (<ref>).
§ DISCRETISATION
We now describe the discretisations of the Hamiltonian system (<ref>)
using a function space introduced in the previous section.
Smooth solutions to (<ref>) generate the following curve of
diffeomorphisms:
φ̇_t = u_t ∘φ_t, φ_0=,
where the domain of φ_t is Ω_0. This subsumes the left action on the curve q_0 in
(<ref>). Our approach is therefore to solve
(<ref>) for an outer metric in tandem
with integrating the diffeomorphism defined over the entire domain
and moving the mesh, thereby automatically providing a solution to a
discrete analogue of (<ref>). We let
𝒯_0 denote a shape-regular, quasi-uniform triangulation
of the template domain Ω_0, and consider its mesh
skeleton together with the subset of the skeleton
whose elements do not intersect ∂Ω_0. We place the following
assumption on the initial triangulation 𝒯_0:
𝒯_0 is constructed such that the range of q_0 is described by a subset of the interior mesh skeleton.
Using the definition in (<ref>) we define the vector-valued
Wu-Xu space defined over Ω_0:
V(Ω_0) = { v ∈ L^2(Ω_0)^2 | v_i|_K ∈ W(K), K ∈𝒯_0, i=1,2}.
Further, let 0=t_0 < t_1 < … < t_T-1=1 denote T uniformly distributed
points and use an Euler discretisation of the time derivate in (<ref>),
where we let φ_t_k∈ V(Ω_0), u_t_k∈ V(Ω_0):
φ_t_k+1 = φ_t_k + u_t_k∘φ_t_kΔ T.
For sufficiently small Δ T, φ_t_k is a diffeomorphism
of Ω <cit.>. Using the notation Ω_t_k =
φ_t_k∘Ω_0, V(Ω_t_k) := {f | f ∘φ_t_k∈ V(Ω_0)} and by noting that q_t_k = φ_t_k∘ q_0 we obtain
a discrete analogue of (<ref>) where û_t_k∈ V(Ω_t_k):
a_Ω_t_k(û_t_k, v̂) = ∫_S^1∇φ_t_k∘ q_0
𝐧_q_0p̃_0 ·v̂∘ q_0 s, ∀v̂∈ V(Ω_t_k),
φ_t_k+1 = φ_t_k + û_t_kΔ T,
for k=0,…, T-1, where φ_t_k∘∂Ω_0 = owing to
the homogeneous Dirichlet boundary conditions implied by (<ref>).
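A stripped-down sketch of this time loop reads as follows; the `solve_velocity` callable is an assumed placeholder for the finite element solve of the velocity equation (including the transported momentum on its right-hand side), and the mesh motion is elided.

```python
import numpy as np

def integrate(q0, p0, solve_velocity, T=15):
    """Explicit Euler integration of the discrete system above.
    q0: (M, 2) polygonal curve points, p0: (M,) scalar momentum signal,
    solve_velocity(q, p, q0): assumed placeholder for the elliptic solve,
    returning a callable u with u(points) -> (M, 2) velocities."""
    dT = 1.0 / T
    q = q0.copy()
    trajectory = [q.copy()]
    for k in range(T):
        u = solve_velocity(q, p0, q0)   # velocity field at time t_k
        q = q + dT * u(q)               # move the curve (and, in practice, the mesh)
        trajectory.append(q.copy())
    return trajectory
```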
At each time step k after the solution of (<ref>),
the mesh is moved according to (<ref>) upon which the equation (<ref>):
q_t_k+1 = q_t_k + u_t_k∘ q_t_kΔ T,
is automatically satisfied. The underlying coordinate field of the mesh itself
is chosen to be a Lagrange subspace of V(Ω_0), so that the map
q_0 ↦φ_t_k∘ q_0 is a diffeomorphism. At k=0, the assembly of the right-hand
side is in practice done by integration over q_0∘ S^1, which means that we can supply an
initial “momentum” signal 𝐩_0 = p̃_0∘ q_0∈ L^2(q_0∘ S^1) (now
defined over the initial curve) to encode the entire geodesic flow of φ, and thereby of the
embedded curve. Figure <ref> shows examples of forward integration of this system for
various 𝐩_0 and q_0∘ S^1 (the initial meshes were generated using
<cit.>). Note that the norm of the velocity present in
(<ref>) is confined to certain energy levels determined
by the initial momentum as the system is integrated. In the fully discrete
analogue we can only hope to establish approximate conservation of the Hamiltonian.
The importance of this is unclear since we only integrate over fixed time intervals,
and it is the subject of future work.
The computational cost of integrating (<ref>) is dominated
by the inversion of the discrete bilinear form. Mesh-based methods readily facilitate
parallel computations (e.g. matrix-vector products in a Krylov subspace method), which
along with preconditioning strategies are competitive with fast multipole methods. They
also offer flexibility in choosing bilinear form (which can be altered according to an
informed modelling choice or application). Finally, mesh adaptivity is also an
option. For the application at hand a graded mesh with a fine resolution in
the vicinity of the curve and coarser elements closer to the boundary can both
increase accuracy and reduce the computational burden of the method.
§ INVERSE PROBLEM
We now consider the matching problem using the data misfit functional in (<ref>).
We wish to estimate the momentum 𝐩_0 := p̃_0∘ q_0
that generates the curve t↦ q_t. That is, 𝐩_0 is the momentum object
defined on the computational domain q_0 ∘ S^1. We drop explicit dependence on the template
as it remains fixed during computation, as well as the time dimension of the initial
momentum. To ease the notation we use boldface to represent the smoothed version of the
indicator function on the interior of a curve q, i.e.:
𝐪 = 𝒞1_q.
We define the forward operator:
𝐩↦ℱ(𝐩) = 𝐐 := 𝒞1_q_1,
where q_1 is the solution at t=1 given by solving (<ref>)
using q_0 and 𝐩 as initial conditions, i.e. the time-1 flow map of the initial
curve. Given a target shape 𝐐^† the inverse problem of interest is therefore to
recover the momentum 𝐩^* such that:
𝐐^†≈ℱ(𝐩^*) + ξ,
where the noise ξ∼𝒩(0, 𝒞) is a Gaussian measure with mean
zero and covariance operator 𝒞 defined later on.
To tackle this inverse problem, we use an Ensemble Kalman iteration. We let N denote the number
of ensemble members and let 𝐩^j, j=1,…,N denote the momenta corresponding
to ensemble member j. The ensemble mean momentum and the mean predicted shape are:
𝐩̄ := (1/N)∑_j=1^N 𝐩^j, 𝐐̄ := (1/N)∑_j=1^N 𝐐^j,
where 𝐐^j = ℱ(𝐩^j) is the smoothed shape predicted by member j. The Kalman update operator is defined by:
𝒦 = C_𝐩Q [C_QQ + ξ I]^-1,
where ξ is a regularisation parameter described later and I is the identity operator.
The actions above are given by:
C_QQ[·] = 1/(N - 1)∑_j=1^N (𝐐^j - 𝐐̄) ⟨𝐐^j - 𝐐̄, ·⟩_L^2,
C_𝐩Q[·] = 1/(N - 1)∑_j=1^N (𝐩^j - 𝐩̄) ⟨𝐐^j - 𝐐̄, ·⟩_L^2.
The data misfit at iteration k of the EKI is defined as:
𝔈^k = ‖𝐐̄^k - 𝐐^†‖_L^2(Ω)^2.
The prediction and analysis steps are summarised below; a NumPy sketch of one iteration follows the list.
* Prediction: For each ensemble member j, compute 𝐐^j = ℱ(𝐩^j) and the average 𝐐̄ using (<ref>).
* Analysis: Update each ensemble momentum:
𝐩^j ← 𝐩^j + 𝒦( 𝐐^† - 𝐐^j).
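In finite dimensions, with the ensemble momenta stacked row-wise and the smoothed shapes sampled on a fixed grid, one EKI iteration can be sketched as below; using the unperturbed target for every member is an assumption of this sketch.

```python
import numpy as np

def eki_step(P, Q, target, xi=1e-3):
    """One prediction/analysis update.  P: (N, d) ensemble of momenta,
    Q: (N, m) corresponding smoothed predicted shapes, target: (m,) smoothed
    target indicator, xi: regularisation parameter."""
    N = P.shape[0]
    dP = P - P.mean(axis=0)
    dQ = Q - Q.mean(axis=0)
    C_QQ = dQ.T @ dQ / (N - 1)          # shape covariance
    C_PQ = dP.T @ dQ / (N - 1)          # momentum-shape cross-covariance
    K = C_PQ @ np.linalg.inv(C_QQ + xi * np.eye(len(C_QQ)))   # Kalman gain
    return P + (target - Q) @ K.T       # updated momenta, (N, d)
```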
§ NUMERICAL EXPERIMENTS
We present numerical experiments showing that the ensemble Kalman
inversion (EKI) algorithm is able to find suitable approximations of a
target given a random initial ensemble. Section <ref>
describes how we generate the synthetic targets that we will use as
matching targets. Section <ref> summarises
the parameters that we have chosen in our experiments to match
the synthetic data, and section <ref> contains the
numerical results.
§.§ Synthetic data
For simplicity we fix the template curve throughout our experiments
and choose the unit circle. The initial mesh is that shown in the
top left vignette of Figure <ref>. The computation domain Ω_0
is a triangulation of [-10,10]^2 with mesh resolution[This is
the maximum diameter h_K of any triangle K in the triangulation.]
h=1. We have taken α=0.5 in (<ref>), T=15 time steps and
have used piecewise constant finite elements on the mesh skeleton to represent
𝐩_0 (although we compute only with functions supported over
the submanifold q_0 ∘ S^1).
operator described previously to generate synthetic targets for
this set of parameters. Applying the forward operator ℱ to the
momenta in (<ref>) below we produce the targets
seen in Figure <ref>.
𝐩^†_contract = -1.38π,
𝐩^†_squeeze =
0.83π e^-y^2/5 x< -0.3
5/3πsin(x / 5)|y| otherwise,
𝐩^†_star = 2.6πcos(2π x/5),
𝐩^†_teardrop =
-3π sign(y) y<0
3π e^-x^2/5 otherwise.
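For reference, a direct (and purely illustrative) transcription of these momenta as point-wise functions of the template coordinates (x, y) on the unit circle could read:

```python
import numpy as np

def p_dagger(x, y, which):
    """Transcription of the synthetic momenta above (illustrative only)."""
    if which == "contract":
        return -1.38 * np.pi
    if which == "squeeze":
        if x < -0.3:
            return 0.83 * np.pi * np.exp(-y**2 / 5)
        return (5.0 / 3.0) * np.pi * np.sin(x / 5) * abs(y)
    if which == "star":
        return 2.6 * np.pi * np.cos(2 * np.pi * x / 5)
    if which == "teardrop":
        if y < 0:
            return -3 * np.pi * np.sign(y)
        return 3 * np.pi * np.exp(-x**2 / 5)
    raise ValueError(which)
```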
With 𝐩^† we associate the following relative error at each iterate k:
ℛ^k = ‖𝐩̄^k - 𝐩^†‖_L^2(q_0∘ S^1) / ‖𝐩^†‖_L^2(q_0∘ S^1).
The consensus deviation 𝒮^k of an ensemble at iteration
k in equation (<ref>) is defined below:
𝒮^k = (1/N)∑_j=1^N ‖𝐩^j,k - 𝐩̄^k‖_L^2(q_0∘ S^1),
where 𝐩^j,k denotes the momentum of ensemble member j at
iteration k. This quantity is a useful diagnostic which measures the
information remaining in the ensemble after iteration k. Since EKI
relies on estimates of the forecast covariance, consensus is reached when
𝒮^k approaches zero, at which point the algorithm can be
stopped.
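In the discrete setting both diagnostics reduce to a few lines of NumPy; here Euclidean norms of the momentum coefficient vectors stand in for the L^2 norms above.

```python
import numpy as np

def diagnostics(P, p_true):
    """P: (N, d) ensemble of momentum coefficients at iteration k,
    p_true: (d,) coefficients of the true momentum.  Returns (R^k, S^k)."""
    p_mean = P.mean(axis=0)
    rel_error = np.linalg.norm(p_mean - p_true) / np.linalg.norm(p_true)
    consensus = np.mean(np.linalg.norm(P - p_mean, axis=1))
    return rel_error, consensus
```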
In all our simulations we invert the system in (<ref>)
using the direct solver MUMPS <cit.>; investigating a preconditioned
iterative solver is subject to future work. For details on the validation of the
implementation of the Wu-Xu element in Firedrake, see <cit.>
and the Zenodo entry <cit.>.
§.§ Experimental setup
We now describe the setup we have used to test the EKI.
Firstly, we have taken T=10 and α=1
so the parameters differ from those used to generate the synthetic
targets. Recall that EKI requires an initial ensemble, in this
case of momenta. The basis coefficients of the momenta were sampled
from a uniform distribution over the interval [-25,25], with different
realisations for each ensemble member. The parameter ξ in
(<ref>) determines the ratio between the influence
of the prediction covariance on the Kalman gain. We set ξ=10^-3
in (<ref>), although adaptive tuning of this
parameter to avoid overfitting is possible; an early termination rule
is suggested in <cit.>.
We choose 𝒞 = (I - κΔ), κ = 10
in (<ref>), as this smoothes out the mismatch sufficiently
for our computational domain.
The quality of the matching is directly related to the size and
variance of the ensemble as the solution is sought as a linear
combination of its members. We conduct experiments
for three ensemble sizes, N=20, N=40 and N=80. These
were chosen since, with the parameters set as above, the dimension of the discretised momentum is 48, in order to develop an understanding of how EKI performs when
the ensemble size is smaller than, similar to and larger than the dimension
of the state, while still keeping the overall computational cost such
that the experiments can be done in a reasonable amount of time.
The case where N is much smaller than the state dimension is the de facto situation
for ensemble methods as the MC approximation allows for a computationally
feasible method. In general, small ensemble sizes can lead to filter
inbreeding (the forecast covariance is underestimated), filter divergence (the gain
does not adequately correct the ensemble), or spurious correlations
<cit.>. We comment on each of these later.
§.§ Results
We have run EKI 10 times for each value of N with different draws
for the initial ensemble to assess the robustness of the algorithm
with respect to the starting point. Figure <ref> shows examples of the numerical
results we obtain for curve matching using EKI. Note that only five
iterations of EKI were necessary to produce the results shown
in this section to reach a relative tolerance below 3%. Qualitatively
a larger ensemble size leads to a better match, which is to be expected.
Ensemble methods such as the EKI offer a practical advantage to gradient
methods given their inherent parallelisability. Indeed, the prediction
step discussed in Section <ref> can be done in parallel
for each ensemble member. We therefore start N processes corresponding to
each member, and used a Message Passing Interface
(MPI) <cit.> implementation to exchange information between
them (the MPI reduce operation, specifically). Thanks to this
parallelisation, five iterations of EKI takes less than two minutes for
N=20, five minutes for N=40 and 14 minutes for N=80 on a 2021 MacBook
Pro[Apple M1 Pro chip, 16 GB of memory.].
Figure <ref>, <ref> and
<ref> show the relative errors, data misfits and
consensus deviations for our experiments across the selected
targets and ensemble sizes. These all decrease at various rates in the early
iterations after which they stagnate. As the Kalman gain corrects the
ensemble members, and therefore the motion of their respective curves,
the data misfit decreases meaning that each member improves its prediction.
This increases consensus in the ensemble, which explains what is
seen in figure <ref>. We notice from the data misfits and
the momentum consensus that higher values of N provides a more accurate
approximation of the true momentum, which explains the accuracy of the
matches seen in figure <ref>. Note that the relative error, which
is a surrogate for posterior consistency, also decreases (albeit not with
a clear pattern across shapes and ensemble sizes). Since the forward operator
in use is heavily nonlinear, theoretical convergence results are not readily
obtained at this stage. We highlight that the Kalman gain is very efficient
in correcting the ensemble momenta, with the consensus decreasing exponentially
in the early stages of the algorithm. We comment on this later. The conclusion
is therefore that even at a modest ensemble size, EKI performs well. It is
not certain that the same behaviour that we see above (i.e. few iterations are
needed) will scale with N and the size of the problem, but the results are
promising for research in this direction.
Higher values of ξ were found to slow the convergence of the algorithm compared to the
selected value, which is consistent with the behaviour for landmark-based EKI
<cit.>, and we do not comment on it further. We noticed that
the value of κ also influences the convergence of the EKI; for small
values the operator - κΔ approaches the identity, and since
the mismatch X is computed from point evaluations of the finite element
function, information can be lost if the grid is not sufficiently refined.
A larger value of κ “spreads out” the mismatch which improves convergence
for coarser grids.
§ SUMMARY AND OUTLOOK
In this paper we have presented a parameterisation- and derivative-free
method for matching closed planar curves. A moving mesh discretisation
of Hamilton's equations for curves was described
using the induced diffeomorphism of a vector field occupying
the Wu-Xu finite element space. We also describe a
transformation theory for this element facilitating a computationally
performant forward model for use in the associated inverse problem.
Finding the momentum encoding the
forward motion of the template matches a desired curve was treated
as a Bayesian inverse problem in section <ref> and
EKI was used to approximate its solution. The numerical results presented in
section <ref> suggests that the method shows great
promise. Not only is it easy to implement, the EKI is shown to
quickly reach ensemble consensus meaning that it is efficient in exploring
the subspace spanned by the initial ensemble. This is in part
thanks to the momentum being a one-dimensional signal on the template.
Treating the mismatch term in a negative Sobolev norm was shown to increase
both accuracy of our results and robustness to mesh resolution. We also
showed that the method is
robust to the choice of initial ensemble even when the ensemble size is
less than half the dimension of the forward problem. Further, the method remains practical
as the mesh is refined, assuming the forward operator is scalable (large-scale PDE
solves are common in many areas of scientific computing), since the inverse
needed in the Kalman gain scales cubically only in N <cit.>.
Future work includes proving convergence of the finite element
discretisation for (<ref>) and
subsequently using these error estimates to quantify error in a rigorous
treatment of the Bayesian inverse problem
<cit.>. As indicated in <cit.>,
some challenges exist for nonconforming finite element methods with
singular source terms.
The template considered in this paper is a piece-wise linear curve. An
obvious extension would be to apply isoparametric methods to cater for
piece-wise higher-order polynomial curves. The effect of this would only
affect the right-hand side and would not affect regularity results for
the velocity. An advantage of the finite element method for curves is also
that it allows for adaptivity e.g. refinement of the mesh only
in the vicinity of the embedded template.
We considered problems of modest size to illustrate the discretisation
and the EKI. As the mesh is refined, it is likely the case that the
dimension of the forward operator dwarfs the size of the ensemble and effects
of the MC approximation are more pronounced. This is the case for ensemble methods
for e.g. numerical weather prediction and several techniques exist to counter
these effects <cit.> (e.g. localisation or covariance
inflation). In particular, localisation methods may be suitable to assume conditional
independence between separated states (i.e. parts of the shape that are distant
in physical space) so as to counter spurious correlations.
§ PROOF OF THEOREM <REF>
The momentum satisfies
ṗ_t + ∇ u_t∘ q_t p_t = 0
Using the ansatz we verify:
ṗ_t + ∇ u_t∘ q_t p_t = J̇_̇ṫ p_0 + ∇ u_t
J_t p_0
= - J_t d(J_t) J_t p_0 + ∇ u_t∘ q_t J_t p_0
= - J_t (d J_t) J_t p_0 + ∇ u_t∘ q_t J_t p_0
= - J_t (∇ u_t∘ q_t J_t) J_t p_0 + ∇ u_t∘ q_t J_t p_0
= - J_t J_t∇ u_t∘ q_t J_t p_0 + ∇ u_t∘ q_t J_t p_0
= - ∇ u_t J_t p_0 + ∇ u_t∘ q_t J_t p_0
= 0 .
elsarticle-num
|
http://arxiv.org/abs/2307.04186v1 | 20230709142509 | Absolute Concentration Robustness and Multistationarity in Reaction Networks: Conditions for Coexistence | [
"Nidhi Kaihnsa",
"Tung Nguyen",
"Anne Shiu"
] | math.DS | [
"math.DS",
"math.AG",
"q-bio.MN",
"92E20, 37N25, 26C10, 34A34, 34C08"
] |
Absolute concentration robustness and multistationarity in reaction networks: Conditions for coexistence
Nidhi Kaihnsa, Tung Nguyen, and Anne Shiu
=============================================
Many reaction networks arising in applications are multistationary, that is, they have the capacity for more than one steady state; while some networks exhibit absolute concentration robustness (ACR), which means that some species concentration is the same at all steady states.
Both multistationarity and ACR are significant in biological settings,
but only recently has attention focused on the possibility for these properties to coexist.
Our main result states that such coexistence
in at-most-bimolecular networks (which encompass most networks arising in biology)
requires at least 3 species, 5 complexes, and 3 reactions.
We prove additional bounds on the number of reactions
for general networks based on the number of linear conservation laws.
Finally, we prove that, outside of a few exceptional cases, ACR is equivalent to non-multistationarity for bimolecular networks that are small (more precisely, one-dimensional or up to two species). Our proofs involve analyses of systems of sparse polynomials,
and we also use classical results from chemical reaction network theory.
0.1in
Keywords: Multistationarity, absolute concentration robustness, reaction networks, sparse polynomials.
2020 MSC:
92E20,
37N25,
26C10,
34A34,
34C08
§ INTRODUCTION
A mass-action kinetics system exhibits absolute concentration robustness (ACR) if the steady-state value of at least one species is robust to fluctuations in initial concentrations of all species <cit.>.
Another biologically significant property is
the
existence of multiple steady states, that is, multistationarity. Significantly, this property has been linked to cellular decision-making and switch-like responses <cit.>.
As both ACR and multistationarity are important properties, it is perhaps surprising that their relationship was explored only recently, when the present authors, together with Joshi, showed that
ACR and multistationarity together – or even ACR by itself – is highly atypical in randomly generated reaction networks. This result dovetails with the fact that the two properties are somewhat in opposition, as multiple steady states are not in general position in the presence of ACR.
The results of Joshi et al.
are asymptotic in nature
(as the number of species goes to infinity),
and they pertain to networks that are at-most-bimolecular (which is typical of networks arising in biology) and reversible (which is not) <cit.>.
This naturally leads to the following question:
For multistationarity and ACR to coexist, how many species, reactions, and complexes are needed?
Which networks (without the requirement of being reversible) of small to medium size allow such coexistence?
Another motivation for Question <ref> comes from synthetic biology. In order to design reaction networks with certain dynamical properties, we need to better understand the design principles that allow for such behaviors, as well as the constraints on the size (such as the minimum numbers of species, reaction, and complexes) of such networks.
Our work focuses on answering Question <ref>.
Broadly speaking, our results fall into two categories:
(i) results that give lower bounds on the dimension of a network or its number of species, reactions, or complexes;
and (ii) results for certain classes of networks (one-dimensional, up to 2 species, and so on).
Our primary focus is on at-most-bimolecular networks, but we also present results on general networks.
In the first category,
our results are summarized in the following theorem, which gives some minimum requirements for ACR and nondegenerate multistationarity to coexist. This coexistence is typically on a nonzero-measure subset of the parameter space of reaction rate constants.
Let G be an at-most-bimolecular reaction network with n species
for which there exists a vector of positive rate constants κ^*
such that the mass-action system (G,κ^*) has ACR and is nondegenerately multistationary. Then G has:
* at least 3 species (that is, n ≥ 3),
* at least 3 reactant complexes (and hence, at least 3 reactions) and at least 5 complexes (reactant and product complexes), and
* dimension at least 2.
If, additionally, G is full-dimensional (that is, G has no linear conservation laws), then G has:
* at least n+2 reactant complexes (and hence, at least n+2 reactions), and
* dimension at least 3.
For the proof of Theorem <ref>, we refer the reader to Section <ref> for part (3) (Lemma <ref>);
Section <ref>
for parts (1), (2), and (5) (Theorem <ref>); and
Section <ref> for part (4) (Theorem <ref>).
Additionally, many of the lower bounds in Theorem <ref> are tight.
Indeed, this is shown for parts (1)–(3) through the following network:
{A+B → 2C → 2B, C→ A } (Example <ref>).
As for part (4), this bound is proven for networks that need not be at-most-bimolecular, and its tightness is shown in that context (Proposition <ref>).
While Theorem <ref> concerns nondegenerate multistationarity, we also investigate the capacity for ACR together with degenerate multistationarity, specifically, in networks with 4 reactant complexes (Proposition <ref>).
Finally, we prove two additional results in the spirit of Theorem <ref>.
The first states that 3 is the minimum number of pairs of reversible reactions needed (in reversible networks)
for multistationarity, even without ACR (Theorem <ref>).
The second concerns networks that are not full-dimensional, and states the minimum number of reactant complexes needed for the coexistence of ACR and nondegenerate multistationarity
is n-k+1, where 1 ≤ k ≤ n-2 is the number of linearly independent conservation laws
(Theorem <ref>).
As for our second category of results, we start with one-dimensional networks,
a class of networks for which ACR <cit.>, multistationarity <cit.>, and even multistability <cit.> is well studied.
Such networks do not allow for the coexistence of ACR and nondegenerate multistationarity (Proposition <ref>). Moreover, one-dimensional bimolecular networks can only be multistationary if they are degenerately so (Lemma <ref>). Moreover, we explicitly characterize all such degenerate networks (Lemma <ref>).
Here our proofs make use of recent results of Lin, Tang, and Zhang <cit.>.
Another class of at-most-bimolecular networks we analyze are those with exactly 2 species (Section <ref>).
For such networks that are reversible, we characterize the property of
unconditional ACR, which means that ACR occurs for all possible values of rate constants
(Theorem <ref>).
As for networks that need not be reversible, we show that ACR and multistationarity can coexist, but only in a degenerate way.
Moreover, up to relabelling species, only two such networks allow such coexistence for a nonzero-measure subset of the space of reaction rate constants (Theorem <ref>).
Our works fits into a growing body of literature that explores the minimal conditions needed for various dynamical behaviors,
including the two properties that are the focus of the current work:
multistationarity <cit.>
and ACR <cit.>. There are additional such studies on
multistability <cit.>
and
Hopf bifurcations <cit.> (which generate periodic orbits). For instance, in analogy to Theorem <ref> above, the presence of Hopf bifurcations requires an at-most-bimolecular network to have at least 3 species, 4 reactions, and dimension 3 <cit.>.
This article is organized as follows: Section <ref> introduces reaction networks, multistationarity, and ACR. Section <ref> contains several results on steady states and their nondegeneracy. We use these results in Sections <ref> and <ref> to prove our main results.
We conclude with a discussion in Section <ref>.
§ BACKGROUND
This section recalls the basic setup and definitions involving reaction networks (Section <ref>),
the dynamical systems they generate (Section <ref>), absolute concentration robustness (Section <ref>), and a concept pertaining to networks with only 1 species: “arrow diagrams” (Section <ref>).
§.§ Reaction networks
A reaction network G is a directed graph in which the vertices are non-negative-integer linear combinations of species X_1, X_2, …, X_n. Each vertex is a complex, and we denote the complex at vertex i by y_i=∑_j=1^ny_ijX_j (where y_ij∈_≥ 0) or y_i=(y_i1, y_i2, …, y_in ).
Throughout, we assume that each species X_i, where i=1,2,…,n, appears in at least one complex.
Edges of a network G are reactions, and it is standard to represent a reaction (y_i, y_j) by y_i → y_j.
In such a reaction, y_i is the reactant complex, and y_j is the product complex.
A species X_k is a catalyst-only species in reaction y_i → y_j if y_ik = y_jk.
In examples, it is often convenient to write species as A,B,C,… (rather than X_1,X_2,X_3,…) and also to view a network as a set of reactions, where the sets of species and complexes are implied.
The reaction network { 0 ← A → 2A , B ← A+B} has 2 species, 5 complexes, and 3 reactions. The species B is a catalyst-only species in the reaction B ← A+B.
A reaction network is reversible if every edge of the graph is bidirected.
A reaction network is weakly reversible if every connected component of the graph is strongly connected.
Every reversible network is weakly reversible.
The following network is reversible:
{
A+B ⇆ 2A, 2B ⇆ A, 0 ⇆ B }.
One focus of our work is on at-most-bimolecular reaction networks (or, for short, bimolecular), which means that every complex y_i satisfies
y_i1+y_i2 + … + y_in≤ 2.
Equivalently, each complex has the form 0, X_i, X_i+X_j, or 2X_i (where X_i and X_j are species). The networks in Examples <ref>–<ref> are bimolecular.
§.§ Mass-action systems
Let r denote the number of reactions of G.
We write the i-th reaction as y_i → y_i'
and assign to it a positive
rate constant
κ_i∈_> 0.
The mass-action system arising from a network G and a vector of positive rate constants κ=(κ_1, κ_2, …, κ_r), which we denote by (G,κ), is the following dynamical system arising from mass-action kinetics:
dx/dt = ∑_i=1^r κ_i x^y_i (y_i'-y_i) =: f_κ(x) ,
where x^y_i := ∏_j=1^n x_j^y_ij.
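As a small illustration (not code from any of the works cited here), the right-hand side above can be evaluated directly from the reactant and product complexes:

```python
import numpy as np

def mass_action_rhs(x, kappa, reactants, products):
    """f_kappa(x) = sum_i kappa_i * x^{y_i} * (y_i' - y_i).
    reactants, products: (r, n) integer arrays whose i-th rows are y_i, y_i'."""
    x = np.asarray(x, dtype=float)
    rates = np.asarray(kappa) * np.prod(x ** reactants, axis=1)
    return rates @ (products - reactants)

# Example <ref>: {0 <- A -> 2A, A+B -> B} with species order (A, B)
rhs = mass_action_rhs([1.0, 2.0], [1.0, 2.0, 3.0],
                      np.array([[1, 0], [1, 0], [1, 1]]),
                      np.array([[0, 0], [2, 0], [0, 1]]))
print(rhs)   # [-5., 0.], matching x1*(-k1 + k2 - k3*x2) at (1, 2)
```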
Observe that the right-hand side of the ODEs (<ref>) consists of polynomials
f_κ,i,
for i=1,…,n. For simplicity, we often write f_i instead of f_κ,i.
The question of which polynomials f_i can appear as right-hand side of mass-action ODEs is answered in the following result <cit.>.
Let f:ℝ^n →ℝ^n be a polynomial function, that is, assume that f_i ∈ℝ[x_1, x_2 …, x_n] for i=1,2,…, n. Then f arises as the right-hand side of the differential equations (<ref>) (for some choice of network G and vector of positive rate constants κ) if and only if, for all i=1,2,…,n, every monomial in f_i with negative coefficient is divisible by x_i.
Next, observe that the mass-action ODEs (<ref>) are in the linear subspace of ℝ^n spanned by all reaction vectors y_i' - y_i (for i=1,2,…, r). We call this the stoichiometric subspace and denote it by S.
The dimension of a network is the dimension of its stoichiometric subspace.
(This dimension is sometimes called the “rank” <cit.>.)
In particular, if dim(S)=n (that is,
S= ℝ^n), we say that G is full-dimensional.
A trajectory x(t) of (<ref>) with initial condition x(0) = x^0 ∈_>0^n remains, for all positive time, in the following stoichiometric compatibility class of G <cit.>:
P_x(0) := (x(0)+S)∩_≥ 0^n .
For full-dimensional networks, there is a unique stoichiometric compatibility class: P=_≥ 0^n. For networks that are not full-dimensional, every nonzero vector w in S^⊥ yields a (linear) conservation law ⟨ w,x ⟩ = ⟨ w, x(0) ⟩ that is satisfied by every x ∈ P_x(0), where ⟨ -,- ⟩ denotes the usual inner product on ^n.
[<ref>]
The network { 0 κ_1← A κ_2→ 2A , B κ_3← A+B}
has a one-dimensional stoichiometric subspace (spanned by (1,0)) and generates the following mass-action ODEs (<ref>):
dx_1/dt = -κ_1x_1+κ_2x_1-κ_3x_1 x_2=x_1(-κ_1+κ_2-κ_3 x_2)
dx_2/dt = 0.
Observe that the negative monomials in the first ODE are -κ_1x_1 and -κ_3x_1 x_2, and each of these is divisible by x_1, which is consistent with Lemma <ref>.
Next, the stoichiometric compatibility classes (<ref>) are rays of the following form (where T>0):
{ (x_1,x_2) ∈ℝ^2_≥ 0| x_2 = T} .
The equation x_2 = T is the unique (up to scaling) conservation law.
A steady state of a mass-action system is a non-negative vector x^*∈_≥ 0^n at which the right-hand side of the ODEs (<ref>) vanishes: f_κ(x^*)=0.
Our main interest in this work is in positive steady states x^*∈_> 0^n.
The set of all positive steady states of a mass-action system
can have positive dimension in ^n,
but this set typically intersects each stoichiometric compatibility class in finitely many points <cit.>.
Finally, a steady state x^* is nondegenerate if im(df_κ(x^*)|_S)=S, where df_κ(x^*) is the Jacobian matrix of f_κ evaluated at x^*.
We consider multiple steady states at two levels: systems and networks.
A mass-action system (G,κ) is multistationary (respectively, nondegenerately multistationary)
if there exists a stoichiometric compatibility class having more than one positive steady state (respectively, nondegenerate positive steady state).
A reaction network G is multistationary if there exists a vector of positive rate constants κ such that (G,κ) is multistationary.
For a reaction network G, we let cap_pos(G)
(respectively, cap_nondeg(G))
denote the maximum possible number of positive steady states (respectively, nondegenerate positive steady states) in a stoichiometric compatibility class.
[<ref>]
We return to the network G={ 0 κ_1← A κ_2→ 2A , B κ_3← A+B}
and its ODEs (<ref>). A direct computation reveals that when κ_1 ≥κ_2, there is no positive steady state.
On the other hand, when κ_2 > κ_1, the steady states form exactly one stoichiometric compatibility class (<ref>) – namely, the one given by T = (κ_2 - κ_1)/κ_3 – and all such steady states are degenerate. Hence, G is multistationary but not nondegenerately multistationary.
[<ref>]
The following (full-dimensional) reaction network and indicated rate constants yield a mass-action system with 3 nondegenerate positive steady states <cit.>:
{
A+B [1/4]1/32⇆ 2A, 2B [1/4]1⇆ A, 0 [1]1⇆ B } .
Therefore, this network is nondegenerately multistationary.
§.§ Deficiency and absolute concentration robustness
The deficiency of a reaction network G is
δ = m - ℓ- dim(S), where m is the number of vertices (or complexes), ℓ is the number of connected components of G (also called linkage classes), and S is the stoichiometric subspace.
The deficiency is always non-negative <cit.>,
and it plays a central role in many classical results on the dynamical properties of mass-action systems <cit.>.
Two such results are stated below. These results, which are due to Feinberg and Horn <cit.>, are stated for weakly reversible networks (the setting in which we use these results later).
Deficiency-zero networks are not multistationary. Moreover, if
G is a weakly reversible network with deficiency zero, then for every vector of positive rate constants κ,
the mass-action system (G,κ) admits a unique positive steady state in every stoichiometric compatibility class.
Consider a weakly reversible network G with connected components (linkage classes) G_1, G_2, …, G_ℓ.
Let δ denote the deficiency of G, and (for all i=1,2,…, ℓ) let δ_i denote the deficiency of G_i.
Assume the following:
* δ_i ≤ 1 for all i=1,2,…,ℓ, and
* δ_1+δ_2+ … + δ_ℓ = δ.
Then G is not multistationary:
for every vector of positive rate constants κ,
the mass-action system (G,κ) admits a unique positive steady state in every stoichiometric compatibility class.
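To make the quantities appearing in these statements concrete, the deficiency of a small network can be computed directly from its complexes and reactions. The helper below is an illustration only; it is applied to the reversible network of Example <ref>, whose deficiency works out to 1.

```python
import numpy as np

def deficiency(complexes, reactions):
    """complexes: (m, n) array, one row per complex; reactions: (i, j) pairs
    meaning complex i -> complex j.  Returns delta = m - l - dim(S)."""
    m = len(complexes)
    parent = list(range(m))                  # union-find for linkage classes
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in reactions:
        parent[find(i)] = find(j)
    num_linkage = len({find(i) for i in range(m)})
    S = np.array([complexes[j] - complexes[i] for i, j in reactions])
    return m - num_linkage - np.linalg.matrix_rank(S)

# {A+B <-> 2A, 2B <-> A, 0 <-> B}: 6 complexes, 3 linkage classes, dim(S) = 2
cplx = np.array([[1, 1], [2, 0], [0, 2], [1, 0], [0, 0], [0, 1]])
rxns = [(0, 1), (1, 0), (2, 3), (3, 2), (4, 5), (5, 4)]
print(deficiency(cplx, rxns))   # 1
```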
Our next topic, ACR, like multistationarity, is analyzed at the level of systems and also networks.
Let X_i be a species of a reaction network G with r reactions.
* For a fixed vector of positive rate constants κ∈ℝ^r_>0,
the mass-action system (G,κ) has absolute concentration robustness (ACR) in X_i if (G,κ) has a positive steady state and in every positive steady state x ∈_> 0^n of the system, the value of x_i is the same. This value of x_i is the ACR-value of X_i.
* The reaction network G
has unconditional ACR in species X_i if, for every vector of positive rate constants κ∈ℝ^r_>0, the mass-action system (G,κ) has ACR in X_i.
ACR requires the existence of a positive steady state (Definition <ref>(1)).
This requirement is sometimes not included in definitions of ACR in the literature.
However, this is not an extra requirement for some of the networks we consider, namely,
weakly reversible networks, for which
positive steady states are guaranteed to exist (see Deng et al. <cit.> and Boros <cit.>).
The property of unconditional ACR is often too restrictive.
Thus, many of our results focus on ACR (or other properties) that hold for some full-dimensional subset of the parameter space of rate constants ^r_> 0 (where r is the number of reactions of a given network).
The Lesbesgue measure of such a subset is nonzero. For simplicity, we use “measure” to mean Lebesgue measure.
[<ref>]
We revisit
the network { 0 κ_1← A κ_2→ 2A , B κ_3← A+B}.
From our earlier analysis, the mass-action system has ACR in B when κ_2 > κ_1 (which defines a nonzero-measure subset of the rate-constants space ^3_> 0), but lacks ACR when
κ_2 ≤κ_1 (as there are no positive steady states).
Consider the following network G, which is bimolecular and full-dimensional:
{
2X_2 κ_3⟵ X_2 [κ_2]κ_1⇄ X_1+X_2 κ_4⟶ X_1 }.
The mass-action ODEs are as follows:
ẋ_1 = κ_1x_2 - κ_2x_1x_2 = (κ_1 - κ_2x_1)x_2 ,
ẋ_2 = κ_3x_2 - κ_4x_1x_2 = (κ_3 - κ_4x_1)x_2 .
When
κ_1/κ_2 ≠ κ_3/κ_4, there are no positive steady states and hence no ACR.
Now assume κ_1/κ_2 = κ_3/κ_4.
In this case,
the positive steady states are defined by the line x_1=κ_1/κ_2, and so the system is multistationary and has ACR in species X_1. However, all the steady states of this system are degenerate.
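As a quick independent check of this degeneracy (a SymPy sketch of ours; since the network is full-dimensional, degeneracy amounts to a singular Jacobian), one can verify that under the constraint κ_1/κ_2 = κ_3/κ_4 the Jacobian determinant vanishes identically:

```python
import sympy as sp

x1, x2, k1, k2, k4 = sp.symbols('x1 x2 k1 k2 k4', positive=True)
k3 = k4*k1/k2                              # impose k1/k2 = k3/k4
f1 = (k1 - k2*x1)*x2                       # right-hand side for X1
f2 = (k3 - k4*x1)*x2                       # right-hand side for X2
J = sp.Matrix([f1, f2]).jacobian([x1, x2])
print(sp.simplify(J.det()))                # 0: the Jacobian is singular everywhere
print(sp.solve(sp.Eq(f1, 0), x1))          # [k1/k2]: the steady-state line x1 = k1/k2
```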
Consider the following network <cit.>, which we call G:
{
A [κ_1]κ_2⇆ A + B,
2B [κ_3]κ_4⇆ 3B,
A
[κ_5]κ_6⇆ 2A
} .
The mass-action ODEs (<ref>) are as follows:
dx_1/dt = κ_5x_1 - κ_6x_1^2
dx_2/dt = κ_1x_1-κ_2x_1x_2+κ_3x_2^2-κ_4x_2^3.
It follows that G has unconditional ACR in species A with ACR-value κ_5/κ_6 (the existence of positive steady states comes from the fact that G is reversible; recall Remark <ref>).
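A short SymPy sketch (ours; variable names are arbitrary) makes the unconditional ACR visible: the right-hand side for A factors so that every positive steady state has x_1 = κ_5/κ_6, independently of κ_1,…,κ_4 and of x_2:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
k1, k2, k3, k4, k5, k6 = sp.symbols('k1:7', positive=True)
f1 = k5*x1 - k6*x1**2                         # right-hand side for species A
f2 = k1*x1 - k2*x1*x2 + k3*x2**2 - k4*x2**3   # right-hand side for species B
print(sp.factor(f1))                 # factors as x1*(k5 - k6*x1), up to sign
print(sp.solve(sp.Eq(f1, 0), x1))    # [0, k5/k6]; the only positive root is k5/k6
# The value k5/k6 does not depend on x2 or on k1,...,k4, so every positive
# steady state has the same A-coordinate: unconditional ACR in A.
```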
The following result, which is <cit.>, concerns ACR in one-dimensional networks.
Let G be a one-dimensional network with species X_1, X_2, …, X_n.
If G has unconditional ACR in some species X_i^*, then
the reactant complexes of G differ only in species X_i^*
(more precisely, if y and y′ are both reactant complexes of G, then y_i=y′_i for all
i ∈{1,2,…,n}∖{i^* }).
§.§ Arrow diagrams
In this subsection, we recall the arrow diagrams associated to one-species networks. These diagrams are useful for stating results about such
networks <cit.>.
Let G be a reaction network with only one species X_1. Let m denote the number of (distinct) reactant complexes of G, which we list in increasing order of molecularity: a_1 X_1, a_2X_1, …, a_m X_1 (so, a_1 < a_2 < … < a_m). The arrow diagram of G is the vector
ρ = (ρ_1, ρ_2, … , ρ_m) ∈{→ , ←, ⟷}^m defined by:
ρ_i
:= {[ → if for every reaction a_i X_1 → bX_1 in G, the inequality b > a_i holds; ← if for every reaction a_i X_1 → bX_1 in G, the inequality b < a_i holds; ⟷ otherwise. ].
* The network {0 ← A, 2A → 3A} has arrow diagram (←, →).
* The network {0 ← A, A → 2A, 2A → 3A} has arrow diagram (⟷, →).
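The following small Python function (a hypothetical helper of ours, encoding a one-species reaction aX_1 → bX_1 as the pair (a,b), with '<->' standing for ⟷) computes arrow diagrams and reproduces the two examples above:

```python
# Sketch (hypothetical helper): the arrow diagram of a one-species network.
def arrow_diagram(reactions):
    reactants = sorted({a for a, _ in reactions})   # distinct reactant molecularities
    diagram = []
    for a in reactants:
        products = [b for (a2, b) in reactions if a2 == a]
        if all(b > a for b in products):
            diagram.append('->')
        elif all(b < a for b in products):
            diagram.append('<-')
        else:
            diagram.append('<->')
    return tuple(diagram)

print(arrow_diagram([(1, 0), (2, 3)]))          # ('<-', '->')
print(arrow_diagram([(1, 0), (1, 2), (2, 3)]))  # ('<->', '->')
```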
It is often useful to consider the arrow diagrams of “embedded” one-species networks, as follows.
Let G be a reaction network with species X_1, X_2, …, X_n. Given a species X_i, the corresponding embedded one-species network of G is obtained by deleting some (possibly empty) subset of the reactions, replacing each remaining reaction a_1 X_1 + a_2 X_2 + … +a_s X_s → b_1 X_1 + b_2 X_2 + … +b_s X_s by the reaction a_i X_i → b_i X_i, and then deleting any trivial reactions (i.e., reactions of the form a_i X_i → a_i X_i, in which the reactant and product complexes are equal) and keeping only one copy of duplicate reactions.
Consider the network G={ 0⇆ B → A }. The following networks are embedded one-species networks of G:
{0 → B}, {0 ← B }, {0 ⇆ B }, and {0 → A}.
§ RESULTS ON STEADY STATES AND NONDEGENERACY
This section contains results on the steady states of mass-action systems. We use these results in later sections
to prove our main results.
Section <ref> analyzes the steady states of full-dimensional networks,
while Section <ref>
pertains to non-full-dimensional networks.
Next, Section <ref> focuses on bimolecular networks and investigates scenarios in which the right-hand side of a mass-action ODE vanishes.
Finally,
Section <ref> concerns bimolecular networks that are reversible.
§.§ Full-dimensional networks
Consider a reaction network G with
n species, r reactions, and exactly j reactant complexes[ A network has exactly j reactant complexes if the set of distinct reactant complexes has size j.]; and let κ^* ∈ℝ^r_>0 be a vector of positive rate constants.
We often rewrite the mass-action ODE system (<ref>) for (G,κ^*)
as follows:
[ dx_1/dt; dx_2/dt; ⋮; dx_n/dt; ] =
N
[ m_1; m_2; ⋮; m_j ] ,
where N is an (n × j)-matrix (with real entries) and m_1,m_2,…,m_j are distinct monic monomials in x_1, x_2, …,x_n given by the reactant complexes.
[<ref>]
The network
{
2X_2 κ_3⟵ X_2 [κ_2]κ_1⇄ X_1+X_2 κ_4⟶ X_1 }
has two reactant complexes, which yield the monomials m_1:=x_2 and m_2:=x_1x_2.
Consider (κ_1,κ_2,κ_3,κ_4)=(1,2,3,6) (so, κ_1/κ_2=κ_3/κ_4 holds). Now the matrix N, as in (<ref>), is as follows:
N := [ 1 -2; 3 -6 ] .
This matrix N does not have full rank, and we saw earlier that all steady states of this mass-action system are degenerate. In the next result, part (1) asserts that this phenomenon holds in general.
Let G be a full-dimensional reaction network with n species, and κ^* be a vector of positive rate constants.
Let N be a matrix defined, as in (<ref>), by the mass-action ODE system of (G,κ^*).
* If rank(N) ≤ n-1, then every positive steady state of (G,κ^*) is degenerate.
* If rank(N) = n and G has exactly n+1 reactant complexes, then the positive steady states of (G,κ^*)
are the positive roots of a system of binomial equations (sharing some common monomial m_0) of the following form:
m_i - β_i m_n+1 = 0 for i=1,2,…, n ,
where β_1,β_2, …, β_n∈ℝ and m_1, … , m_n+1 are distinct monic monomials in x_1, x_2, …,x_n.
* If G has exactly n+1 reactant complexes and (G,κ^*) has a nondegenerate, positive steady state, then (G,κ^*) is not multistationary.
Assume (G,κ^*) is a full-dimensional mass-action system in n species, and let N be as in (<ref>).
First, we prove (1). Assume rank(N) ≤ n-1, and let x^* be a positive steady state.
It follows that the polynomials f_i, as in (<ref>), are linearly dependent (over ℝ). Hence,
the Jacobian matrix – even before evaluating at x^* – has rank less than n. Thus, the image of the Jacobian matrix, after evaluating at x^*, has dimension less than n, i.e., im(df(x^*)|_S) ≠ ℝ^n = S.
Hence, x^* is degenerate.
Next, we prove (2).
As in equation (<ref>), we write the mass-action ODEs for (G,κ^*) as
[ dx_1/dt; dx_2/dt; ⋮; dx_n/dt; ] =
N
[ m_1; ⋮; m_n; m_n+1; ] ,
where N is n × (n+1) and the m_i's are distinct monic monomials in x_1,x_2, …,x_n.
As G is full-dimensional and rank(N) =n, we can relabel the m_i's, if needed, so that the square sub-matrix of N formed by the first n columns has rank n. Thus, by row-reducing N, we obtain a matrix of the following form (where β_1, β_2, …,β_n ∈ℝ):
N' := [[ -β_1; I_n -β_2; ⋮; -β_n; ]] .
We conclude from the above discussion that the positive steady states of (G,κ^*) are the positive roots of the following n binomial equations (which are in the desired form):
m_i - β_i m_n+1 = 0 for i=1,2,…, n .
Before moving on to part (3), we summarize what we know (so we can use it later).
The positive steady states are the roots of the binomials (<ref>),
which we rewrite using Laurent monomials (our interest is in positive roots, so there is no issue of dividing by zero):
x_1^a_i1
x_2^a_i2…
x_n^a_in := m_i/m_n+1 = β_i for i=1,2,…, n .
We apply the natural log to (<ref>) and obtain the following, which involves the n × n matrix A:=(a_ij):
A
[ ln(x_1); ln(x_2); ⋮; ln(x_n) ] = [ ln(β_1); ln(β_2); ⋮; ln(β_n) ] =: ln (β) .
Now we prove (3). Assume x^* is a nondegenerate, positive steady state. (We must show that no other positive steady states exist.) By part (1), the n × (n+1) matrix N has rank n, so the proof of part (2) above applies.
Assume for contradiction that x^** is a positive steady state, with x^**≠ x^*. Then, by (<ref>),
the linear system Ay=ln(β) has more than one solution, and so rank(A) ≤ n-1. It follows that the set of positive steady states,
{(e^y_1, e^y_2, …, e^y_n) | Ay=ln(β)}, is positive-dimensional and so (by the Inverse Function Theorem and the fact that G is full-dimensional) all positive steady states of (G,κ^*) are degenerate. This is a contradiction, as x^* is nondegenerate.
For algebraically inclined readers, observe that the equations in Proposition <ref>(2) define a toric variety.
Additionally, every such variety has at most one irreducible component that intersects the positive orthant <cit.>.
This fact can be used to give a more direct proof of Proposition <ref>(3).
The end of the proof of
Proposition <ref>
concerns nondegenerate positive steady states
and their relation to the dimension of the set of positive steady states. More ideas in this direction are explored in the recent work of Feliu, Henriksson, and Pascual-Escudero <cit.>.
Let G be a full-dimensional reaction network with n species, let κ^* be a vector of positive rate constants,
and let f_1,f_2,…, f_n denote the right-hand sides of the mass-action ODEs of (G,κ^*).
If f_i is the zero polynomial, for some i∈{1,…,n}, then
every positive steady state of (G,κ^*) is degenerate.
This result follows directly from Proposition <ref>(1) and the fact that, in this case, the rank of N, as in (<ref>), is strictly less than n.
The next two results pertain to networks with few reactant complexes (at most n, where n is the number of species)
and many reactant complexes (at least n), respectively.
Let G be a reaction network with n species.
* If G has exactly 1 reactant complex, then, for every vector of positive rate constants κ^*, the mass-action system (G, κ^*) has no positive steady states.
* If G has exactly j reactant complexes, where 2 ≤ j ≤ n (in particular, n ≥ 2), and G is full-dimensional, then every positive steady state (of every mass-action system defined by G) is degenerate.
Assume G has n species, which we denote by X_1, X_2, …, X_n, with exactly j reactant complexes, for some 1≤ j ≤ n.
Let κ^* be a vector of positive rate constants.
As in (<ref>), we write the mass-action ODE system arising from (G, κ^*) as follows:
[ dx_1/dt; ⋮; dx_n/dt; ] =
N
[ m_1; ⋮; m_j; ] =: [ f_1; ⋮; f_n; ]
,
where N:=(N_ij) is an (n × j)-matrix (with entries in ℝ)
and m_1, …, m_j are distinct monic monomials in x_1, …, x_n
(as G has n species and j reactant complexes).
We first prove part (1). In this case, the right-hand sides of the ODEs have the form f_i = c_i ∏_k=1^nx_k^a_k, with at least one c_i≠ 0. It follows that there are no positive steady states.
We prove part (2). Assume that G is full-dimensional
(the stoichiometric subspace
is ℝ^n)
and that 2 ≤ j ≤ n.
Let x^*=(x_1^*,x_2^*,…,x_n^*) be a positive steady state. We must show x^* is degenerate.
We first consider the subcase when
the rank of the matrix N is
at most (n-1).
By Proposition <ref>(1), every positive steady state is degenerate.
Now we handle the remaining subcase, when N has rank n (and hence, N is n× n).
Now, solving the steady-state equations f_1= … = f_n=0 can be accomplished by multiplying the expression in (<ref>) by N^-1, which implies that every monomial m_1,…, m_n evaluates to zero at steady state. Hence, no positive steady states exist.
If G is a full-dimensional network with n species
and exactly j reactant complexes, where j ≥ n, then:
* There exists a vector of positive rate constants κ^*, such that the corresponding matrix N, as in (<ref>), has rank n.
* If there exists a vector of positive rate constants κ^* such that the matrix N does not have rank n, then there exists
a vector of positive rate constants
κ^** such that
(G,κ^**) has no positive steady states.
Assume G is full-dimensional, with n species, r reactions (denoted by y_1 → y_1', … y_r → y_r'),
and exactly j reactant complexes, where j ≥ n.
We begin with part (1).
Let κ=(κ_1,…, κ_r) denote the vector of unknown rate constants (each κ_i is a variable).
Let N be the
(n × j)
matrix for (G,κ) in the sense of N in (<ref>).
More precisely,
the entries of N are ℤ-linear combinations of the κ_i's,
such that, for every vector of positive rate constants κ^* ∈_> 0^r, the evaluation N|_κ=κ^* is the matrix N as in (<ref>) for (G, κ^*).
As G is full-dimensional, there are no ℝ-linear relations among the n rows of N.
Hence, the size-n minors of N
define a (possibly empty) measure-zero subset V ⊆^r_> 0.
Thus, ^r_> 0∖ V is nonempty, and every κ^* ∈^r_> 0∖ V yields a matrix N=N|_κ=κ^* with rank n.
This proves part (1).
For part (2), suppose that there exists κ^* ∈_> 0^r such that
the resulting matrix
N has rank strictly less than n.
It follows that there is a linear relation:
c_1f_κ^*,1+… + c_nf_κ^*,n = 0 ,
where c_1, …, c_n are real numbers – not all 0 – and the f_κ^*,i denote the right-hand sides of the mass-action ODEs for (G,κ^*).
On the other hand, for unknown rate constants κ, as in the proof above for part (1),
c_1f_κ,1+… + c_nf_κ,n is not the zero polynomial. Thus, when we rewrite this expression as a sum over r reactions y_i → y_i' as follows: c_1f_κ,1+… + c_nf_κ,n
= d_1 κ_1 x^y_1 + … + d_r κ_r x^y_r, where d_i ∈ℤ for all i, we conclude that d_i ≠ 0 for some i. By relabeling reactions, if needed, we may assume that i=1.
Now consider the following vector of positive rate constants
κ_ϵ^* := (κ_1^* + ϵ ,
κ_2^*, …, κ_r^*), for some ϵ>0.
Assume for contradiction that (G,κ_ϵ^*) has a positive steady state x^*. At steady state,
f_κ^*_ϵ,i evaluates to 0, for all i, and this yields the first equality here:
0 = ( c_1f_κ^*_ϵ,1+… + c_nf_κ^*_ϵ,n) |_x=x^* = c_1f_κ^*,1|_x=x^*+… + c_nf_κ^*,n|_x=x^* + ϵ d_1 x^y_1|_x=x^* = ϵ d_1 x^y_1|_x=x^* ,
and the second and third equalities come from the fact that the mass-action ODEs are linear in the rate constants and from equation (<ref>), respectively.
We obtain x^y_1|_x=x^*=0, which contradicts the fact that x^* is a positive steady state. This concludes the proof.
The next proposition returns to a topic from Proposition <ref>, namely, networks with n species and n+1 reactant complexes.
Assume G is a full-dimensional network, with n species and exactly n+1 reactant complexes,
which we denote as follows:
y_i1 X_1 + y_i2 X_2 + … y_in X_n for i=1,2,…,n+1 .
Let A denote the n × n matrix obtained from the
(n+1) × n matrix
Y:=(y_ij) by subtracting the last row from every row and then deleting the last row.
* If rank(A) = n, then G is not nondegenerately multistationary.
* If rank(A) ≤ n-1,
then there exists a vector of positive rate constants κ^* such that (G,κ^*) has no positive steady states.
Case 1: rank(A) = n. Fix an arbitrary vector of positive rate constants κ^*. We must show that (G, κ^*) is not nondegenerately multistationary.
Let N denote the n × (n+1) matrix defined by (G, κ^*), as in (<ref>).
We consider two subcases.
Subcase: rank(N) ≤ n-1. In this subcase, Proposition <ref>(1)
implies that every positive steady state of (G, κ^*) is degenerate, and so (G, κ^*) is not nondegenerately multistationary.
Subcase: rank(N) = n.
Part (2) of Proposition <ref> pertains to this setting, so we can follow that proof.
In particular,
equation (<ref>) – the (n × n) matrix A there exactly matches the matrix A here – implies that the positive steady states are defined by
a linear system of the form Ay=ln(β), where y=(ln(x_1), …, ln(x_n))^⊤.
Hence, as rank(A) = n, we have at most one positive steady state and so (G, κ^*) is not multistationary.
Case 2: rank(A) ≤ n-1. We must show that there exists a choice of rate constants so that the resulting system has no positive steady states.
Proposition <ref>(1)
implies that
there exists κ^* such that the following holds:
the matrix N defined by (G,κ^*) has (full) rank n.
Fix such a choice of κ^*.
If (G, κ^*) has no positive steady states, then we are done. Therefore, for the rest of the proof, we assume that (G, κ^*) admits a positive steady state.
In what follows, we need to consider additional vectors of positive rate constants (besides κ^*) and their corresponding matrices N, as in (<ref>). Therefore, as in the proof of Proposition <ref>(1), let κ=(κ_1,…, κ_r) (where r is the number of reactions) denote the vector of unknown rate constants, and let N be the
n × (n+1)
matrix for (G,κ) in the sense of N in (<ref>), so that
for every vector of positive rate constants κ^* ∈_> 0^r, the evaluation N|_κ=κ^* is the matrix N as in (<ref>).
We now follow the ideas in the proof of Proposition <ref>, part (2), with the difference being that we now consider unknown rate constants κ. The mass-action ODEs for (G,κ) are given by:
[ dx_1/dt; ⋮; dx_n/dt; ] = N[ m_1; ⋮; m_n+1; ] ,
where m_1,…, m_n+1 are distinct monic monomials in x_1,x_2, …,x_n.
Our next aim is to row-reduce N (over the field ℚ(κ_1,…, κ_r)).
Accordingly, for 1≤ k ≤ n+1, let [B_k] denote the determinant of the matrix obtained from N by removing the k-th column. By construction, each [B_k] is
in ℤ[κ_1,…, κ_r].
We claim that, for all 1 ≤ k ≤ n+1, the polynomial [B_k] is nonzero. By symmetry among the monomials m_i, it suffices to show that [B_n+1] is nonzero. To show this, assume for contradiction that [B_n+1]=0. Then, N can be row-reduced to a matrix in which the last row has the form (0,0,…, 0, ω), where 0 ≠ω∈ℚ(κ_1,…, κ_r). Now consider the evaluation at κ=κ^*. By (<ref>), the matrix N = N|_κ=κ^* has (full) rank n, so ω |_κ=κ^* is nonzero. However, this implies that positive steady states of (G,κ^*) satisfy ω|_κ=κ^* m_n+1 = 0, much like in (<ref>). Thus, (G,κ^*) has no positive steady states, which is a contradiction, and hence our claim holds.
Next, as [B_n+1] is nonzero, we can apply a version of Cramer's rule
to row-reduce N to the following matrix (where I_n denotes the size-n identity matrix):
N' = [
[ (-1)^n-1[B_1][B_n+1]; I_n (-1)^n-2[B_2][B_n+1]; ⋮; (-1)^0[B_n][B_n+1]; ]] .
Thus, as in (<ref>), the positive steady states are the
positive roots of the equations m_i - β_i m_n+1=0 (for i=1,2,…,n), where:
β_i := (-1)^n-i+1[B_i]/[B_n+1] for i=1,2,…,n .
Thus, β_i|_κ= κ^* >0 (for all i=1,2,…,n), since (G,κ^*) admits a positive steady state. We conclude from this fact, plus the claim proven earlier (namely, that [B_ℓ] ≠ 0 for all ℓ), that the following is an open subset of ℝ^r_>0 that contains κ^*:
Σ := {κ̅∈ℝ^r_>0 :
β_1|_κ= κ̅ >0, …, β_n|_κ= κ̅ >0,
[B_1]|_κ=κ̅≠ 0, …, [B_n+1]|_κ= κ̅≠ 0 }.
For the rest of the proof, we restrict our attention to rate constants, like κ^*, that are in Σ. For such rate constants, like in (<ref>–<ref>),
the positive steady states are the roots of the following equation
A
[ ln(x_1); ln(x_2); ⋮; ln(x_n) ] = [ ln(β_1); ln(β_2); ⋮; ln(β_n) ] =: ln (β) .
Next, as rank(A) ≤ n-1, there exists a nonzero vector γ∈ℝ^n
in the orthogonal complement of the column space of A.
By relabeling the m_i's (which permutes the columns of N), if needed,
we may assume that γ_1 ≠ 0. By construction of γ and equation (<ref>), we have ⟨γ, ln(β) ⟩ = 0, which is readily rewritten as follows:
( [B_1]/[B_n+1])^γ_1…((-1)^k+1[B_k]/ [B_n+1] )^γ_k…((-1)^n+1 [B_n]/ [B_n+1] )^γ_n = 1 .
For ε>0, let κ^*_ϵ denote the vector of rate constants obtained from κ^* by scaling by (1+ ε) all rate constants of reactions in which the reactant generates the monomial m_1.
As Σ is an open set, κ^*_ϵ∈Σ for ε sufficiently small.
Also, by construction, the matrix N |_κ = κ^*_ε is obtained from
N |_κ = κ^* by scaling the first column by (1+ ε).
So, for 2 ≤ i ≤ n+1, we have [B_i] |_κ = κ^*_ε = (1+ ϵ) [B_i] |_κ = κ^*.
Thus, by replacing κ^* by κ^*_ε, the left-hand side of equation (<ref>) is scaled by
(1+ε)^-γ_1, and so there exists ε>0 for which equation (<ref>) does not hold (when evaluated at κ = κ^*_ε).
Hence, this vector κ^*_ε yields a mass-action system (G,κ^*_ε) with no positive steady states, as desired.
Proposition <ref> implies that for networks with at least n reactant complexes (where n is the number of species), some choice of rate constants yields a matrix N with (full) rank n.
Our next result shows that when this condition holds (even for networks with fewer reactants),
every species appears in at least one reactant complex.
We introduce the following shorthand (which we use in several of the next results): a complex
y_ℓ 1X_1+ y_ℓ 2X_2 + … + y_ℓ nX_n involves species X_i if y_ℓ i≠ 0. For instance, X_1+X_2 involves X_2, but X_1+X_3 does not.
Let G be a full-dimensional reaction
network with n species, let κ^* be a vector of positive rate constants, and let N be the matrix for (G,κ^*), as in (<ref>). If rank(N) = n and (G,κ^*) has a positive steady state, then
for every species X_i, at least one reactant complex of G involves X_i.
We prove the contrapositive. Assume that there is a species X_i such that for every reactant complex a_1X_1+ a_2 X_2 + … +a_nX_n we have a_i = 0. Then, by Lemma <ref>, the right-hand side of the mass-action ODE for X_i, which we denote by f_i, is a sum of monomials, all of which have positive coefficients. But (G,κ^*) has a positive steady state, so f_i must be 0. We conclude that the i-th row (of the n rows) of N is the zero row and so rank(N) ≤ n-1.
§.§ Networks with conservation laws
The following result is similar to several results in the prior subsection, but pertains to networks that are not full-dimensional.
Let G be a reaction network with n ≥ 3 species.
Assume that G is (n-k)-dimensional, where k ≥ 1 (so, G has k conservation laws).
If G has exactly j reactant complexes, for some j ∈{2,3,…, n-k},
then every positive steady state (of every mass-action system defined by G) is degenerate.
We mimic the proofs of Propositions <ref>(1) and <ref>.
Let κ^* be a vector of positive rate constants. Let N be an (n × j) matrix defined, as in (<ref>), by (G,κ^*):
[ dx_1/dt; ⋮; dx_n/dt; ] =
N
[ m_1; ⋮; m_j; ] =: [ f_1; ⋮; f_n; ]
,
where
m_1, …, m_j are distinct monic monomials in x_1, …, x_n.
We consider two cases. First assume that rank(N) ≤ n-k-1. Then the polynomials f_i span a subspace of dimension ≤ n-k-1 and hence
the Jacobian matrix – even before evaluating at a positive steady state – has rank ≤ n-k-1. Every positive steady state is therefore degenerate.
Consider the remaining case: rank(N) = n-k (so, j=n-k).
In this case, multiplication by N
defines an injective map ℝ^n-k→ℝ^n.
Hence, by (<ref>), the steady-state equations f_1= … = f_n=0
imply the monomial equations m_1=…= m_j=0. Thus, there are no positive steady states.
The next result concerns networks with n-1 conservation laws, that is, one-dimensional networks.
Let G be a one-dimensional reaction network, and let κ^* be a vector of positive rate constants.
If (G,κ^*) has ACR, then (G,κ^*) is not nondegenerately multistationary.
Assume that G
is one-dimensional, with n species.
Thus, G has n-1 linearly independent conservation laws.
Let κ^* be a vector of positive rate constants for which there is ACR.
We may assume that the ACR species is X_1 (by relabeling species, if needed). Let f_1,…, f_n denote the right-hand sides of the mass-action ODEs arising from (G,κ^*).
Let x^*=(x_1^*, …, x_n^*) denote an arbitrary positive steady state of (G,κ^*). (The ACR-value is x^*_1.)
Let P_x^* denote the (one-dimensional) stoichiometric compatibility class that contains x^*.
It suffices to show that (1) x^* is the unique positive steady state in P_x^* or (2) x^* is degenerate.
We consider two cases.
Case (a): X_1 is not a catalyst-only species (in some reaction of G).
This implies that f_2,…, f_n are all scalar multiples of f_1, and that
the compatibility class
P_x^* is defined by n-1 conservation laws of the form
x_j=a_j x_1+b_j, where a_j,b_j ∈ℝ, for j∈{2,3,…, n}.
By substituting these n-1 relations into f_1, we obtain a univariate polynomial in x_1, which we denote by h. If h has multiple positive roots, then there is no ACR, which is a contradiction.
If, on the other hand, h does not have multiple positive roots, then
P_x^* does not contain multiple positive steady states (that is, x^* is the unique positive steady state in P_x^*).
Case (b): X_1 is a catalyst-only species in all reactions of G.
In this case, f_1=0, and x_1=x_1^* is a conservation law of G, and it is one of the defining equations of the compatibility class
P_x^*. By relabeling species X_2,…, X_n, if needed, we may assume that
X_2 is not a catalyst-only species (as G is one-dimensional).
Thus, we can “extend” the conservation law x_1=x_1^* to a “basis” of n-1 conservation laws
that define the compatibility class
P_x^*,
by appending n-2 conservation laws of the form
x_j=a_jx_2+b_j, where a_j,b_j ∈ℝ, for j∈{3,4,…, n}.
Next, we substitute these n-2 conservation relations into f_2, which yields a polynomial in x_1 and x_2, which we denote by g. Consider the following set, which is the positive variety of g in ℝ^2_>0 (the values of x_3,…, x_n are free, so we ignore them):
Σ := {x∈ℝ^2_>0| g(x_1,x_2)=0 }.
By construction and the fact that there is ACR in X_1, the set Σ is contained in the hyperplane (line) x_1=x_1^*, and so is either one-dimensional or zero-dimensional.
We consider these two subcases separately.
First, assume that Σ is one-dimensional. In this subcase, Σ equals the subset of the hyperplane x_1=x_1^* in the positive quadrant ℝ^2_>0, and so the compatibility class
P_x^* consists entirely of positive steady states. The Inverse Function Theorem now implies that every positive steady state of
P_x^* (in particular,
x^*) is degenerate.
Consider the remaining subcase, in which Σ is zero-dimensional
(that is, Σ consists of finitely many points).
It follows that g is either non-negative on ℝ^2_>0 or non-positive on ℝ^2_>0, and so f_2 is either non-negative on P_x^* or non-positive on P_x^*.
Consequently, as every f_i is a scalar multiple of f_2,
the steady state x^* is degenerate.
§.§ Bimolecular networks
We begin this subsection with a result that clarifies how the polynomials arising in mass-action ODEs are constrained when the network is bimolecular.
Consider a bimolecular mass-action system (G,κ^*) with n species.
Let f_i be the right-hand side of the mass-action ODE for species X_i (for some 1 ≤ i ≤ n).
Fix positive values a_j >0 for all j ∈{1,2,…, n}∖{i}.
Let g_i denote the univariate polynomial obtained by evaluating f_i at x_j=a_j for all j ∈{1,2,…, n}∖{i}. If the polynomial g_i is nonzero, then g_i has at most one sign change and hence has at most one positive root.
Let g_i denote the nonzero polynomial obtained by evaluating f_i at x_j=a_j for all j≠ i.
Several properties of g_i arise from the fact that G is bimolecular:
(1) (g_i) ≤ 2, (2) the coefficient of x_i^2 is non-positive, and (3) the constant coefficient is non-negative. Thus, g_i has at most one sign change, and so Descartes' rule of signs implies that g_i has at most one positive root.
The next two results
pertain to bimolecular mass-action systems in which the right-hand side of some ODE vanishes (Propositions <ref>) or vanishes when evaluated at an ACR-value (Proposition <ref>). We motivate these results through the following example.
[Enlarged Shinar-Feinberg network]
A common way to construct a network with an ACR species (e.g., A) is through the existence of an f_i that becomes zero when we substitute the ACR-value in place of the species. We illustrate this idea through the following network:
G = {A+B → 2B , B → A, 0 ← B+C → 2B, 0 → C} .
This network is constructed from a well-studied network first introduced by Shinar and Feinberg <cit.>
by adding three reactions involving a new species (C).
We examine the mass-action ODE for B:
dx_2/dt = κ_1x_1x_2-κ_2 x_2 - κ_3 x_2x_3+ κ_4 x_2x_3 = x_2(κ_1 x_1-κ_2) + x_2x_3(-κ_3+κ_4)
= g+h =: f_2 ,
where g:= x_2(κ_1 x_1-κ_2) (which is the right-hand side of the ODE for X_2 in the original Shinar-Feinberg network) and
h:= x_2x_3(-κ_3+κ_4) (arising from the additional reactions, involving X_3).
Assume κ_3=κ_4. It is easy to check that (G,κ) has
a positive steady state and also has
ACR in species X_1 with ACR-value α = κ_2/κ_1. Also, observe that f_2|_x_1=α=0, as a result of the equalities g|_x_1=α=0 and h=0 (which is due to the equality κ_3=κ_4).
The next two results characterize which reactions can exist in such a situation. More precisely:
* Proposition <ref> gives conditions that hold when a mass-action ODE is zero (effectively characterizing what reactions
can yield h=0 in this case).
* Proposition <ref> gives conditions that hold when a mass-action ODE is zero when evaluated at the ACR-value (effectively characterizing what reactions
can
yield f_2|_x_1=α=0 in this case, involving a decomposition like the one we observed above: f_2|_x_1=α=g|_x_1=α +h).
The next result uses the following notation:
[Empty complex]
We introduce the dummy variable X_0:=0, so that (for instance) X_0 is the empty complex and X_i+X_0 := X_i for any species X_i.
The following result clarifies which reactions can exist if some mass-action ODE is zero.
Let G be a bimolecular reaction network with n species X_1, X_2, …, X_n. Fix 1 ≤ i ≤ n.
Let κ^* denote a vector of positive rate constants for G,
and
let f_i denote the right-hand side of the mass-action ODE for species X_i in the system (G, κ^*). If f_i is the zero polynomial, then
the set of reactions of G in which X_i is a non-catalyst-only species is a (possibly empty) subset of the reactions listed here (where our use of X_0 follows Notation <ref>):
* the reactions of the form X_i +X_j → 2X_i
(and we denote the rate constant by κ^*_1,j), where j∈{0,1,… n}∖{i},
* the reactions of the form X_i + X_j →⋆ (with rate constant κ^*_2,j,ℓ, where ℓ is an index for such reactions), where j∈{0,1,… n}∖{i} and ⋆ is any complex that does not involve X_i,
and, additionally, the following relationships among the rate constants hold:
κ^*_1,j = ∑_ℓκ^*_2,j,ℓ for all j∈{0,1,… n}∖{i} ,
where a rate constant is set to 0 if the corresponding reaction is not in G.
Let κ^* be a vector of positive rate constants for a bimolecular network G with n species,
and let f_i be the right-hand side of (G,κ^*) for the species X_i.
Let Σ denote the set of reactions of G in which X_i is a non-catalyst-only species. Reactions not in Σ do not contribute to f_i, so we ignore them for the rest of the proof.
We claim that for all reactions in Σ, the reactant complex is not one of the following 5 types: 0, X_j, X_j + X_j', 2X_j, 2X_i for any j,j' ∈{1,2,…, n }∖{i}. Indeed, any of the first 4 types of complexes would yield a constant term in f_i (when viewed as a polynomial in x_i) consisting of a sum of monomials with positive coefficients; similarly, the last type (2X_i) would yield a negative x_i^2 term (the fact that G is bimolecular is used here). However, f_i is zero, so the claim holds.
It follows that, for every reaction in Σ, the reactant complex either is X_i or has the form X_i+X_j for some j ∈{1,2,…, n }∖{i}. It is straightforward to check that all possible such reactions (in which X_i is a non-catalyst-only species) are listed in the proposition. Next, reactions of type (1) in the proposition contribute positively to f_i, while those of type (2) contribute negatively, as follows:
f_i = ( κ^*_1,0 - ∑_ℓκ^*_2, 0, ℓ) x_i + ∑_j∈{1,2,…, n }∖{i}( κ^*_1,j - ∑_ℓκ^*_2,j,ℓ) x_i x_j .
As f_i=0, the coefficient of x_i and the coefficient of each x_i x_j in (<ref>) must be 0, which yields the desired equalities (<ref>).
Proposition <ref> concerns general (bimolecular) mass-action systems, and now we consider those with ACR. The next result characterizes which reactions can exist if some mass-action ODE becomes zero when evaluated at the ACR-value.
Let G be a bimolecular reaction network with species X_1, X_2, …, X_n, where n ≥ 2.
Let κ^* denote a vector of positive rate constants.
Assume that the mass-action system (G, κ^*) has ACR in species X_1 with ACR-value α >0.
Fix 2 ≤ i ≤ n.
Let f_i denote the right-hand side of the mass-action ODE for species X_i in the system (G, κ^*).
If f_i≠ 0 and f_i|_x_1=α is the zero polynomial, then
the set of reactions of G in which X_i is a non-catalyst-only species
is a nonempty subset of the following reactions (the same as the ones in Proposition <ref>):
* the reactions of the form X_i +X_j → 2X_i
(and we denote the rate constant by κ^*_1,j), where j∈{0,1,… n}∖{i},
* the reactions of the form X_i + X_j →⋆ (with rate constant κ^*_2,j,ℓ, where ℓ is an index for such reactions), where j∈{0,1,… n}∖{i} and ⋆ is any complex that does not involve X_i.
Additionally, the following
relationship between the ACR-value α and the rate constants holds:
α = [ ( ∑_ℓκ^*_2,0, ℓ) - κ^*_1,0 ] / [ κ^*_1,1 - ( ∑_ℓκ^*_2,1,ℓ) ] .
In particular, the numerator and denominator of (<ref>) are nonzero.
Finally, if n≥ 3, then the following
relationships among the rate constants hold:
κ^*_1,j = ∑_ℓκ^*_2,j,ℓ for all j∈{2,3,… n}∖{i} .
(In equations (<ref>)–(<ref>), a rate constant is set to 0 if the corresponding reaction is not in G.)
Assume that f_i is nonzero, but f_i|_x_1=α is zero. Using properties of polynomial rings over a field, it follows that (x_1- α) divides f_i.
From the fact that G is bimolecular, we conclude that:
f_i = (x_1 - α) ( β x_i + γ + ∑_j ∈ [n] ∖{i}δ_j x_j )
= β x_1 x_i + γ x_1 + ( ∑_j ∈ [n] ∖{i}δ_j x_1 x_j ) - αβ x_i - αγ - ( ∑_j ∈ [n] ∖{i}αδ_j x_j ) ,
for some real numbers β, γ, δ_j, at least one of which is nonzero.
In the right-hand side of (<ref>), the variable x_i does not appear in any of the following monomials (here the hypothesis i≠ 1 is used):
γ x_1, -αγ, δ_j x_1 x_j, -αδ_j x_j ,
for
j ∈{1,2,…,n}∖{i},
so Lemma <ref> implies that the coefficients of these monomials must be non-negative.
Since α>0, we conclude that γ=0 and δ_j=0
(for all j ∈{1,2,…,n}∖{i}).
Thus, using (<ref>), we have
f_i= β x_1 x_i - αβ x_i, for some β∈ℝ∖{0}. Next, we investigate which reactions contribute to the two monomials in f_i.
For β x_1 x_i, the contributing reactions
have the form X_1+X_i→ 2X_i and X_1 + X_i →⋆, where ⋆ does not involve X_i. The first reaction contributes positively,
while the second type contributes negatively.
Let κ^*_1,1 be the reaction rate constant for X_1+X_i→ 2X_i (as in the statement of the lemma) and κ^*_2,1,ℓ be the rate constant for reactions of type X_1 + X_i →⋆, where ℓ is an index for all the reactions of this type. We conclude that
κ^*_1,1 - ∑_ℓκ^*_2,1,ℓ = β .
Similarly, the monomial -αβ x_i in f_i comes from reactions of the form X_i → 2X_i, which contributes positively,
and X_i →⋆, which contributes negatively, where ⋆ is a complex that does not involve X_i.
Hence,
κ^*_1,0 -
∑_ℓκ^*_2,0,ℓ = - αβ .
Now the
equations (<ref>)
and (<ref>) together imply
the desired equality (<ref>).
Next, let Σ denote the set of reactions of G in which X_i is a non-catalyst-only species. We showed above that Σ contains a (nonempty) subset of reactions with rate constants labeled by κ^*_1,0, κ^*_1,1, κ^*_2,0,ℓ, κ^*_2,1,ℓ. Let Σ' ⊆Σ denote the remaining reactions, and let G' denote the subnetwork defined by the reactions in Σ'.
Let κ' be obtained from κ^* by restricting to coordinates corresponding to reactions in Σ'.
By construction, the mass-action ODE of (G', κ') for species X_i has right-hand side equal to 0.
So,
Proposition <ref> applies (where reactions arising from j=0,1 in that proposition are absent from G' by construction),
and yields two conclusions.
First,
Σ' is a subset of the reactions listed in Proposition <ref> (specifically, with j ≠ 0,1), and so Σ is a subset of the full list (including j = 0,1).
Second,
the equations (<ref>) hold (for j ≠ 0,1), which are the desired equalities (<ref>).
The reactions listed in
Propositions <ref> and <ref> (the lists are the same) are not reversible. Hence, if G is a reversible network satisfying the hypotheses of either proposition, then X_i is a catalyst-only species in every reaction of G.
[<ref>]
We revisit the enlarged Shinar-Feinberg network, G=
{A+B → 2B , B → A, 0 ← B+C → 2B, 0 → C}.
Recall that, when κ_3=κ_4, the mass-action system (G,κ) has ACR in X_1 with ACR-value α= κ_2/κ_1, and that f_2|_x_1=α=0. In the notation of Proposition <ref>, the rate constants of reactions in which X_2 is non-catalyst-only are:
κ^*_1,1=κ_1, κ^*_2,0,1=κ_2, κ^*_2,3,1=κ_3, κ^*_1,3=κ_4 .
Now the formula in Proposition <ref> for the ACR-value (<ref>) exactly yields the ACR-value computed earlier: α= κ_2/κ_1, and
the relationship among rate constants (<ref>) recapitulates κ_3=κ_4.
The formula for the ACR-value, in (<ref>), is related to the concept of “robust ratio” introduced by Johnston and Tonello <cit.>.
§.§ Three reversible reactions are necessary for multistationarity
Recall that, in (<ref>), we saw an instance of a (nondegenerately) multistationary, bimolecular network that consists of 3 pairs of reversible reactions. In this subsection, we prove that bimolecular networks with fewer pairs of reversible reactions are non-multistationary
(Theorem <ref>).
Our proof of Theorem <ref>
requires
several
supporting lemmas on one-dimensional networks.
For the next lemma, recall from Section <ref> that cap_pos(G)
(respectively, cap_nondeg(G))
denotes the maximum possible number of positive (respectively, nondegenerate and positive) steady states of a network G. In Lemma <ref> below, part (1) was conjectured by Joshi and Shiu <cit.> and then proved by Lin, Tang, and Zhang <cit.> (see also <cit.>).
Part (2) is due to Tang and Zhang <cit.>.
Let G be a one-dimensional reaction network.
* If G is multistationary and cap_pos(G) < ∞, then G has an embedded one-species network with arrow diagram (←, →) and another with arrow diagram (→, ←).
* If cap_pos(G) < ∞, then cap_nondeg(G) = cap_pos(G).
Joshi and Shiu showed that the network G={ 0 ← A → 2A} is the only one-species, bimolecular network for which cap_pos(G) =∞ <cit.>.
The following lemma generalizes this result from one-species networks to one-dimensional networks.
Let G be a one-dimensional and bimolecular reaction network with n species. The following are equivalent:
* cap_pos(G) = ∞.
* Up to relabeling species, G is one of the following networks:
* {2X_1 ← X_1+X_2 → 2X_2 },
* {X_1 → 2X_1}∪Σ, where Σ consists of at least one reaction from the following set:
{0 ← X_1 }∪{X_i ← X_1 + X_i | i=2,3,…, n} .
Additionally, for the networks listed above in 2(a) and 2(b),
every positive steady state (of every mass-action system arising from the network G) is degenerate.
Let G be a one-dimensional, bimolecular reaction network.
Up to relabeling species, the one-dimensional stoichiometric subspace is spanned by one of the following seven vectors:
(1,0,0,…, 0) , (1,-1,0,0,…, 0) ,
(1,1,0,0,…, 0) ,
(1,-2, 0,0,…, 0) ,
(1,1,-1,0,0,…, 0) ,
(1,1,-2, 0,0,…, 0) ,
(1,1,-1,-1, 0,0,…, 0) .
We first consider the case when the stoichiometric subspace is spanned by one of the five vectors listed in (<ref>). The network G is then a subnetwork of one of the following networks (where we use A,B,C,D in place of X_1,X_2,X_3,X_4 for ease of notation);
{0 ⇆ A+B} , {A ⇆ 2B} , {A+B ⇆ C} , {A+B ⇆ 2C} , {A+B ⇆ C+D} .
A direct calculation shows that the deficiency of G is 0, so the deficiency-zero theorem (Lemma <ref>)
implies that G is not multistationary. In particular, cap_pos(G) < ∞.
Having shown that the case of (<ref>)
is consistent with Lemma <ref>, we now consider the remaining two cases, from (<ref>), separately.
First, assume the stoichiometric subspace of G is spanned by (1,0,0,…, 0). It follows that the reactions of G form a subset of the following 2n+4 reactions:
0 [k_0]m_0⇆ X_1
[m_1]ℓ_0⇆ 2X_1
0 [k_1]ℓ_1⇆ 2X_1
X_i [k_i]m_i⇆ X_1 + X_i for i=2,3,…, n .
The ODEs for species X_2, X_3, …, X_n are dx_i/dt=0 so, x_i=T_i (with T_i>0) for i=2,3,…,n are the corresponding conservation laws. We substitute these conservation laws into the ODE for X_1:
d x_1/dt|_x_2=T_2,…, x_n=T_n
=
(k_0 + 2 k_1)
+
(k_2T_2 + … + k_nT_n)
+m_1 x_1
- (m_0+ m_2T_2 + … + m_nT_n) x_1
- (ℓ_0 + 2 ℓ_1) x_1^2 .
When at least one k_i is positive and all other k_j's are non-negative, the right-hand side of (<ref>), viewed as a polynomial in x_1, has a nonzero constant term and, hence, is not the zero polynomial. Similarly, if ℓ_0 or ℓ_1 is positive and ℓ_0,ℓ_1 ≥ 0, then the right-hand side of (<ref>) has a nonzero coefficient of x_1^2 and is again a nonzero polynomial. We conclude that if G contains at least one of the reactions labeled by k_i or ℓ_i, then cap_pos(G) < ∞, which is consistent with Lemma <ref>.
We now consider the case when G contains no reactions labeled by k_i or ℓ_i, that is, every reaction of G is one of the following n+1 reactions:
0 m_0← X_1
m_1→ 2X_1
X_i m_i← X_1 + X_i for i=2,3,…, n .
The right-hand side of the ODE for X_1, as in (<ref>), becomes x_1(m_1 - m_0 - m_2T_2 - … - m_nT_n). In order for this polynomial in x_1 to become the zero polynomial for some choice of positive rate constants of G (equivalently, cap_pos(G) = ∞), we must have m_1>0 and m_j>0 for at least one of j=0,2,3,…,n. This gives exactly
the reactions listed in Lemma <ref>(2)(b). In this case, given m_j>0 for the reactions appearing in the network, we can always choose T_j>0, such that
the right-hand side of the ODE for X_1 vanishes (i.e., cap_pos(G) = ∞). Moreover,
positive steady states exist only when this right-hand side vanishes, and
an easy calculation shows that all such positive steady states are degenerate. This concludes our analysis of networks with stoichiometric subspace spanned by the vector (1,0,0,…, 0).
Our final case is when the stoichiometric subspace is spanned by the vector (1,-1,0,…, 0). In this case, the
reactions of G form a subset of the following 2n+4 reactions:
X_1 [k_1]ℓ_1⇆ X_2
2 X_1 [k_2]ℓ_2⇆ 2X_2
2 X_1 [k_3]m_1⇆ X_1 + X_2 [m_2]ℓ_3⇆ 2 X_2
X_1 + X_i [k_i+1]ℓ_i+1⇆ X_2 + X_i for i=3,4,…, n .
The conservation laws are x_1+x_2=T_2 and x_i=T_i for i=3,4,…, n. The ODE for species X_1 is:
dx_1/dt =
-(2k_2+k_3) x_1^2
-k_1x_1
-(k_4 x_3 + … + k_n+1x_n) x_1
+ (m_1-m_2) x_1 x_2
+ (ℓ_1 x_2 + 2 ℓ_2 x_2^2 + ℓ_3 x_2^2)
+ (ℓ_4 x_3 + … + ℓ_n+1 x_n) x_2 .
Consider the subcase when at least one of the ℓ_i is positive and all other ℓ_j's are non-negative. After substituting the expressions arising from the conservation laws (namely, x_2= T_2-x_1 and x_i=T_i for i=3,4,…, n) into the right-hand side of the ODE (<ref>), we obtain a polynomial in x_1 that has a positive constant term (see
the second line of the right-hand side of (<ref>)).
Hence, if G contains at least one of the reactions labeled by ℓ_i, then cap_pos(G) < ∞.
By symmetry, if G has at least one of the reactions labeled by k_i, then again cap_pos(G) < ∞. Hence, if G contains a reaction labeled by ℓ_i or k_i, then this subcase is consistent with the lemma.
Consider the remaining subcase, when G is a subnetwork of
{
2 X_1 m_1← X_1 + X_2 m_2→ 2 X_2
}, and so consists of only one or two reactions.
If G has only one reaction, then
Proposition <ref> implies that
cap_pos(G) = 0 < ∞ (which is consistent with the lemma).
Now assume that
G has two reactions, that is,
G={
2 X_1 m_1← X_1 + X_2 m_2→ 2 X_2
}.
If m_1 ≠ m_2, then the ODE for X_1 is dx_1/dt= (m_1 - m_2) x_1 x_2 and so there are no positive steady states. When m_1=m_2, the ODE for X_1 becomes dx_1/dt=0 and it follows that
cap_pos(G) = ∞. Moreover, a simple computation shows that all the positive steady states are degenerate. This concludes the proof.
[<ref>]
The network { 0 ← A → 2A , B ← A+B} is one of the networks listed in Lemma <ref>(2)(b), where n=2.
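For this network, the mechanism behind cap_pos(G) = ∞ and the degeneracy can be seen directly (a SymPy sketch of ours, with rate constants named as in the proof: m_0 for A → 0, m_1 for A → 2A, and m_2 for A+B → B):

```python
import sympy as sp

xA, xB, m0, m1, m2 = sp.symbols('xA xB m0 m1 m2', positive=True)
fA = -m0*xA + m1*xA - m2*xA*xB   # from 0 <- A, A -> 2A, A+B -> B
# dxB/dt = 0, so xB is conserved and each line xB = const is a compatibility class.
print(sp.factor(fA))             # xA*(m1 - m0 - m2*xB), up to sign
# If m1 > m0, then on the class xB = (m1 - m0)/m2 the right-hand side vanishes
# identically, giving infinitely many positive steady states, all degenerate.
```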
If G is a one-dimensional, bimolecular network, then G is not nondegenerately multistationary.
Assume that G is a one-dimensional network that is nondegenerately multistationary. We must show that G is not bimolecular. We claim that cap_pos(G) is finite. Indeed, if cap_pos(G) = ∞, then Lemma <ref> implies that all positive steady states are degenerate and so G is not nondegenerately multistationary, which is a contradiction. Hence, cap_pos(G) < ∞.
The hypotheses of part (1) of
Lemma <ref>
are satisfied, that is, cap_pos(G) < ∞, and G is one-dimensional and multistationary. Therefore, G has an embedded one-species network with arrow diagram (←, →). Such an embedded network (e.g., {0 ← A, 2A → 3A}) involves at least one complex that is not bimolecular, and so G is also not bimolecular.
If G is a bimolecular reaction network that consists of one or two pairs of reversible reactions, then G is not multistationary.
Assume that G is bimolecular and consists of one or two pairs of reversible reactions.
Let p denote the number of pairs of reversible reactions (so, p=1 or p=2), and ℓ the number of linkage classes. Let s be the dimension of the stoichiometric subspace (so, s=1 or s=2).
Case 1: p=1.
The deficiency of G is δ = 2-1-1=0 and G is weakly reversible. Hence, by the
deficiency-zero theorem (Lemma <ref>) the network is not multistationary.
Case 2: p=s=2.
If ℓ=1, then the deficiency is δ = 3-1-2=0. If ℓ=2, then the deficiency is δ=4-2-2=0. Therefore, for either value of ℓ, the deficiency-zero theorem (Lemma <ref>)
implies that the network is not multistationary.
Case 3: p=2 and s=1.
G is one-dimensional, bimolecular, and reversible.
So, Lemma <ref>
implies that cap_pos(G) < ∞. Now Lemma <ref>(2) yields cap_pos(G) = cap_nondeg(G),
and Lemma <ref> implies that cap_nondeg(G) ≤ 1. Thus, cap_pos(G) ≤ 1, or, equivalently, G is non-multistationary.
§ MAIN RESULTS ON BIMOLECULAR NETWORKS
In this section, we establish minimal conditions for a bimolecular network to admit ACR and nondegenerate multistationarity simultaneously.
These minimal conditions are
in terms of the numbers of species, reactions, and reactant complexes. The main result is as follows.
Let G be a bimolecular reaction network. If there exists a vector of positive rate constants κ^* such that the mass-action system (G,κ^*) has ACR and also is nondegenerately multistationary, then:
* G has at least 3 species.
* G has at least 3 reactant complexes (and hence at least 3 reactions) and at least 5 complexes (reactant and product complexes).
* If G is full-dimensional, then G has at least 5 reactant complexes (and hence at least 5 reactions).
This section is structured as follows.
In Subsection <ref>,
we prove part (1) of Theorem <ref>
(specifically, part (1) follows from Proposition <ref> and Theorem <ref>).
Theorem <ref> also analyzes
two-species bimolecular networks with ACR and degenerate multistationarity.
Additionally, we
characterize unconditional ACR in two-species bimolecular networks that are reversible (Theorem <ref>).
Subsequently,
in Subsection <ref>,
we prove parts (2) and (3) of Theorem <ref> (Theorem <ref> and Proposition <ref>).
We also consider full-dimensional, 3-species, bimolecular networks with only 4 reactant complexes. By Theorem <ref>, such networks do not allow for the coexistence of ACR and nondegenerate multistationarity. Nevertheless, ACR and degenerate multistationarity can coexist, and we characterize the possible sets of reactant complexes of such networks
(Proposition <ref>).
§.§ Bimolecular networks with one or two species
This subsection characterizes unconditional ACR in reversible networks with only one or two species
(Proposition <ref> and Theorem <ref>).
Notably, our results show that such networks with unconditional ACR are not multistationary.
Our interest in reversible networks comes from our prior work with Joshi <cit.>. In that article, our results on multistationarity in randomly generated reaction networks arise from “lifting” this property from the following (multistationary) motif:
{
B ⇆ 0 ⇆ A ⇆ B+C , C ⇆ 2C
} .
The question arises: are there multistationary motifs with fewer species, reactions, or complexes than the one in (<ref>)?
Discovering more motifs might aid in analyzing the prevalence of multistationarity in random reaction networks generated by stochastic models besides the one in <cit.>.
§.§.§ Networks with one species
When there is only one species, say X_1,
and the network is bimolecular,
there are only 3 possible complexes: 0, X_1, 2X_1.
Hence, every such network is a subnetwork of the following network:
G_X_1 = {0⇆ X_1⇆ 2X_1 ⇆ 0}.
Therefore, the possible reversible networks, besides G_X_1 itself, are listed here:
{0⇆ X_1}, {X_1⇆ 2X_1}, {0⇆ 2X_1}, {0⇆ X_1⇆ 2X_1}, {X_1⇆ 0⇆ 2X_1}, {0⇆ 2X_1⇆ X_1} .
Every bimolecular network in only one species
is not nondegenerately multistationary.
Every reversible, bimolecular network in only one species
has unconditional ACR.
Let G be a bimolecular network with only one species. Then G is a subnetwork of G_X_1, in (<ref>), and the first part of the proposition now follows readily from Lemmas <ref>–<ref>.
Next, assume G is a reversible, bimolecular network with only one species. Then G is either the network G_X_1 or one of the networks listed in (<ref>).
Each of these networks is weakly reversible and satisfies the conditions of either the
deficiency-zero
or deficiency-one theorem
(Lemmas <ref>–<ref>).
Thus, for every choice of positive rate constants κ, the mass-action system (G, κ) has a unique positive steady state. Hence,
G has unconditional ACR.
§.§.§ Reversible networks with two species
We now consider reversible, bimolecular networks with two species.
Among such networks, the ones with unconditional ACR are characterized in the following result, which is the main result of this subsection.
Let G be a reversible, bimolecular reaction network with exactly two species (and at least one reaction).
* If G is full-dimensional, then the following are equivalent:
* G has unconditional ACR;
* G is not multistationary.
* If G is one-dimensional, then the following are equivalent:
* G has unconditional ACR;
* Up to relabeling species, G is the (non-multistationary) network {X_2 ⇆ X_1+X_2}.
Theorem <ref> encompasses Propositions <ref> and <ref> below.
Let G be a full-dimensional, reversible, bimolecular reaction network with exactly two species.
Then the following are equivalent:
* G has unconditional ACR;
* G is not multistationary.
Let G
be a full-dimensional, reversible, bimolecular network with exactly 2 species.
We first prove (b) ⇒ (a). Assume that G is non-multistationary, and let
κ^* be a choice of positive rate constants. Then, the mass-action system (G,κ^*) admits at most one positive steady state (x^*_1, x^*_2) (here the assumption that G is full-dimensional is used).
However, the fact that G is reversible guarantees at least one positive steady state (Remark <ref>).
Hence, (G, κ^*) has a unique positive steady state (x^*_1, x^*_2) and therefore
has ACR in both species with ACR-values x_1^* and x_2^*, respectively. So, G has unconditional ACR.
Next, we prove (a) ⇒ (b). Assume that G has unconditional ACR. Let κ^* be a choice of positive rate constants. By relabeling species, if necessary, we may assume that the system (G, κ^*) has ACR in species X_1 with some ACR-value α>0. Every positive steady state of (G, κ^*), therefore, has the form (α, x_2^*), where x^*_2 ∈ℝ_>0. We must show that there is at most one such steady state.
Write the mass-action ODEs of (G, κ^*) as dx_1/dt = f_1 and dx_2/dt=f_2. Consider the univariate polynomial f_2|_x_1 = α∈ℝ[x_2]. We claim that this polynomial is not the zero polynomial. To check this claim, assume for contradiction that f_2|_x_1 = α is zero. As G is reversible,
Remark <ref> (which relies on
Propositions <ref>–<ref>)
implies that X_2 is a catalyst-only species of every reaction of G. We conclude that G is not full-dimensional, which is a contradiction.
Having shown that the univariate polynomial
f_2|_x_1 = α is nonzero, we now use Lemma <ref> to conclude that (G, κ^*)
has at most one positive steady state of the form (α, x_2^*).
Proposition <ref> fails for networks that are not reversible. Indeed, a network without positive steady states (such as { 0 → A, 0 → B}) is not multistationary and also lacks unconditional ACR.
We end this subsection by considering two-species networks that are one-dimensional.
Up to relabeling species, each such network is a subnetwork of exactly one of the following networks G_i:
G_1 := { 0 ⇆ X_1+X_2 }
G_2 := {X_1⇆ 2X_2}
G_3 := { 0 ⇆ X_1 ⇆ 2X_1 ⇆ 0 ,
X_2 ⇆ X_1+X_2}
G_4 := { 2X_1 ⇆ X_1+X_2 ⇆ 2X_2 ⇆ 2X_1 , X_1 ⇆ X_2 }
The next result, which is part (2) of Theorem <ref>,
states that among the reversible subnetworks of the networks G_i listed in (<ref>),
only one has unconditional ACR (namely, {X_2 ⇆ X_1+X_2}).
Let G be a one-dimensional, reversible, bimolecular reaction network with exactly two species. Then the following are equivalent:
* G has unconditional ACR;
* Up to relabeling species, G is the (non-multistationary) network {X_2 ⇆ X_1+X_2}.
Let G be a two-species, one-dimensional, reversible, bimolecular reaction network.
From the list (<ref>), we know that G is a subnetwork of one of G_1, G_2, G_3, and G_4.
Assume G is a subnetwork of G_1, G_2, or G_4. Then,
G ≠{X_2 ⇆ X_1+X_2} and G is not a subnetwork of {0 ⇆ X_1 ⇆ 2X_1 ⇆ 0}. So, it suffices to show G does not have unconditional ACR.
In networks G_1, G_2, and G_4, the reactant and product complexes of every reaction differ in both species X_1 and X_2. Also, all reactions in G are reversible, so every complex of G is a reactant complex. We conclude that G has two reactant complexes that differ in both species, and hence, Lemma <ref> implies that G does not have unconditional ACR.
We now consider the remaining case, when G is a subnetwork of G_3.
We write G_3 = N_1 ∪ N_2, where N_1:={0 ⇆ X_1 ⇆ 2X_1 ⇆ 0 } and N_2:={X_2 ⇆ X_1+X_2}.
If G=N_2, the mass-action ODEs are dx_1/dt =κ_1x_2-κ_2x_1x_2 and dx_2/dt =0, and so G has unconditional ACR in species X_1 with ACR-value κ_1/κ_2.
If G is a subnetwork of N_1, then G has only one species (recall that every species of a network must take part in at least one reaction), which is a contradiction.
Our final subcase is when G contains reactions from both N_1 and N_2.
Then, from N_2, the complex X_2
is a reactant complex of G.
Similarly, from N_1, at least one of X_1 and 2X_1 is a reactant complex of G.
Hence, G contains two reactant complexes that differ in both species, X_1 and X_2.
Therefore, Lemma <ref> implies that G does not have unconditional ACR.
Finally, the fact that the network {X_2 ⇆ X_1+X_2} is
non-multistationary follows easily from the deficiency-zero theorem (Lemma <ref>).
§.§.§ Irreversible networks with two species
In <cit.>, the following network was called a “degenerate-ACR network,” because it has unconditional ACR and yet every positive steady state is degenerate:
{ A+B → B, A → 2A } .
This degeneracy arises from the fact that a single (one-dimensional) stoichiometric compatibility class consists entirely of steady states <cit.>.
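The degeneracy is transparent in coordinates (a SymPy sketch of ours, with hypothetical rate-constant names k_1 for A+B → B and k_2 for A → 2A):

```python
import sympy as sp

xA, xB, k1, k2 = sp.symbols('xA xB k1 k2', positive=True)
fA = -k1*xA*xB + k2*xA      # A+B -> B consumes A; A -> 2A produces A
# dxB/dt = 0, so the compatibility classes are the lines xB = const.
print(sp.factor(fA))        # xA*(k2 - k1*xB), up to sign
# Every positive steady state has xB = k2/k1 (unconditional ACR in B), and the
# whole class xB = k2/k1 consists of steady states, so each of them is degenerate.
```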
The main result of this subsection,
Theorem <ref> below,
shows that only one additional
two-species network exhibits both ACR and multistationarity for a nonzero-measure set of rate constants; this network is
obtained by adding to (<ref>) the reaction A → 0. Both networks, therefore, are one-dimensional, two-species networks.
To prove Theorem <ref>, we need the following lemma, which concerns the network in (<ref>) (and others as well).
Let G be a subnetwork of the network {X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. Then:
* Every positive steady state (of every mass-action system defined by G) is degenerate.
* Let Σ denote the
set of vectors of positive rate constants κ
for which the mass-action system (G, κ) both has ACR and is multistationary. If Σ has nonzero measure, then G is one of the following networks:
{X_1+X_2 → X_2, X_1 → 2X_1} and
{X_1+X_2 → X_2, 0 ← X_1 → 2X_1}.
This result is straightforward to check by hand, so we only outline the steps, as follows.
Assume G is a subnetwork of {X_1+X_2 k→ X_2, 0 ℓ← X_1 m→ 2X_1}. If G admits a positive steady state, G must
contain the reaction X_1 m→ 2X_1. Hence, there are three subnetworks to consider:
* If G= {0 ℓ← X_1 m→ 2X_1}, then Σ is empty.
* If G= {X_1+X_2 k→ X_2, X_1 m→ 2X_1}, then Σ = { (k,m) ∈ℝ^2_>0}.
* If G= {X_1+X_2 k→ X_2, 0 ℓ← X_1 m→ 2X_1}, then Σ = { (k,ℓ,m) ∈ℝ^3_>0| m > ℓ}.
In cases (2) and (3), the set Σ has nonzero measure. Finally, for all three of these networks, every positive steady state is degenerate
(some of these
networks are also covered by Lemma <ref>).
Let G be a bimolecular reaction network with exactly two species, X_1 and X_2.
Let Σ denote the set of vectors of positive rate constants κ
for which the mass-action system (G, κ) both has ACR in species X_2 and is multistationary. Then:
* For every κ^* ∈Σ, every positive steady state of (G, κ^*) is degenerate.
* If Σ has nonzero measure, then
G is one of the following networks:
{X_1+X_2 → X_2, X_1 → 2X_1} and
{X_1+X_2 → X_2, 0 ← X_1 → 2X_1}.
Assume that G is bimolecular and has exactly two species.
If Σ is empty (for instance, if G has no reactions), then there is nothing to prove.
Accordingly, assume that Σ is nonempty (and in particular G has at least one reaction).
We first claim that G has a reaction in which X_1 is a non-catalyst-only species. To prove this claim, assume for contradiction that X_1 is a catalyst-only species. Then the stoichiometric compatibility classes are defined by the equations x_1=T, for T>0 (we are also using the fact that G has at least one reaction). But this does not allow for multistationarity and ACR in X_2 to coexist, because two positive steady states in the same compatibility class would have the form (T,y) and (T,z), with y≠ z, which contradicts the assumption of ACR in X_2.
So, the claim holds.
For an arbitrary vector κ of positive rate constants, let f_κ,1 and f_κ,2 denote the right-hand sides (for species X_1 and X_2, respectively) of the mass-action ODE system of (G,κ).
Consider the following partition of Σ:
Σ = ( Σ∩{κ| f_κ,1= 0})
∪( Σ∩{κ| f_κ,1≠ 0})
=: Σ_0 ∪Σ_1 .
By construction, Σ_0∩Σ_1=∅. We first analyze Σ_0.
If Σ_0 is empty, then skip ahead to our analysis of Σ_1. Accordingly, assume Σ_0 is nonempty, and let κ^* ∈Σ_0. We must show that every positive steady state of (G,κ^*) is degenerate.
We claim that G is two-dimensional (assuming that Σ_0 is nonempty).
We prove this claim as follows. We saw that G contains
a reaction in which X_1 is a non-catalyst-only species, so
Proposition <ref>
implies that for j=0 or j=2 (or both, where we are using Notation <ref>)
our network
G contains the reaction
X_1+X_j → 2X_1
and at least one reaction of the form X_1+X_j →⋆, where ⋆ is a complex not involving X_1.
Consider the subcase j=0. If some ⋆ involves X_2, then G contains X_1 → 2X_1 and X_1 →⋆, which yield linearly independent reaction vectors and so G is two-dimensional. If none of the complexes ⋆ involve X_2, then
G must contain additional reactions in which X_2 is not a catalyst-only species (to avoid f_2=0), and so again G is two-dimensional.
The subcase j=2 is similar.
Next, as G is two-dimensional and f_κ^*,1= 0, Corollary <ref> implies that every positive steady state of (G,κ^*) is degenerate, as desired.
Additionally, as X_1 is a non-catalyst-only species and (for all κ∈Σ_0) f_κ,1=0,
Proposition <ref> implies that there is a nontrivial linear relation that every κ∈Σ_0 satisfies. Hence, Σ_0 has zero measure.
To complete the proof, it suffices to show the following about the set Σ_1: (1)
For every κ^* ∈Σ_1, every positive steady state of (G, κ^*) is degenerate; and (2) If Σ_1 has nonzero measure, then
G={X_1+X_2 → X_2, X_1 → 2X_1} or
G={X_1+X_2 → X_2, 0 ← X_1 → 2X_1}.
Assume Σ_1 is nonempty (otherwise, there is nothing to prove).
We introduce the following notation: for κ∈Σ_1, let β(κ) denote the ACR-value for X_2.
We now claim the following:
For every κ∈Σ_1,
the univariate polynomial f_κ,1|_x_2=β(κ) is the zero polynomial.
To verify this claim, we first note that
f_κ,1|_x_2=β(κ) has at least two positive roots (as (G, κ) is multistationary),
so the polynomial f_κ,1|_x_2=β(κ), if nonzero, must have at least two sign changes (by Descartes' rule of signs).
However, by Lemma <ref>, the polynomial f_κ,1|_x_2=β(κ) has at most one sign change, and so the claim holds.
We now know that for every κ^* ∈Σ_1, we have f_κ^*,1≠ 0, but f_κ^*,1|_x_2=β(κ^*) =0. Hence,
G has at least one reaction in which X_1 is a non-catalyst-only species and (by Proposition <ref>) every such reaction must be one of the 8 reactions displayed here:
0 κ_4,1⟵ X_1 κ_1⟶ 2X_1
2X_1 κ_2⟵ X_1+X_2 κ_3,1⟶ 0 ,
X_2 κ_4,2⟵ X_1 κ_4,3⟶ 2X_2 ,
X_2 κ_3,2⟵ X_1+X_2 κ_3,3⟶ 2X_2
For every κ^* ∈Σ_1, Proposition <ref> yields
the following ACR-value formula:
β(κ^*) = (κ^*_4∙ - κ^*_1)/(κ^*_2 - κ^*_3∙) ,
where
κ^*_3 ∙ := κ^*_3, 1 + κ^*_3, 2 + κ^*_3, 3
and
κ^*_4 ∙:=κ^*_4, 1 + κ^*_4, 2 + κ^*_4, 3.
For reactions in (<ref>) that are not in G, the corresponding rate constants, κ^*_i or κ^*_ij, are set to 0.
Next, the possible reactions in which X_1 is a catalyst-only species are as follows:
0 κ_5κ_6⇄ X_2 ,
X_2 κ_7κ_8⇄ 2X_2 ,
2X_2 κ_9κ_10⇄ 0 ,
X_1 κ_11κ_12⇄ X_1+ X_2
We proceed by considering three subcases, based in part on whether f_κ,2 (which is a polynomial in the unknowns x_1, x_2, and κ) is zero:
(a) f_κ,2=0, and X_2 is a catalyst-only species in every reaction of G,
(b) f_κ,2=0, and X_2 is a non-catalyst-only species in some reaction of G, or
(c) f_κ,2≠ 0.
We first consider subcase (a).
By inspecting reactions in (<ref>) and (<ref>), we conclude that G must be a subnetwork of {X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. This subcase is done by Lemma <ref>.
Next, we examine subcase (b).
Let G_1:={X_1+X_2 → X_2, 0 ← X_1 → 2X_1}, G_2:= {0← X_2 → 2X_2}, and
G_3:={0← X_1+X_2→ 2X_2, X_1 ← X_1+ X_2 → 2X_1 }.
By Proposition <ref>
(and by inspecting reactions in (<ref>) and (<ref>)), G must be a subnetwork of G_1∪ G_2 ∪ G_3 with at least one reaction in G_2 ∪ G_3.
Moreover, there is a nontrivial linear relation in the rate constants that holds for all κ∈Σ_1.
It follows that Σ_1 is contained in the hyperplane defined by this linear relation and hence has zero measure.
Let κ^* ∈Σ_1.
By examining G_1 ∪ G_2 ∪ G_3, we see that the possible reactants of G are X_1, X_2, X_1+X_2. Next, G has at least 2 reactants (as otherwise, Proposition <ref> would imply that G admits no positive steady states).
Hence, by inspection, G either is full-dimensional or is a subnetwork of { 0 ← X_2 → 2X_2, X_1+X_2 → X_1}, which we already saw in Example <ref> (where A=X_2 and B=X_1) has ACR in X_1 but not in X_2 (and the analysis of its subnetworks is similar). Hence, G is full-dimensional, and so
Corollary <ref> (and the fact that f_κ^*,2=0) implies that every positive steady state of (G,κ^*) is degenerate.
Consider subcase (c).
Let κ^* ∈Σ_1 (so, in particular, f_κ^*,2≠ 0). We claim that f_κ^*,2|_x_2=β(κ^*) = 0. To see this, observe that, in the reactions (<ref>) and (<ref>), the complex 2X_1 appears only as a product, never as a reactant.
Hence, f_κ^*,2|_x_2=β(κ^*) (which is a univariate polynomial in x_1) has degree at most 1.
However, the fact that (G,κ^*) is multistationary implies that f_κ^*,2|_x_2=β(κ^*) has two or more positive roots.
Hence, f_κ^*,2|_x_2=β(κ^*) is the zero polynomial.
Now we show that every positive steady state of (G,κ^*) is degenerate. Such a steady state has the form (p,β), and we also know that
f_κ^*,1|_x_2=β(κ^*) = f_κ^*,2|_x_2=β(κ^*)=0.
Hence, (x_2-β(κ^*)) divides both f_κ^*,1 and f_κ^*,2. Consequently, the derivatives of f_κ^*,1 and f_κ^*,2 with respect to x_1 at (p,β) are both zero. It follows that the first column of the 2 × 2 Jacobian matrix, when evaluated at (p,β), is the zero column. Hence, if the stoichiometric subspace of G, which we denote by S, is two-dimensional, then (p,β) is degenerate.
We now assume
dim(S)=1 (and aim to reach a contradiction).
Recall that G contains at least one reaction from those in (<ref>), so in order for dim(S)=1 it must be that G contains no reaction from (<ref>). Hence, from the expression for f_2 (which we know is not zero), in (<ref>), the only possible reactions in G are the ones labeled by
κ_2 , κ_3,3, κ_4,2,κ_4,3. Hence, the one-dimensional network G is either the network { X_1 κ_4,3⟶ 2X_2 } or a subnetwork of {2X_1
κ_2⟵ X_1+X_2 κ_3,3⟶ 2X_2, X_1 κ_4,2⟶ X_2 }.
Now it is straightforward to check that G is not multistationary, which is a contradiction.
To complete the proof, it suffices to show that, in subcase (c), the set Σ_1 has measure zero.
Accordingly, let κ∈Σ_1.
As noted earlier, the ACR-value of X_2 in (G,κ) is β(κ) = (κ_4∙ - κ_1)/(κ_2 - κ_3∙).
From (<ref>) and (<ref>),
the right-hand side of the mass-action ODE for (G,κ)
has the following form (with rate constants set to 0 for reactions not in G):
f_κ,2 =
( κ_3,3 - κ_2 - κ_12) x_1 x_2
+ (κ_11 + κ_4,2 + 2 κ_4,3) x_1
- (κ_8 + 2 κ_9) x_2^2
+ (κ_7 - κ_6) x_2
+ (κ_5 + 2 κ_10) .
By assumption, at least one of the rate constants
(the κ_i and κ_i,j)
in (<ref>) is nonzero.
By our earlier arguments,
at the beginning of subcase (c), we conclude that f_κ,2|_x_2=β(κ) = 0.
Hence, the linear and constant terms of
f_κ,2|_x_2=β(κ) are both 0, which, using (<ref>), translates as follows:
( κ_3,3 - κ_2 - κ_12) (κ_4∙ - κ_1)/(κ_2 - κ_3∙)
+ (κ_11 + κ_4,2 + 2 κ_4,3)
= 0
and
- (κ_8 + 2 κ_9) ( (κ_4∙ - κ_1)/(κ_2 - κ_3∙) )^2
+ (κ_7 - κ_6) (κ_4∙ - κ_1)/(κ_2 - κ_3∙)
+ (κ_5 + 2 κ_10) = 0 .
It follows that Σ_1 is constrained by the equations (<ref>), at least one of which is nontrivial.
Hence, Σ_1 is contained in a hypersurface and so has measure zero.
§.§ Bimolecular networks with at least three species
In the previous subsection, we showed that a bimolecular network must have at least 3 species in order for ACR and nondegenerate multistationarity to coexist. Consequently, this subsection focuses on bimolecular networks with
at least 3 species.
We prove that the coexistence of ACR and nondegenerate multistationarity requires a minimum of 3 reactant complexes and a minimum of 5 complexes (Theorem <ref>).
The remainder of this subsection focuses on a family of networks with n species and n reactants,
for which ACR and nondegenerate multistationarity coexist
(Section <ref>), and then analyzes full-dimensional networks with 3 species (Section <ref>).
Let G be a bimolecular reaction network with at least 3 species. If there exists a vector of positive rate constants κ^* such that (G,κ^*) has ACR and is nondegenerately multistationary, then:
* G has at least 3 reactant complexes (and hence, at least 3 reactions), and
* G has at least 5 complexes (reactant and product complexes).
We first prove part (1). Let (G,κ^*) be as in the statement of the theorem and let n denote the number of species, where n ≥ 3.
By relabeling species, if needed, we may assume that (G,κ^*) has ACR in species X_1.
Let f_1,f_2,…, f_n denote the right-hand sides of the mass-action ODEs of (G,κ^*).
As (G,κ^*) has ACR, we know that at least one of the right-hand sides is nonzero. Let f_i denote one of these nonzero polynomials.
Assume for contradiction that G has only 1 or 2 reactant complexes. Since G admits a nondegenerate positive steady state,
Proposition <ref> implies that G is not full-dimensional and
G has exactly 2 reactant complexes.
We claim that all the right-hand sides f_ℓ are scalar multiples of each other.
More precisely, we claim that for all j ∈{1,2,…,n}∖{i}, there exists c_j ∈ℝ such that f_j=c_j f_i.
Indeed, each f_j has at most two monomials (because G has exactly two reactant complexes), so if f_j is not a constant multiple of f_i, then some ℝ-linear combination of f_i and f_j is a monomial and hence (G,κ^*) has no positive steady state (which is a contradiction).
Thus the positive steady states of (G,κ^*) are precisely the positive roots of f_i=0 and the linear equations given by the conservation laws.
Since X_1 is the ACR species and G is bimolecular, we must have f_i = (α - x_1) (β_0 + β_1 x_1 + … + β_n x_n),
where α is the (positive) ACR-value and β_j ∈ℝ for all j=0,1,…,n.
We consider several cases, based on how many of the coefficients β_2,β_3, …, β_n are nonzero.
We begin by considering the case when β_2=β_3=… = β_n=0.
In this case,
f_i is a (nonzero) polynomial in x_1 only, and so has the form f_i=γ_1 x_1^m_1 + γ_2 x_1^m_2, where γ_1,γ_2 ∈ℝ and 0 ≤ m_1 < m_2 ≤ 2. As (G,κ^*) has a positive steady state, we conclude that γ_1 and γ_2 are nonzero and have opposite signs. Now Lemma <ref> implies that i=1 (so, f_1=f_i ≠ 0) and f_2=f_3=…=f_n=0. In fact, Lemma <ref> implies that X_2, …, X_n are catalyst-only species of G (equivalently, the mass-action ODE right-hand sides for X_2,…, X_n are zero for all choices of positive rate constants).
Such a system is not multistationary, which is a contradiction.
Now consider the case when two or more of the β_2,β_3, …, β_n are nonzero.
In this case,
there exist distinct j_1,j_2 (where 2 ≤ j_1,j_2 ≤ n) with
β_j_1,β_j_2≠ 0.
Then f_i contains the monomials x_j_1, x_j_2, x_1x_j_1, x_1x_j_2 which contradicts the fact that G has exactly two reactant complexes.
The final case is when exactly one of the β_2,β_3, …, β_n is nonzero. Relabel the species, if needed, so that β_2≠ 0.
In this case, the two reactant complexes of G involve only species X_1 and X_2. By using Lemma <ref> again, much like we did for the prior case, we conclude that X_3 ,…, X_n are catalyst-only species of G
and so (G,κ^*) is effectively the mass-action system of a (bimolecular) network with only two species, X_1 and X_2. Now it follows from Theorem <ref> that (G,κ^*) is not nondegenerately multistationary, which contradicts our assumption. This completes part (1).
We prove part (2). Assume for contradiction that G has at most 4 complexes. By Proposition <ref>, the dimension of the stoichiometric subspace of G must be at least 2.
So, the deficiency of G satisfies
δ = m - ℓ - dim(S) ≤ 4 - 1 - 2 = 1 .
Hence, the deficiency of G is 0 or 1, and the latter requires G to have exactly one linkage class. Now Lemmas <ref>–<ref> imply that G is not multistationary, which is a contradiction.
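The deficiency count used in this step is mechanical and easy to automate; the helper below (our own sketch, not from the paper) computes δ = m - ℓ - dim(S) from a list of reactions and is illustrated on the three-species network of the next example.
```python
import numpy as np

def deficiency(reactions, species):
    """delta = (#complexes) - (#linkage classes) - dim(stoichiometric subspace)."""
    complexes = []                                   # unique complexes as coefficient tuples
    def cid(c):
        v = tuple(c.get(s, 0) for s in species)
        if v not in complexes:
            complexes.append(v)
        return complexes.index(v)
    edges = [(cid(r), cid(p)) for r, p in reactions]
    # linkage classes: connected components of the undirected graph on complexes
    parent = list(range(len(complexes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in edges:
        parent[find(a)] = find(b)
    ell = len({find(i) for i in range(len(complexes))})
    # stoichiometric subspace: span of the (product - reactant) vectors
    S = np.array([np.subtract(complexes[b], complexes[a]) for a, b in edges])
    return len(complexes) - ell - np.linalg.matrix_rank(S)

# Network of the next example: {X1+X2 -> 2X3, X3 -> X1, 2X3 -> 2X2}
rxns = [({'X1': 1, 'X2': 1}, {'X3': 2}), ({'X3': 1}, {'X1': 1}), ({'X3': 2}, {'X2': 2})]
print(deficiency(rxns, ['X1', 'X2', 'X3']))   # 5 complexes, 2 linkage classes, dim S = 2 -> 1
```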
Theorem <ref> gives a lower bound on the number of reactant complexes and the number of all complexes (reactants and products), and the next example shows that these bounds are tight. The example also shows the tightness of the lower bounds on the number of species and the dimension of the network (from Theorem <ref>).
Consider the following bimolecular network with 3 species and 3 reactant complexes and 5 complexes:
G = {X_1+X_2 κ_1⟶ 2X_3, X_3 κ_2⟶ X_1 , 2X_3 κ_3⟶ 2X_2} .
This network is two-dimensional, as
the total amount of X_1,X_2,X_3 is conserved. For every vector of positive rate constants κ, the system (G,κ) is nondegenerately multistationary and also has ACR in X_3 with ACR-value κ_2/(2κ_3).
Details are given in the proof of Proposition <ref>, below, which pertains to a family of networks that includes the network G.
§.§.§ Non-full-dimensional networks
The bimolecular network in Example <ref> is the n=3 case of the networks G_n that we introduce in the next result. These networks have the property that every reactant complex is bimolecular, but (when n≥ 4) one of the product complexes is not.
For all n ≥ 3, consider the following network with n species, n reactant complexes, and n reactions:
G_n = {X_1+X_2 κ_1⟶ 2X_3+∑_j=4^nX_j, X_3 κ_2⟶ X_1 , 2X_3 κ_3⟶ 2X_2}⋃{
X_4 κ_4⟶ 0, … ,
X_n κ_n⟶ 0
} .
Each such network G_n satisfies the following:
* there is a unique (up to scaling) conservation law, which is given by x_1+x_2+x_3=T, where T represents the total concentration of species X_1,X_2,X_3; and
* for every vector of positive rate constants κ∈ℝ^n_>0, the system (G_n,κ) is nondegenerately multistationary and also has ACR in species X_3, X_4, …, X_n.
Fix n ≥ 3. The mass-action ODEs for G_n are as follows:
dx_1/dt =
-κ_1x_1x_2 + κ_2 x_3
dx_2/dt =
-κ_1x_1x_2 + 2κ_3 x_3^2
dx_3/dt =
2κ_1x_1x_2 - κ_2x_3 - 2κ_3x_3^2
dx_j/dt = κ_1 x_1 x_2 - κ_j x_j for j∈{4,…, n} .
The network G_n has exactly one conservation law (up to scaling), and it is given by x_1+x_2+x_3=T.
Additionally, using the first two ODEs, we compute that the value of species X_3 at all positive steady states is κ_2/(2κ_3).
Next, we use this steady-state value for X_3, together with the first and fourth ODEs, to obtain the expression κ_2^2/(2κ_3κ_j) for the steady-state value of X_j, for j ≥ 4.
Thus, ACR in X_3,X_4,…, X_n will follow once we confirm the existence of positive steady states.
Next, we investigate the steady-state values of X_1 and X_2. Using the steady-state value of X_3, the conservation law, and the first ODE, we see that the steady-state values of X_1 and X_2 correspond to the intersection points of the line x_1+x_2 + κ_2/(2κ_3) = T and the curve x_1 x_2 = κ_2^2/(2κ_1κ_3). This is depicted qualitatively below (by [green] dashed lines and a [red] solid curve, respectively).
[Figure: the line x_1 + x_2 + κ_2/(2κ_3) = T (dashed) and the curve x_1 x_2 = κ_2^2/(2κ_1κ_3) (solid), meeting at two points in the positive quadrant.]
It follows that, given any vector of positive rate constants κ∈ℝ^n_>0, when T is sufficiently large, there are two pairs of (nondegenerate) positive steady-state values for X_1 and X_2, and so
(G_n, κ) is nondegenerately multistationary (and thus admits a positive steady state, and so has ACR).
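The intersection argument above is easy to check numerically. The sketch below (our own; the rate constants and the total T = 10 are arbitrary illustrative choices) does this for n = 3, and tests nondegeneracy by checking that the Jacobian rows for X_1 and X_2, augmented with the conservation law x_1 + x_2 + x_3 = T, form a nonsingular matrix.
```python
import numpy as np

k1, k2, k3, T = 1.0, 1.0, 1.0, 10.0               # assumed rate constants and total
x3 = k2 / (2*k3)                                  # ACR value of X_3
s, p = T - x3, k2**2 / (2*k1*k3)                  # x1 + x2 = s,  x1*x2 = p
for x1 in np.roots([1.0, -s, p]):                 # the two intersection points
    x2 = s - x1
    # Rows: Jacobians of f1 and f2, plus the conservation law x1 + x2 + x3 = T;
    # a nonzero determinant means the steady state is nondegenerate.
    M = np.array([[-k1*x2, -k1*x1, k2      ],
                  [-k1*x2, -k1*x1, 4*k3*x3 ],
                  [ 1.0,    1.0,    1.0    ]])
    print(f"x = ({x1:.4f}, {x2:.4f}, {x3:.4f}),  det = {np.linalg.det(M):.4f}")
```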
§.§.§ Full-dimensional networks with 3 species
Consider a bimolecular network G that has 3 species.
We saw that if G admits ACR and nondegenerate multistationarity simultaneously, then G has at least 3 reactant complexes
(Theorem <ref>).
If, however, G is full-dimensional, then more reactants are required, as stated in the following result.
Let G be a full-dimensional bimolecular reaction network with exactly 3 species. If there exists a vector of positive rate constants κ^* such that (G,κ^*) has ACR and is nondegenerately multistationary, then G has at least 5 reactant complexes (and hence at least 5 reactions).
Proposition <ref> is a direct consequence of
Propositions <ref>(3) and <ref>, and a stronger version of this result appears
in the next section (Theorem <ref>).
Proposition <ref> implies that if a full-dimensional bimolecular network with 3 species and fewer than 5 reactions has both ACR and multistationarity, then this coexistence happens in a degenerate way.
We illustrate this situation with two examples, and then characterize all such networks with exactly 4 reactant complexes (Proposition <ref>).
Consider the following full-dimensional network with 3 species, 4 reactions, and 4 reactant complexes:
{2Z → Z, X+Y → Z → Y+Z, 0 → X}.
When all rate constants are 1, the mass-action ODEs are as follows:
dx/dt = 1-xy
dy/dt = z - xy
dz/dt = -z^2 + xy .
For this system,
the set of positive steady states is { (x,y,z) ∈ℝ^3_>0| xy=z=1},
and every positive steady state is degenerate.
We conclude that this system is multistationary (but degenerately so) and has
ACR in species Z (with ACR-value 1).
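These claims can be confirmed with a short symbolic computation (our own sketch, not from the paper); note that the Jacobian determinant is identically zero here, so degeneracy holds at every positive steady state of this full-dimensional network.
```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = sp.Matrix([1 - x*y, z - x*y, -z**2 + x*y])          # mass-action ODE right-hand sides
print(sp.solve(list(f), [y, z], dict=True))             # [{y: 1/x, z: 1}], i.e. x*y = z = 1
print(sp.simplify(f.jacobian([x, y, z]).det()))          # 0: every steady state is degenerate
```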
Consider the following network:
{ X+Z→ Z, Y+Z ⇆ Y→ 0, 2X← X→ X+Y }.
Like the network in Example <ref>,
this network is full-dimensional and has
3 species,
4 reactions, and 4 reactant complexes; however, the set of reactant complexes differs.
When all rate constants are 1, the mass-action ODEs are as follows:
dx/dt = x-xz
dy/dt = x - y
dz/dt = y - yz .
For this system,
the set of positive steady states is { (x,y,z) ∈ℝ^3_>0| x=y, z=1},
and every positive steady state is degenerate.
Thus, this system is (degenerately) multistationary and has
ACR in species Z (with ACR-value 1).
The next result shows that Examples <ref> and <ref> cover all cases of three-species, four-reactant networks with ACR and (degenerate) multistationarity occurring together, in the sense that these two networks represent the only two possibilities for the set of reactant complexes
(when a certain full-rank condition is met, which we discuss below in Remark <ref>).
Let G be a full-dimensional bimolecular
reaction network with exactly 3 species – which we call X,Y,Z – and exactly 4 reactant complexes.
If κ^* is a vector of positive rate constants such that:
(a) rank(N) = 3,
where N is the matrix for (G,κ^*) as in (<ref>),
(b) (G, κ^*) has ACR in species Z, and
(c) (G,κ^*) is multistationary (which is degenerately so, by Proposition <ref>),
then the set of reactant complexes of G is either
{X, X+Z, Y, Y+Z} or
{0, X+Y, Z, 2Z}.
Let G, κ^*, and N be as in the statement of the proposition.
In particular,
G has 3 species and 4 reactants, and
(G,κ^*) admits a positive steady state, which we denote by
(x^*,y^*, α) (so α is the ACR-value of Z).
Also, N has rank 3 and so Proposition <ref>(2) and its proof imply
that steady-state equations can be “row-reduced” so that
the positive steady states of (G,κ^*) are the roots of 3 binomial equations of the following form:
h_1 := m_1 - β_1 m_4 = 0
h_2 := m_2 - β_2 m_4 = 0
h_3 := m_3 - β_3 m_4 = 0 ,
where
β_j ∈ℝ (for j=1,2,3) and
m_i=x^a_i y^b_i z^c_i (for i=1,2,3,4) are 4 distinct monic monomials given by the reactant complexes.
Also, each m_i (for i=1,2,3,4) has degree at most 2
in x,y,z (as G is bimolecular).
In other words, a_i,b_i,c_i are non-negative integers that satisfy the following:
a_i+b_i+c_i ≤ 2 .
We infer that β_1, β_2, β_3 >0, because otherwise h_1=h_2=h_3=0 would have no positive roots.
For i∈{1,2,3}, consider the following, where we recall that α is the ACR-value of Z:
g_i :=
h_i|_z = α =
d_i x^a_i y^b_i - d_i' x^a_4 y^b_4 ,
where
d_i:= α^c_i>0 and d_i':= β_i α^c_4>0.
For i∈{1,2,3},
by construction, g_i(x^*,y^*)=0 and so the subset of the positive quadrant ℝ^2_>0 defined by g_i=0, which we denote by S_i, is nonempty. There are four possible “shapes” for each set S_i:
* S_i = ℝ^2_>0, when (a_i,b_i)=(a_4,b_4) (and necessarily, d_i = d_i', to avoid S_i = ∅).
* S_i is the horizontal line y=y^*, when a_i = a_4 and b_i ≠ b_4.
* S_i is the vertical line x=x^*, when a_i ≠ a_4 and b_i = b_4.
* S_i is a strictly increasing curve (passing through (x^*,y^*)) defined by the following equation, when a_i ≠ a_4 and b_i ≠ b_4:
y = ( d_i/d'_i)^{1/(b_4 - b_i)} x^{(a_i - a_4)/(b_4 - b_i)} .
Any two lines/curves of the form (2)–(4) either coincide or intersect only at (x^*,y^*). Hence,
the intersection S_1 ∩ S_2 ∩ S_3 is either
* the single point (x^*,y^*),
* a single line or curve of the form (2)-(4), or
* the positive quadrant ℝ^2_>0.
By construction and the fact that α is the ACR-value, the set of all positive steady states of (G,κ^*) is the set {(x,y, α ) | (x,y) ∈ S_1 ∩ S_2 ∩ S_3}. Hence, in the case of (a), (G,κ^*) is not multistationary, which is a contradiction.
Next, we show that case (c) does not occur. On the contrary, assume that it does. Then S_1=S_2=S_3=_> 0^2, which implies that (a_1,b_1)=(a_2,b_2)=(a_3,b_3) = (a_4,b_4).
Since m_1,m_2,m_3,m_4 are 4 distinct monomials,
it must be that c_1,c_2,c_3,c_4 are 4 distinct non-negative integers.
However, as noted earlier, c_i∈{0,1,2} for each i, which yields a contradiction.
Finally, we consider case (b).
This case happens only when one of the following subcases occurs:
Subcase 1:
Exactly 1 of the 3 subsets S_i is the positive quadrant, and the other two coincide.
Without loss of generality, assume S_1 = ℝ^2_>0 and so S_2 = S_3 ≠ℝ^2_>0.
Hence,
(a_1,b_1)=(a_4,b_4) ≠
(a_2,b_2) = (a_3,b_3).
However, m_1 ≠ m_4 and m_2 ≠ m_3, and so:
c_1 ≠ c_4 and c_2 ≠ c_3 .
We rewrite the inequalities (<ref>),
using
the equalities
(a_1,b_1)=(a_4,b_4) and
(a_2,b_2) = (a_3,b_3):
a_1+b_1+c_1 ≤ 2 ,
a_1+b_1+c_4 ≤ 2 ,
a_2+b_2+c_2 ≤ 2 ,
a_2+b_2+c_3 ≤ 2 .
Finally, Lemma <ref> implies that each of species X and Y takes part in some reactant complex, so we obtain the following (again using (a_1,b_1)=(a_4,b_4) and
(a_2,b_2) = (a_3,b_3)):
a_1 + a_2 ≥ 1
and
b_1 + b_2 ≥ 1 .
The only non-negative solutions to the conditions in (<ref>), (<ref>), and (<ref>) are as follows:
* a_1=a_4=1, a_2=a_3=0, b_1=b_4=0, b_2=b_3=1, {c_1,c_4} = {c_2,c_3} = {0,1};
* a_1=a_4=0, a_2=a_3=1, b_1=b_4=1, b_2=b_3=0, {c_1,c_4} = {c_2,c_3} = {0,1}.
In all of these solutions, the set of reactant complexes is {X, X+Z, Y, Y+Z}.
Subcase 2:
Exactly 2 of the 3 subsets S_i are the positive quadrant.
Without loss of generality, assume that
S_1 = S_2 = ℝ^2_>0≠ S_3. This implies the following:
(a_1,b_1) = (a_2,b_2) = (a_4,b_4) ≠ (a_3,b_3) .
However, m_1,m_2,m_4 are 3 distinct monomials, so c_1, c_2,c_4 are
3 distinct non-negative integers. Now inequality (<ref>) implies that
{c_1, c_2,c_4} = {0,1,2}. Let i^* ∈{1,2,4} be such that c_i^*=2.
Next,
the equalities in (<ref>) and the
inequality (<ref>) for i=i^* together imply that
(a_1,b_1)=(a_2,b_2)=(a_4,b_4)=(0,0).
Therefore, the set of reactant complexes corresponding to m_1,m_2,m_4 is
{0, Z, 2Z}.
Finally, Lemma <ref> implies that the fourth reactant complex must involve both X and Y and so (by bimolecularity) is X+Y.
Therefore, the set of reactant complexes is {0, X+Y, Z, 2Z}.
Subcase 3:
None of the subsets S_i are positive quadrants, and the 3 sets coincide.
This implies that (a_1,b_1)=(a_2,b_2)=(a_3,b_3) ≠ (a_4,b_4).
These conditions are symmetric to those in subcase 2, and so the reactant complexes are {0, Z, 2Z, X+Y}. This completes subcase 3 (and case (b)).
Proposition <ref> includes the hypothesis that the matrix N for the system (G,κ^*) has (full) rank 3.
If we remove this hypothesis, we can obtain more full-dimensional networks with 3 species and 4 reactants that allow ACR and (degenerate) multistationarity to occur together.
We present one such network in Example <ref>.
Consider the following full-dimensional network with 3 species and 4 reactants:
G:={0→ X→ Y → 2 Y, Y← Y+Z → 2 Z} .
The system (G,κ^*) obtained by setting all the reaction rates to 1 has the following ODEs:
[ dx/dt; dy/dt; dz/dt ] = [ -1 0 0 1; 1 1 -1 0; 0 0 0 0; ][ x; y; yz; 1 ] = N
[ x; y; yz; 1 ] .
The matrix N (defined above) has rank 2, the set of positive steady states is { (x,y,z) ∈ℝ^3_>0| x=1, y(z-1)=1}, and every positive steady state is degenerate.
Thus, this system is (degenerately) multistationary and has
ACR in species X (with ACR-value 1).
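A short computation (our own sketch, not from the paper) confirms the stated rank of N, the steady-state set, and the degeneracy:
```python
import numpy as np
import sympy as sp

N = np.array([[-1, 0, 0, 1], [1, 1, -1, 0], [0, 0, 0, 0]])
print(np.linalg.matrix_rank(N))                          # 2

x, y, z = sp.symbols('x y z', positive=True)
f = sp.Matrix([1 - x, x + y - y*z, 0])                   # mass-action ODE right-hand sides
print(sp.solve([1 - x, x + y - y*z], [x, z], dict=True)) # [{x: 1, z: (y + 1)/y}], i.e. y*(z-1) = 1
print(sp.simplify(f.jacobian([x, y, z]).det()))          # 0: every steady state is degenerate
```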
In the next section, we see that the exceptional networks in Proposition <ref> – namely, full-dimensional, three-species networks with reactant-complex set {0,Z,2Z,X+Y} or {X,Y,X+Z,Y+Z} – do not have unconditional ACR. Indeed, this fact is a direct consequence of a more general result concerning networks with n species and n+1 reactants (Theorem <ref>).
§ MAIN RESULTS ON GENERAL NETWORKS
The results in the prior section pertain to networks that are bimolecular, while here we analyze networks that need not be bimolecular.
We consider full-dimensional networks (Section <ref>) and non-full-dimensional networks (Section <ref>) separately.
§.§ Full-dimensional networks
In Proposition <ref>, we saw a family of networks
that admit
ACR and nondegenerate multistationarity together.
These networks
have n reactants (where n is the number of species), but are not full-dimensional.
In this subsection, we show that for full-dimensional networks,
the coexistence of
ACR and nondegenerate multistationarity requires at least n+2 reactants (Theorem <ref>).
We also show that this lower bound is tight (Proposition <ref>).
Additionally, we consider full-dimensional networks with only n+1 reactants and show that if such a network is multistationary (even if only degenerately so), then the network can not have unconditional ACR
(Theorem <ref>).
Let G be a full-dimensional reaction network with n
species.
If there exists a vector of positive rate constants κ^* such that the mass-action system (G,κ^*) has ACR and also is nondegenerately multistationary, then
n ≥ 2 and
G has at least n+2 reactant complexes and hence, at least n+2 reactions.
It follows readily from definitions that ACR and multistationarity do not coexist in networks with only one species, so n ≥ 2.
We proceed by contrapositive.
We consider two cases.
If G has at most n reactant complexes, then
Proposition <ref> (which requires n ≥ 2) implies that every positive steady state of (G,κ^*) is degenerate and so (G,κ^*) is not nondegenerately multistationary.
In the remaining case, when G has n+1 reactant complexes, Proposition <ref>(3)
implies that (G,κ^*) is not nondegenerately multistationary.
The next example shows that the bound in Theorem <ref> is tight for n=2.
The following network is full-dimensional and has 2 species, 4 reactant complexes, and 4 reactions (the out-of-order labeling of the rate constants is to be consistent with Proposition <ref>, which appears later):
{
A+B κ_1 → 2B κ_3 → 2B+A
,
B κ_2 → 0 κ_4 → A
} .
Observe that all reactant complexes are bimolecular, but one of the product complexes is not.
In the next result, we show that this network exhibits ACR (in species A with ACR-value κ_2/κ_1) and nondegenerate multistationarity when κ_2^2 > 4 κ_3 κ_4 (Proposition <ref>).
Among full-dimensional networks for which ACR and nondegenerate multistationarity coexist,
this network is optimal
in the sense that it has the fewest possible
species, reactant complexes, and reactions (by
Theorem <ref>).
In the next result, we generalize the network in Example <ref> to a family of networks that show that the lower
bound on the number of reactions in Theorem <ref> is tight for all n.
The networks in the following result are also optimal in terms of the molecularity of the reactant complexes (they are bimolecular), although two of the product complexes have high molecularity.
For all n ≥ 2, consider the following full-dimensional network with
n species, n+2 reactions, and n+2 reactant complexes:
G_n = {
X_1+X_2 κ_1 → 2X_2 +
X_3 + … + X_n
,
X_2 κ_2 → 0,
2X_2 κ_3 → 2X_2+X_1
,
0 κ_4 → X_1
}
⋃{ X_jκ_j+2→ 0 | 3 ≤ j ≤ n}.
For every vector of positive rate constants κ^* for which (κ^*_2)^2 > 4 κ^*_3 κ^*_4, the system (G_n, κ^*) has nondegenerate multistationarity and has ACR in species X_1.
The mass-action ODEs are given by:
dx_1/dt = κ_3 x_2^2 - κ_1 x_1 x_2 + κ_4
dx_2/dt = κ_1 x_1 x_2-κ_2x_2
dx_j/dt = κ_1 x_1 x_2 - κ_j+2 x_j for j∈{3,…, n}.
The steady-state equation for X_2 implies that x_1=κ_2/κ_1 at all positive steady states, so there is ACR in X_1 (whenever positive steady states exist). Next, the steady-state equations for X_1 and X_2 imply that the steady state values of X_2 are
x_2^± :=
(κ_2 ±√(κ_2^2 - 4κ_3κ_4))/(2κ_3). Both of these steady-state values are positive precisely when the discriminant κ_2^2 - 4κ_3κ_4 is positive (this is a straightforward computation; alternatively, see <cit.>). Now we use the steady-state equation for X_j, where j≥ 3, to compute the two positive steady states that exist whenever (κ^*_2)^2 > 4 κ^*_3 κ^*_4:
(
x_1^*,
x_2^+, (κ_1/κ_5) x_1^*x_2^+, …,
(κ_1/κ_n+2) x_1^*x_2^+)
and (
x_1^*,
x_2^-, (κ_1/κ_5) x_1^*x_2^-, …,
(κ_1/κ_n+2) x_1^*x_2^-) ,
where x^*_1:=κ_2/κ_1.
Finally, nondegeneracy can be checked by computing the Jacobian matrix.
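For the smallest member of the family (n = 2, the network of Example <ref> above), the following numerical sketch (our own; the rate constants are chosen only so that κ_2^2 > 4κ_3κ_4) exhibits the ACR value κ_2/κ_1 and the two nondegenerate positive steady states.
```python
import numpy as np

k1, k2, k3, k4 = 1.0, 3.0, 1.0, 1.0          # assumed rates; k2**2 = 9 > 4 = 4*k3*k4
x1 = k2 / k1                                  # ACR value of X_1
for x2 in np.roots([k3, -k2, k4]):            # k3*x2**2 - k2*x2 + k4 = 0 (after x1 = k2/k1)
    # Jacobian of (f1, f2) at the steady state; a nonzero determinant means nondegenerate.
    J = np.array([[-k1*x2, 2*k3*x2 - k1*x1],
                  [ k1*x2,  k1*x1 - k2     ]])
    print(f"steady state ({x1:.3f}, {x2:.3f}),  det J = {np.linalg.det(J):.3f}")
```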
Our next result concerns full-dimensional networks with one more reactant than the number of species, as follows.
Let G be a full-dimensional network, with n species and exactly n+1 reactant complexes. If
G is multistationary, then there exists a vector of positive rate constants κ such that (G,κ) has no positive steady states, and hence G does not have unconditional ACR.
Assume that G is full-dimensional, has exactly n+1 reactant complexes (where n is the number of species), and is multistationary. By definition, there exists κ^* ∈ℝ^r_>0, where r is the number of reactions, such that (G,κ^*) is multistationary. Let N be the n × (n+1) matrix arising from (G,κ^*), as in (<ref>); and let A be the n × n matrix defined by G, as in Proposition <ref>.
We claim that rank(N) ≤ n-1 or rank(A) ≤ n-1. Indeed, if rank(N) = n and rank(A) = n, then the proof of Proposition <ref> shows that (G,κ^*) is not multistationary, which is a contradiction.
If rank(N) ≤ n-1, then Proposition <ref>(2) implies that
there exists
κ^**∈ℝ^r_>0 such that (G,κ^**) has no positive steady states.
Similarly, in the remaining case, when rank(N) =n and rank(A) ≤ n-1,
the desired result follows directly from
Proposition <ref>(2).
§.§ Non-full-dimensional networks
In an earlier section, we saw a family of networks
with n species, n reactant complexes, and exactly one conservation law, for which
ACR and nondegenerate multistationarity coexist (Proposition <ref>). Our next result shows that this n
is the minimum number of reactant complexes (when there is one conservation law), and, furthermore, as the number of conservation laws increases, the minimum number of reactant complexes required decreases.
Let G be a reaction network with n ≥ 3 species and k ≥ 1 conservation laws (more precisely, G has dimension n-k). If there exists a vector of positive rate constants
κ^*
such that the system (G,κ^*) is nondegenerately multistationary and has ACR in some species,
then G has at least n-k+1 reactant complexes.
If G has k ≥ 1 conservation laws and at most n-k reactant complexes, then Propositions <ref>(1) and
<ref> together imply that G is not nondegenerately multistationary.
As noted earlier, the bound in Theorem <ref>
is tight for k=1, due to Proposition <ref>.
We also know that, for k=n-1, the bound holds vacuously (Proposition <ref>). Our next result shows that the bound is also tight for all remaining values of k (namely, 2 ≤ k ≤ n-2).
Let n ≥ 3, and let k ∈{2,3,…, n-2}. If k ≠ n-2, consider the following network:
G_n,k = {X_1+X_2+∑_j=n+2-k^nX_j κ_1⟶ 2X_3+∑_j=4^nX_j, X_3 κ_2⟶ X_1 , 2X_3 κ_3⟶ 2X_2}
⋃{
X_4 κ_4⟶ 0, … ,
X_n+1-k κ_n-k+1⟶ 0
} .
On the other hand, if k = n-2, consider the following network:
G_n,k = {X_1+X_2+∑_j=4^nX_j κ_1⟶ 2X_3+∑_j=4^nX_j, X_3 κ_2⟶ X_1 , 2X_3 κ_3⟶ 2X_2} .
Each such network G_n,k satisfies the following:
* G_n,k has n species, n-k+1 reactants, and n-k+1 reactions;
* G_n,k has dimension n-k, and the following are k linearly independent conservation laws: x_1+x_2+x_3=T and x_j = T_j for j∈{n-k+2,…,n}.
* for every vector of positive rate constants κ, the system (G_n,k,κ) is nondegenerately multistationary and also has ACR in species X_3, X_4, …, X_n-k+1.
This result can be checked directly, in a manner similar to the proof of
Proposition <ref>.
Indeed, for every vector of positive rate constants, there is ACR in species X_3,X_4, …,X_n-k+1 and exactly two nondegenerate positive steady states when T is large enough.
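As an illustration, the following numerical sketch (our own; the rate constants and totals are arbitrary) checks the k = n-2 case with n = 4, i.e. the second form of G_n,k, which has no degradation reactions.
```python
import numpy as np

# G_{4,2} = {X1+X2+X4 -> 2X3+X4, X3 -> X1, 2X3 -> 2X2}; classes: x1+x2+x3 = T, x4 = T4.
k1, k2, k3 = 1.0, 1.0, 1.0
T, T4 = 10.0, 2.0
x3, x4 = k2/(2*k3), T4                     # ACR value of X_3; X_4 is fixed by its class
s, p = T - x3, k2**2/(2*k1*k3*x4)          # x1 + x2 = s,  x1*x2 = p
for x1 in np.roots([1.0, -s, p]):
    x2 = s - x1
    # Rows: Jacobians of f1 and f2, plus the two conservation laws.
    M = np.array([[-k1*x2*x4, -k1*x1*x4, k2,      -k1*x1*x2],
                  [-k1*x2*x4, -k1*x1*x4, 4*k3*x3, -k1*x1*x2],
                  [1, 1, 1, 0],
                  [0, 0, 0, 1]])
    print(f"x = ({x1:.4f}, {x2:.4f}, {x3}, {x4}),  det = {np.linalg.det(M):.4f}")
```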
The reaction networks in Proposition <ref> are not bimolecular, and they contain reactions with many catalyst-only species (namely X_n-k+2,…,X_n).
We do not know whether there exist reaction networks
that
are bimolecular and do not contain reactions with catalyst-only species, and yet (like the networks in Proposition <ref>)
show that the lower
bound in Theorem <ref> is tight.
§ DISCUSSION
In this article, we proved lower bounds in terms of the dimension and the numbers of species, reactant complexes (and thus reactions), and all complexes (both reactant and product complexes) needed for the coexistence of ACR and nondegenerate multistationarity. Additionally, we
showed that these bounds are tight, via the network
{A+B → 2C → 2B, C→ A } (Example <ref>).
Networks like the one in Example <ref> contain special structures that may be biologically significant.
Exploring such structures will aid in establishing design principles for creating networks with ACR and multistationarity. We plan to explore such networks and their architecture in the future.
In the present work, our interest in multistationarity comes from the fact that it is a necessary condition for multistability. Another interesting direction, therefore, is to investigate conditions for coexistence of ACR and multistability, rather than multistationarity. The “minimal" networks in the current work admit only two positive steady states and are not multistable. Hence, we conjecture that the lower bounds (on dimension and the numbers of species, reactant complexes, and all complexes) for the coexistence of ACR and multistability are strictly larger than the bounds proven here for ACR and multistationarity.
Finally, we are interested in the conditions for the coexistence of other combinations of biologically significant dynamical properties, such as ACR and oscillations. In addition to the minimum requirements for their coexistence, we also hope to discover new network architectures or motifs that can be used to design synthetic networks possessing these dynamical properties.
§.§ Acknowledgements
This project began at an AIM workshop on “Limits and control of stochastic reaction networks” held online in July 2021.
AS was supported by the NSF (DMS-1752672). The authors thank Elisenda Feliu, Oskar Henriksson, Badal Joshi, and Beatriz Pascual-Escudero for many helpful discussions.
plain
10
AC:non-mass
David. F. Anderson and Simon. L. Cotter.
Product-form stationary distributions for deficiency zero networks
with non-mass action kinetics.
B. Math. Biol., 78(12), 2016.
ACK:product
David F. Anderson, Gheorghe Craciun, and Thomas G. Kurtz.
Product-form stationary distributions for deficiency zero chemical
reaction networks.
B. Math. Biol., 72(8), 2010.
AN:non-mass
David F. Anderson and Tung D. Nguyen.
Results on stochastic reaction networks with non-mass action
kinetics.
Math. Biosci. Eng., 16(4):2118–2140, 2019.
splitting-banaji
Murad Banaji.
Splitting reactions preserves nondegenerate behaviours in chemical
reaction networks.
SIAM J. Appl. Math., 83(2):748–769, 2023.
banaji-boros
Murad Banaji and Balázs Boros.
The smallest bimolecular mass action reaction networks admitting
Andronov–Hopf bifurcation.
Nonlinearity, 36(2):1398, 2023.
banaji-boros-hofbauer-3-rxn
Murad Banaji, Balázs Boros, and Josef Hofbauer.
Oscillations in three-reaction quadratic mass-action systems.
Preprint, arXiv:2304.02303.
boros2019existence
Balázs Boros.
Existence of positive steady states for weakly reversible mass-action
systems.
SIAM J. Math. Anal., 51(1):435–449, 2019.
CIK
Carsten Conradi, Alexandru Iosif, and Thomas Kahle.
Multistationarity in the space of total concentrations for systems
that admit a monomial parametrization.
B. Math. Biol., 81:4174–4209, 2019.
Deng
Jian Deng, Martin Feinberg, Chris Jones, and Adrian Nachman.
On the steady states of weakly reversible chemical reaction networks.
Preprint, arXiv:1111.2386.
dennis-shiu
Allison Dennis and Anne Shiu.
On the connectedness of multistationarity regions of small reaction
networks.
Preprint, arXiv:2303.03960.
F1
Martin Feinberg.
Complex balancing in general kinetic systems.
Arch. Ration. Mech. Anal., 49:187–194, 1972.
FeinbergLec79
Martin Feinberg.
Lectures on chemical reaction networks.
Delivered at the Mathematics Research Center, Univ. Wisc.-Madison.
Available at http://crnt.engineering.osu.edu/LecturesOnReactionNetworks, 1979.
Feinberg1987
Martin Feinberg.
Chemical reaction network structure and the stability of complex
isothermal reactors–I. The deficiency zero and deficiency one theorems.
Chem. Eng. Sci., 42(10):2229–2268, 1987.
FeinDefZeroOne
Martin Feinberg.
The existence and uniqueness of steady states for a class of chemical
reaction networks.
Arch. Ration. Mech. Anal., 132(4):311–370, 1995.
feliu-henriksson-pascual
Elisenda Feliu, Oskar Henriksson, and Beatriz Pascual-Escudero.
Dimension and degeneracy of solutions of parametric polynomial
systems arising from reaction networks.
Preprint, arXiv:2304.02302.
HT
Vera Hars and János Tóth.
On the inverse problem of reaction kinetics.
Qualitative Theory of Differential Equations, 30:363–379,
1979.
H
Fritz Horn.
Necessary and sufficient conditions for complex balancing in chemical
kinetics.
Arch. Ration. Mech. Anal., 49:172–186, 1972.
H-J1
Fritz Horn and Roy Jackson.
General mass action kinetics.
Arch. Ration. Mech. Anal., 47:187–194, 1972.
joshi-kaihnsa-nguyen-shiu-1
Badal Joshi, Nidhi Kaihnsa, Tung D. Nguyen, and Anne Shiu.
Prevalence of multistationarity and absolute concentration robustness
in reaction networks.
Preprint, arXiv:2301.10337.
atoms_multistationarity
Badal Joshi and Anne Shiu.
Atoms of multistationarity in chemical reaction networks.
J. Math. Chem., 51:153–178, 2013.
joshi2017small
Badal Joshi and Anne Shiu.
Which small reaction networks are multistationary?
SIAM J. Appl. Dyn. Syst., 16(2):802–833, 2017.
Kholodenko2010
Boris N. Kholodenko, John F. Hancock, and Walter Kolch.
Signalling ballet in space and time.
Nat. Rev. Mol. Cell Bio., 11:414–426, June 2010.
lin-tang-zhang
Kexin Lin, Xiaoxian Tang, and Zhishou Zhang.
Multistationarity of reaction networks with one-dimensional
stoichiometric subspaces.
CSIAM Trans. Appl. Math., 3: 564–600, 2022.
acr-dim-1
Eduardo R Mendoza, Dylan Antonio SJ Talabis, Editha C Jose, and Lauro L
Fontanil.
Absolute concentration robustness in rank-one kinetic systems.
Preprint, arXiv:2304.03611.
MST
Nicolette Meshkat, Anne Shiu, and Angelica Torres.
Absolute concentration robustness in networks with low-dimensional
stoichiometric subspace.
Vietnam J. Math., 50:623–651, 2022.
mv-small-networks
Nida Obatake, Anne Shiu, and Dilruba Sofia.
Mixed volume of small reaction networks.
Involve, 13:845–860, 2020.
pantea-voitiuk
Casian Pantea and Galyna Voitiuk.
Classification of multistationarity for mass action networks with
one-dimensional stoichiometric subspace.
Preprint, arXiv:2208.06310.
ACR
Guy Shinar and Martin Feinberg.
Structural sources of robustness in biochemical reaction networks.
Science, 327(5971):1389–1391, 2010.
shiu2019nondegenerate
Anne Shiu and Timo de Wolff.
Nondegenerate multistationarity in small reaction networks.
Discrete Contin. Dyn. Syst. - Ser. B, 24(6), 2019.
tang-wang-hopf
Xiaoxian Tang and Kaizhang Wang.
Hopf bifurcations of reaction networks with zero-one stoichiometric
coefficients.
Preprint, arXiv:2208.04196.
MR4241183
Xiaoxian Tang and Hao Xu.
Multistability of small reaction networks.
SIAM J. Appl. Dyn. Syst., 20(2):608–635, 2021.
tang-zhang
Xiaoxian Tang and Zhishou Zhang.
Multistability of reaction networks with one-dimensional
stoichiometric subspaces.
SIAM J. Appl. Dyn. Syst., 21(2):1426–1454, 2022 .
tonello2017network
Elisa Tonello and Matthew D. Johnston.
Network translation and steady state properties of chemical reaction
systems.
B. Math. Biol., 80(9):2306–2337, 2018.
tyson-albert
John J Tyson, Reka Albert, Albert Goldbeter, Peter Ruoff, and Jill Sible.
Biological switches and clocks.
J. R. Soc. Interface, 5:S1–S8, 2008.
smallestHopf
Thomas Wilhelm and Reinhart Heinrich.
Smallest chemical reaction system with Hopf bifurcation.
J. Math. Chem., 17(1):1–14, 1995.
Authors' addresses:
Nidhi Kaihnsa,
University of Copenhagen [email protected]
Tung Nguyen, Texas A&M University
[email protected]
Anne Shiu, Texas A&M University
[email protected]
| http://arxiv.org/abs/2307.04229v1 | 20230709170952 | Frequency-Domain Model of Microfluidic Molecular Communication Channels with Graphene BioFET-based Receivers | Ali Abdali, Murat Kuscu | cs.ET | cs.ET |
Frequency-Domain Model of Microfluidic Molecular Communication Channels with Graphene BioFET-based Receivers
Ali Abdali, Student Member, IEEE,
and Murat Kuscu, Member, IEEE
The authors are with the Nano/Bio/Physical Information and Communications Laboratory (CALICO Lab), Department of Electrical and Electronics Engineering, Koç University, Istanbul, Turkey (e-mail: {aabdali21, mkuscu}@ku.edu.tr).
This work was supported in part by EU Horizon 2020 MSCA-IF under Grant #101028935, and by The Scientific and Technological Research Council of Turkey (TUBITAK) under Grant #120E301.
August 12, 2023
Molecular Communication (MC) is a bio-inspired communication paradigm utilizing molecules for information transfer. Research on this unconventional communication technique has recently started to transition from theoretical investigations to practical testbed implementations, primarily harnessing microfluidics and sensor technologies. Developing accurate models for input-output relationships on these platforms, which mirror real-world scenarios, is crucial for assessing modulation and detection techniques, devising optimized MC methods, and understanding the impact of physical parameters on performance. In this study, we consider a practical microfluidic MC system equipped with a graphene field effect transistor biosensor (bioFET)-based MC receiver as the model system, and develop an analytical end-to-end frequency-domain model. The model provides practical insights into the dispersion and distortion of received signals, thus potentially informing the design of new frequency-domain MC techniques, such as modulation and detection methods. The accuracy of the developed model is verified through particle-based spatial stochastic simulations of pulse transmission in microfluidic channels and ligand-receptor binding reactions on the receiver surface.
Molecular communications, receiver, frequency-domain model, graphene bioFETs, microfluidics, ligand-receptor interactions
§ INTRODUCTION
Molecular Communications (MC) is a bio-inspired communication paradigm that uses molecules as information carriers <cit.>. The unique properties of MC, such as biocompatibility, energy efficiency, and reliability under complex and dynamic physiological conditions, are promising for enabling seamless interactions among natural/synthetic cells and micro/nanoscale devices, so-called bio-nano things. Through the emerging Internet of Bio-Nano Things (IoBNT) framework, MC is expected to usher in a new era of unparalleled healthcare and environmental applications at the intersection of information communication technologies, biotechnology, and nanotechnology <cit.>.
MC research has predominantly focused on the development of theoretical channel models, modulation, detection and coding schemes, as well as the design of transmitter and receiver architectures <cit.>. Recent progress in the field has facilitated the integration of experimental validations with theoretical studies, utilizing MC testbeds of varying scales and sophistication. Notably, some of these testbeds, due to their scalability to micro/nanoscales, have the potential to serve as an ideal link between theoretical frameworks and practical applications of MC. Microfluidics technology plays a pivotal role in these practical investigations, as it enables the testing of diverse MC channels while offering comprehensive control over system parameters, such as flow conditions and channel geometry. Moreover, microfluidic channels closely mimic blood vessels and other biological microenvironments, characterized by convection-diffusion-based molecular transport processes <cit.>.
Integrating chemical sensors into microfluidic chips has augmented the utility of these testbeds, with sensors acting as MC receivers that vary in material, geometry, and transduction processes. Among these, affinity-based field-effect transistor biosensors (bioFETs) have emerged as compelling MC receiver architectures due to their inherent signal amplification, miniaturization capabilities, and ligand receptor-based interfaces that provide control over selectivity and sensitivity, resembling biological cells performing molecular sensing. Graphene bioFETs have garnered particular attention owing to the graphene's flexibility, two-dimensional (2D) geometry, and capacity to be functionalized with various bioreceptors, including DNA aptamers and proteins <cit.>. Initial investigations involving practical microfluidic MC systems equipped with graphene bioFET-based MC receivers have already unveiled crucial insights into the effects of convection, diffusion, ligand-receptor (LR) binding reactions, and receiver material properties on the MC performance <cit.>.
Despite these developments, the majority of theoretical and practical studies still primarily focus on the time-domain aspects of MC systems. This can be attributed to the fundamentally distinct nature of the information carriers, i.e., discrete molecules, which lead to ambiguities and complications in defining carrier waves and frequencies for this unconventional communication technique. Additionally, the innate nonlinearity and time-variance of MC communication systems pose challenges to the adoption of frequency-domain techniques as widely-utilized tools in MC research. Nevertheless, when operating regimes can be characterized or approximated as linear and time-invariant (LTI), exploring the frequency-domain features of MC systems can yield crucial insights regarding channel characteristics, such as bandwidth, as well as dispersion and distortion of the transmitted signals. This approach also offers a deeper understanding of the impact of various system parameters on communication performance, including channel geometry, LR binding kinetics at the channel/receiver interface, and the electrical characteristics of the transducer channel within the receiver. Moreover, frequency-domain models can enable the adoption of sophisticated communication tools and methods from conventional EM, including transfer functions and filters, to optimize MC systems and develop new communication techniques, such as frequency-domain pulse-shaping, modulation and detection techniques.
There has been a limited focus on frequency-domain analysis in MC. A notable contribution is the frequency-domain model for diffusion-based MC systems developed in <cit.>, which allows the determination of the end-to-end normalized gain and delay of the MC system as a function of frequency. In <cit.>, a frequency-domain equalizer (FDE) was proposed to address the inter-symbol interference (ISI) problem in MC. The transfer function of the MC channel, considering only diffusion-based transport, was derived in <cit.>.
In our recent study <cit.>, we introduced a frequency-domain detection technique for MC to estimate the concentration of information molecules in the presence of interfering molecules, leveraging LR binding kinetics. This method employs the power spectral density (PSD) of binding noise, which exhibits unique properties for each molecule type, enabling the differentiation of information molecules from interferers in the frequency-domain.
In this study, we present an end-to-end frequency-domain system model for a microfluidic MC channel employing a graphene bioFET-based MC receiver with ligand receptors on its surface for detecting molecular messages carried by information molecules (i.e., ligands). We consider transmitted signals as finite-duration molecular concentration pulses. We partition the end-to-end MC system into three subsystems: (i) the microfluidic propagation channel, where ligand propagation is governed by convection and diffusion; (ii) the channel/receiver interface, where the receiver's surface receptors interact with propagating ligands; and (iii) the graphene bioFET-based receiver, which transduces the number of bound receptors into an output electrical current. By employing LTI approximations, we analyze each subsystem independently and derive their transfer functions. We then combine these to obtain the end-to-end MC system's transfer function. The developed frequency-domain model is validated through particle-based spatial stochastic simulations using Smoldyn, an open-source simulation framework <cit.>. The simulation results show a strong agreement with the developed analytical frequency-domain model. We also examine the impact of various system parameters, such as pulse width of input signals, diffusion coefficient of ligands, and binding and unbinding rates of LR pairs, on the transfer function. Additionally, we leverage the developed model to determine the minimum sampling frequency for digitizing the output current by identifying the cutoff frequency and applying the Nyquist–Shannon theorem.
The remainder of this paper is organized as follows. Section <ref> offers an in-depth analysis of the three key components of the microfluidic MC system, followed by the development of the end-to-end frequency-domain model, which is then utilized to obtain the output signal. Section <ref> presents the simulation results intended to validate the developed model. Lastly, Section <ref> delivers concluding remarks.
§ END-TO-END FREQUENCY-DOMAIN MODEL
In this section, we present the derivation of the end-to-end frequency-domain model for the microfluidic MC system, depicted in Fig. <ref>(a). The system comprises a rectangular cross-section microfluidic channel in which molecular signals are uniformly transmitted across the cross-section of the channel inlet. The microfluidic channel is assumed to be open-ended, with a two-dimensional graphene bioFET-based biosensor serving as the receiver, positioned at the bottom of the channel without obstructing molecular propagation.
To establish the end-to-end model, transfer functions are derived for three subsystems: propagation of ligands within the microfluidic MC channel, LR binding interactions at the receiver surface, and molecular-to-electrical transduction process within the graphene bioFET-based MC receiver. The block diagram illustrating the end-to-end microfluidic MC system is provided in Fig. <ref>(b).
§.§ Transfer Function of Microfluidic Propagation Channel
We consider a straight microfluidic channel with a rectangular cross-section which is filled up with an electrolyte as the medium of propagation for molecular signals. The transmitter is located at the entrance of the microfluidic channel, while the receiver, a graphene bioFET, is situated at the base of the channel at position x=x_r as shown in Fig. <ref>(a). The surface of the graphene transduction channel of the receiver is functionalized with selective receptors, which are exposed to ligands of time-varying concentration. The receiver senses the concentration of ligands, flowing over its surface, through LR binding reactions. The flow is unidirectional from the inlet to the open outlet of the microfluidic channel.
The convection-diffusion equation, which is a linear partial differential equation, describes the behavior of mass transport of ligands within the microfluidic channel as <cit.>
∂ϕ/∂ t= -u ∇ϕ + D ∇ ^2ϕ,
where ϕ is the concentration of the released ligands, D is the diffusion coefficient of the ligands, u is the fluid flow velocity. The convection-diffusion equation describes the spatiotemporal evolution of the ligand concentration profile, ϕ, through the convective term (-u ∇ϕ) and the diffusive term (D ∇ ^2ϕ).
In this study, we consider a uniform and unidirectional fluid flow solely along the x-axis, i.e., u = u_x. To simplify the analysis of ligand transport, we adopt a one-dimensional (1D) approximation, focusing primarily on the x-axis. This assumption is justified when the ligand propagation is predominantly unidirectional, and lateral dispersion is much smaller than longitudinal transport. Such conditions arise due to the design and flow characteristics of the microfluidic channel, as discussed in <cit.>. The validity of this approximation can be quantified using the Péclet number, defined as Pe = u_x l / D. In this definition, l denotes the characteristic length, and for our case, corresponds to the distance between the transmitter and the receiver, i.e., l = x_r. When Pe ≫ 1, the 1D approximation is valid, which is consistent with the model parameter values considered in this study. Accordingly, the 1D solution of (<ref>) for an input concentration signal in the form of an impulse at the origin (i.e., ϕ_in(x, t) = δ(x - x_0, t - t_0) with x_0 = 0, t_0 = 0), gives the impulse response of the microfluidic propagation channel:
h_p(x,t)= 1/√(4π D t)exp(-(x-u_xt)^2/4Dt).
The propagation delay, τ, is the time it takes for the peak ligand concentration to travel a distance of x from the channel inlet, given by τ = x/u_x for Pe ≫ 1. The received concentration, ϕ_r, in the time-domain for an input concentration of ϕ_in in a straight microfluidic MC channel can be calculated through the convolution of the input concentration and the impulse response as follows:
ϕ_r (l)=(h_p∗ϕ_in)(l)= ∫^+∞_-∞ h_p(x)ϕ_in(l-x)dx.
For a rectangular finite-duration
concentration pulse input with a pulse width of T_p and amplitude of C_m, the received concentration at x=l can be calculated using (<ref>) as follows:
ϕ_r(l,t)= C_m/2( erf((tu-l+T_pu)/(2√(Dt))) - erf((tu-l)/(2√(Dt))) ).
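For concreteness, the closed-form received pulse above can be evaluated directly; the short sketch below (our own, with illustrative parameter values that are not taken from this paper) uses scipy's erf and reports the peak and its timing relative to the propagation delay x_r/u_x.
```python
import numpy as np
from scipy.special import erf

D   = 1e-10        # ligand diffusion coefficient (m^2/s), assumed
u   = 1e-3         # flow velocity (m/s), assumed
x_r = 1e-2         # transmitter-receiver distance (m), assumed
Tp  = 2.0          # input pulse width (s), assumed
Cm  = 1.0          # input pulse amplitude (normalized)

t = np.linspace(1e-3, 30.0, 2000)
phi_r = 0.5 * Cm * (erf((u*t - x_r + u*Tp) / (2*np.sqrt(D*t)))
                    - erf((u*t - x_r) / (2*np.sqrt(D*t))))
print(f"Pe = {u*x_r/D:.0f}, peak = {phi_r.max():.3f} at t = {t[phi_r.argmax()]:.2f} s "
      f"(propagation delay x_r/u = {x_r/u:.1f} s)")
```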
The transfer function of the propagation channel, which represents the frequency response to an impulse signal, can be derived by solving the frequency-domain counterpart of the 1D convection-diffusion equation (<ref>) obtained using the Fourier Transform (FT):
j 2π f Φ(x,f) = - u ∂Φ(x,f)/∂ x+ D ∂^2Φ(x,f)/∂ x^2,
where, Φ(x,f), the spectral density of ligand concentration, is obtained by taking the FT of the time-domain ligand concentration signal, i.e., Φ(x,f)= ℱ(ϕ(x,t)) <cit.>.
Assuming |(8π fD)/u^2|<1 to have a converging series expansion in solution of (<ref>), and fixing x = x_r, the receiver's central position, we can analytically approximate the transfer function of the microfluidic propagation channel, i.e., H_p(f), as follows <cit.>
H_p(f) = H_p(x = x_r,f)
≈exp(-((2π f)^2D/u^3+j2π f/u) x_r).
Spectral density of the received concentration, Φ_r(f) = Φ(x=x_r, f), can be obtained via multiplication of the transfer function, H_p(f), and the spectral density of the input ligand concentration signal, Φ_in(f) = Φ_in(x = 0, f), as follows
Φ_r(f) ≈ H_p(f)Φ_in(f).
Note that the spectral density of a rectangular finite-duration concentration pulse input signal with amplitude C_m and pulse width T_p can be obtained by taking FT as
Φ_in(f)= ℱ{C_m rect(t/T_p - 0.5)} = C_m T_p sinc(f T_p),
where rect(t) = 1 for -0.5 < t < 0.5 is the rectangular function. Therefore, combining (<ref>), (<ref>), and (<ref>), spectral density of the received ligand concentration signal can be approximated as follows:
Φ_r(f) ≈ C_m T_p sinc(f T_p)
×exp(-((2π f)^2D/u^3+j2π f/u) x_r).
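The same received pulse can also be obtained numerically from the frequency-domain model by multiplying the spectrum of the rectangular input with H_p(f) and inverse-transforming. The sketch below (our own; the parameter values are the same illustrative choices as in the previous snippet) does this with an FFT, which avoids committing to a particular sinc phase convention.
```python
import numpy as np

D, u, x_r = 1e-10, 1e-3, 1e-2          # assumed diffusion coefficient, flow velocity, distance
Cm, Tp    = 1.0, 2.0                    # assumed input pulse amplitude and width
dt, Nt    = 1e-3, 2**16                 # time step and FFT length (window of ~65 s)

t  = np.arange(Nt) * dt
f  = np.fft.fftfreq(Nt, dt)
phi_in = Cm * ((t >= 0) & (t < Tp))                        # rectangular input pulse
Hp = np.exp(-(((2*np.pi*f)**2 * D / u**3) + 1j*2*np.pi*f/u) * x_r)
phi_r = np.fft.ifft(np.fft.fft(phi_in) * Hp).real          # received pulse, time domain

print(f"received peak {phi_r.max():.3f} at t = {t[phi_r.argmax()]:.2f} s "
      f"(expected delay x_r/u = {x_r/u:.1f} s)")
```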
§.§ Transfer Function of the Ligand-Receptor Binding Process
Propagating ligands encoding information bind to the receptors on the graphene bioFET-based MC receiver surface randomly and reversibly such that a formed LR complex dissociates after a random time duration. In the case of a monovalent reaction, where receptors have only one binding site and can be in one of the two states, unbound (U) or bound (B), the reversible LR binding interactions can be described in terms of reaction rates as follows
U ⇄ B (forward rate: ϕ_r(t) k_+, backward rate: k_-),
where k_+ and k_- are the binding and unbinding rates of the LR pair, and ϕ_r(t) is the time-varying ligand concentration in the vicinity of receptors <cit.>, assuming that receptors are exposed to the same ligand concentration at all times, and the number of ligands bound to receptors is much lower than the number of ligands in the vicinity of the receptors such that the ligand concentration ϕ_r(t) can be assumed to remain unchanged during the LR binding interactions. The number of bound receptors as a function of time, i.e., N_b(t), can then be written as
dN_b(t)/dt= k_+ (N_r - N_b(t)) ϕ_r(t) - k_- N_b(t),
where N_r is the total number of receptors on the receiver surface. The second-order reaction represented by the above nonlinear equation can be simplified as a first-order reaction, if the total number of receptors is much higher than the number of bound receptors at all times <cit.>, yielding a linear equation:
dN_b(t)/dt= k_+ N_rϕ_r(t) - k_- N_b(t).
The condition of first-order reaction can be quantitatively formulated as k_+ϕ_r(t)≪ k_- <cit.>, ensuring the number of bound receptors is comparatively low with a high unbinding rate. The transfer function of the LR binding process can then be obtained by solving the frequency-domain equivalent of (<ref>):
j 2 π f N_b(f) = k_+ N_rΦ_r(f) - k_- N_b(f),
considering Φ_r(f) as the input signal and N_b(f) as the output signal:
H_lr(f) = k_+ N_r/k_- + j 2 π f.
The transfer function of the LR binding process corresponds to that of a low-pass filter, characterized by a cutoff frequency of f_c,lr = k_-/2π.
Utilizing (<ref>), the spectral density of the output signal, i.e., time-varying number of bound receptors, can be obtained as follows:
N_b(f) = H_lr(f) Φ_r(f) = k_+ N_r/k_- + j 2 π fΦ_r(f).
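As a quick numerical illustration of this low-pass behaviour, the sketch below evaluates H_lr(f); the kinetic rates and receptor count are assumed values chosen to respect the condition k_+ ϕ_r(t) ≪ k_-, not the paper's fitted parameters.

```python
import numpy as np

# Assumed kinetic parameters, chosen so that k_on * phi_r << k_off holds.
k_on  = 1e-2    # binding rate k_+
k_off = 2.0     # unbinding rate k_- (1/s)
N_r   = 1000    # total number of receptors on the receiver surface

def H_lr(f):
    """First-order low-pass transfer function of the LR binding process."""
    return k_on * N_r / (k_off + 1j * 2.0 * np.pi * np.asarray(f, dtype=float))

if __name__ == "__main__":
    f_c = k_off / (2.0 * np.pi)          # cutoff frequency f_c,lr of the binding stage
    f = np.array([0.0, f_c, 10.0 * f_c])
    print("cutoff frequency:", f_c, "Hz")
    print("|H_lr| at 0, f_c and 10*f_c:", np.abs(H_lr(f)))
    # The magnitude drops by 1/sqrt(2) at f_c, confirming the low-pass behaviour;
    # multiplying H_lr(f) by Phi_r(f) from the previous sketch gives N_b(f).
```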
§.§ Transfer Function of Graphene BioFET-based MC Receiver
We consider a graphene bioFET-based MC receiver fabricated on a Si/SiO_2 substrate with a monolayer graphene, which is connected to power sources via deposited metal (Cr/Au) drain and source contacts insulated from the electrolyte MC channel through an insulator layer (e.g., thin Al_2O_3 film), as shown in Fig. <ref>(a). A bio-recognition layer is incorporated onto the surface of graphene, comprising receptors that interact with ligands through the LR binding process. A DC potential, denoted as V_ref, is applied to the electrolyte to determine the operating point. This receiver architecture has been previously implemented by our group and its fabrication methodology was detailed in <cit.>. In this architecture, binding of charged ligands to the receptors attached uniformly to the graphene surface results in the modulation of the charge carrier density of the transducer channel, i.e., graphene, through electric field effect. This change in charge carrier density modulates the conductance of the channel, and hence the drain-to-source current (Δ I_ds) of the receiver. Therefore, the alteration in Δ I_ds under constant drain-to-source bias (V_ds) becomes a function of the number of bound ligands (which is equal to the number of bound receptors N_b(t)) and the electrical charge of the bound ligands. Therefore, the bound ligands can be considered functionally equivalent to the gate of the transistor.
In conventional FETs, the effect of gate potential on Δ I_ds is quantified through the transconductance of the transistor, denoted by G_m. Likewise, the impact of ligands bound to receptors on Δ I_ds can be measured through transconductance. Therefore, transconductance plays a vital role in shaping the input-output relationship of the MC receiver, and the frequency-domain representation of the transconductance, denoted as G_m(f), becomes a key part of the transfer function of the MC receiver, referred to as H_t(f). Nevertheless, there is an additional component that contributes to H_t(f), which will be further explained.
To obtain the G_m(f), we can use a small-signal model, which is an AC equivalent circuit that approximates the nonlinear behavior of the device with linear elements. In this study, we build on the small-signal model developed in <cit.> for graphene solution-gated FETs, used as a neural interface, to obtain the input-output relation in frequency-domain with the input being the time-varying number of bound receptors and the output being the drain-to-source current. Fig. <ref>(a) presents the schematic of the MC receiver combined with the equivalent circuit to depict physical origin of each element, and Fig. <ref>(b) demonstrates the small-signal model of the MC receiver.
We start modeling the MC receiver by investigating solid-liquid interface behavior. The interface of a charged surface and an electrolyte is commonly referred to as an electrical double layer (EDL) <cit.>. The electrons on the charged surface and the ions on the electrolyte are separated by a single layer of solvent molecules that stick to the charged surface and act as a dielectric in a conventional capacitor. Hence, EDL properties are generally modeled as a capacitor in the literature <cit.>. In this model, however, the graphene-electrolyte interface is described as a constant phase element (CPE) (i.e., CPE_g-e) rather than an ideal capacitor to precisely characterize the response of the EDL in graphene bioFETs. The CPE behavior of the graphene-electrolyte interface stems from the presence of charged impurities on the substrate and structural imperfections within the graphene lattice, potentially resulting in a non-uniform local density of states (DOS) <cit.>.
The impedance of a CPE can be written as follows <cit.>:
Z=1/Q_0(j2π f)^α,
where Q_0 is the admittance at f=1/2π Hz and α is a parameter that determines the phase angle. The values of both Q_0 and α depend on the applied voltage (which results from the bound charged ligands) and reflect the properties of the graphene-electrolyte interface. A CPE with α=1 behaves like an ideal capacitor, while a CPE with α=0 behaves like a pure resistor. A CPE with 0<α<1 represents an imperfect capacitor that has a non-constant capacitance value. The capacitance of a CPE (i.e., C_CPE) can be calculated by equating the imaginary part of the impedance of CPE to the impedance of an ideal capacitor, as proposed by Hsu et al. <cit.>. This approach yields the following expression for the capacitance of a CPE:
C_CPE=Q_0/(2π f) ^1-αexp(jπ/2(α -1)).
The bio-recognition layer can be modeled as a charged capacitor according to Xu et al. <cit.>. This represents the double-layer capacitance between a single ligand and electrolyte. In this model, the ligand-electrolyte interface is described as a CPE, which is denoted as CPE_l-e, with α=1 to mimic the behavior of an ideal capacitor and ensure notational consistency within the overall model.
When charged ligands bind to receptors, they generate a small signal variation in the gate potential. This variation is transduced into a voltage at the graphene-electrolyte interface (V_int).
A current source element V_intG_m(f) is used to model the conversion of AC signals at the gate into AC signals in the drain current (I_ds), where G_m(f) is the transconductance of the bioFET. To account for the DC current flowing through the graphene bioFET caused by the reference voltage (V_ref), a resistive element R_ds–DC is employed in the model. However, during small signal analysis, when V_ref is set to zero, the R_ds-DC is removed from the small signal model depicted in Fig <ref>(b). To account for parasitic capacitances in the device that arise as a result of the coupling between electrolyte and the contact metals through the insulating layer, another CPE, CPE_par, is included in parallel with CPE_g–e. As it will be revealed, this CPE affects the high-frequency response of the bioFET. Using this equivalent circuit, we can obtain the frequency-domain representation of transconductance as follows <cit.>:
G_m(f)=dI_ds/dV_int|_v_ds + G_m,eff.
The derivative term on the RHS of (<ref>) represents the intrinsic transconductance, which is the change in the drain current with respect to the interface potential. The intrinsic transconductance depends on the interface capacitance between the graphene and the electrolyte (CPE_g-e), which reflects the charge accumulation at the interface. This relationship is given by
dI_ds/dV_int|_v_ds = V_dsw_g/l_gμ_g C_CPE_g-e,
where w_g and l_g represent the width and length of the graphene transduction channel, respectively, and μ_g denotes the charge carrier mobility of graphene <cit.>. The interface capacitance (i.e., C_CPE_g-e) can be obtained using (<ref>) to have a frequency-dependent relation for the intrinsic transconductance.
An additional term, G_m,eff, contributes positively to the gain of the transduction process at high frequencies. The interface capacitances C_CPE_g-e and C_CPE_par lead to a direct capacitive current between the gate and the graphene bioFET contacts, which is evenly distributed to the drain and source. This contribution, independent of field-effect coupling, can be regarded as an effective transconductance term. As shown in Fig. <ref>, which plots the magnitude of | G_m(f) | over a range of frequencies for MC receiver, the capacitive contribution to the drain current dominates the frequency response beyond a certain frequency threshold. This contribution can be expressed as <cit.>:
G_m,eff(f)=1/(2Z_CPE_g-e)+1/(2Z_CPE_par).
By explicitly incorporating the frequency dependence in (<ref>) for (<ref>), the second term in (<ref>) can be derived as follows:
G_m,eff(f) = 1/2(Q_g-e(2π f)^α _g-ee^jπ/2α _g-e
+Q_par(2π f)^α _pare^jπ/2α _par).
By combining (<ref>) with (<ref>), and (<ref>), the frequency-dependent transconductance of a MC receiver can be expressed as:
G_m(f) = ± V_dsw_g/l_gμ_gQ_g-e/(2π f)^1-α_g-ee^jπ/2(α_g-e-1)
+1/2(Q_g-e(2π f)^α _g-ee^jπ/2α _g-e+Q_par(2π f)^α _pare^jπ/2α _par).
The equation above uses the ± sign, which is positive in the electron conduction regime, and negative in the hole conduction regime of the bioFET. In this study, we focused on the hole conduction regime when plotting | G_m(f) | and conducting the simulations. In Fig. <ref>, two distinct response regimes can be identified: (i) a CPE dominant regime (up to 1 kHz), and (ii) a Z_CPE current regime where G_m increases due to capacitive currents (above 1 kHz).
To obtain the transfer function of the bioFET-based MC receiver, i.e., H_t(f), in addition to G_m(f), we need to derive the potential created by a single ligand on the graphene surface (V_int). As it will be revealed, by including V_int(f) in the H_t(f), we will be able to derive the spectral density of the output current, i.e., I_m(f), through end-to-end transfer function. The effective charge on the graphene surface resulting from binding of each ligand to the receptor is determined by the expression Q_m=q_effN_e^-, where N_e^- denotes the average number of free electrons per ligand. The mean effective charge, q_eff, represents the charge that a single electron of a ligand can generate on the graphene surface in the presence of ionic screening in the medium. The relationship is given by q_eff= q ×exp(-r/λ_D) where q is the elementary charge and r represents the average distance between the ligand electrons in the bound state and the surface of the transducer. It is assumed that this average distance is equivalent to the average length of the surface receptor in the bound state <cit.>. The Debye length, λ_D, characterizes the ionic strength of the medium, and is given by λ_D=√((ϵ_Mk_BT)/(2 N_Aq^2c_ion)), where ϵ_M is the dielectric permittivity of the medium, k_B is Boltzmann’s constant, T is the temperature, N_A is Avogadro’s number, and c_ion is the ionic concentration of the medium <cit.>. Finally, the interface potential generated by the charge accumulated on the surface by a single ligand is as follows <cit.>:
V_int(f)= Q_m/C_CPE_eq,
where C_CPE_eq is the equivalent capacitance of the transducer, which is comprised of a parallel combination of CPE_l-e, CPE_g-e, and CPE_par connected in series with another parallel pair of CPE_g-e and CPE_par as shown in Fig. <ref>(b). This can be expressed as:
C_CPE_eq= (1/(C_CPE_l-e+C_CPE_g-e+C_CPE_par)
+ 1/(C_CPE_g-e+C_CPE_par))^-1,
where frequency-dependent relation of all C_CPE terms can be obtained by utilizing (<ref>). Therefore, the transfer function of the transduction process in a graphene bioFET-based MC receiver, i.e., H_t(f), can be written by using (<ref>) and (<ref>) as
H_t(f) = V_int(f) G_m(f).
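The transduction chain above can be assembled numerically as follows. This is a sketch under assumed device parameters: the CPE admittances and phase exponents (Q and α values), the number of free electrons per ligand, and the bias and geometry numbers are placeholders, while the receptor length (2 nm), mobility (200 cm^2/Vs) and ionic strength (0.5 mM) follow the values quoted in the numerical-results section. The hole-conduction sign convention of the transconductance is used, as in the text.

```python
import numpy as np

# --- Assumed device parameters (placeholders for the fitted values) ---------
Q_ge, a_ge   = 1e-4, 0.9      # graphene-electrolyte CPE
Q_par, a_par = 1e-7, 0.8      # parasitic CPE
Q_le, a_le   = 1e-8, 1.0      # ligand-electrolyte CPE (ideal capacitor, alpha = 1)
V_ds   = 50e-3                # drain-source bias (V), assumed
wg_lg  = 2.0                  # graphene aspect ratio w_g / l_g, assumed
mu_g   = 200e-4               # carrier mobility, 200 cm^2/Vs as quoted in the text
N_e    = 3                    # average free electrons per ligand, assumed
r_rec  = 2e-9                 # receptor length (m), 2 nm as quoted in the text
q, k_B, N_A = 1.602e-19, 1.381e-23, 6.022e23
eps_M, T, c_ion = 80 * 8.854e-12, 298.0, 0.5      # permittivity, temperature, 0.5 mM in mol/m^3
lam_D = np.sqrt(eps_M * k_B * T / (2.0 * N_A * q**2 * c_ion))   # Debye length

def C_cpe(f, Q, alpha):
    """Equivalent capacitance of a constant phase element."""
    w = 2.0 * np.pi * np.asarray(f, dtype=float)
    return Q / w**(1.0 - alpha) * np.exp(1j * np.pi / 2.0 * (alpha - 1.0))

def G_m(f):
    """Frequency-dependent transconductance, hole-conduction regime (minus sign)."""
    w = 2.0 * np.pi * np.asarray(f, dtype=float)
    intrinsic = V_ds * wg_lg * mu_g * C_cpe(f, Q_ge, a_ge)
    capacitive = 0.5 * (Q_ge * w**a_ge * np.exp(1j * np.pi / 2 * a_ge)
                        + Q_par * w**a_par * np.exp(1j * np.pi / 2 * a_par))
    return -intrinsic + capacitive

def V_int(f):
    """Interface potential generated by a single bound ligand."""
    Q_m = q * np.exp(-r_rec / lam_D) * N_e            # screened effective charge
    C_eq = 1.0 / (1.0 / (C_cpe(f, Q_le, a_le) + C_cpe(f, Q_ge, a_ge) + C_cpe(f, Q_par, a_par))
                  + 1.0 / (C_cpe(f, Q_ge, a_ge) + C_cpe(f, Q_par, a_par)))
    return Q_m / C_eq

def H_t(f):
    """Transfer function of the graphene bioFET-based receiver: V_int(f) * G_m(f)."""
    return V_int(f) * G_m(f)

if __name__ == "__main__":
    f = np.logspace(0, 5, 6)       # 1 Hz ... 100 kHz
    print("Debye length (nm):", lam_D * 1e9)
    print("|G_m(f)|:", np.abs(G_m(f)))
    print("|H_t(f)|:", np.abs(H_t(f)))
```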
§.§ End-to-End Transfer Function and Output Current Spectral density
The end-to-end transfer function of a microfluidic MC channel with graphene bioFET-based receiver can be expressed using Equations (<ref>), (<ref>), and (<ref>) as follows
H(f) = H_p(f) × H_lr(f) × H_t(f)
= V_int(f) G_m(f) (k_+ N_r/k_- + j 2 π f) e^-((2π f)^2D/u^3+j2π f/u)x_r.
Spectral density of the output current can be obtained by using the end-to-end transfer function and the spectral density of the input concentration signal as:
I_m(f) = H(f) ×Φ_in(f).
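At the code level, the end-to-end response is just the product of the three sub-models; a small helper like the one below, taking the three transfer functions from the previous sketches (or any other callables with the same signature), returns the output current spectrum of the last two equations.

```python
import numpy as np

def output_current_spectrum(f, h_p, h_lr, h_t, phi_in):
    """I_m(f) = H_p(f) * H_lr(f) * H_t(f) * Phi_in(f).

    h_p, h_lr, h_t and phi_in can be the callables defined in the previous
    sketches (or any other functions with the same signature)."""
    f = np.asarray(f, dtype=float)
    return h_p(f) * h_lr(f) * h_t(f) * phi_in(f)
```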
§ NUMERICAL RESULTS
In this section, we present the numerical results obtained using the developed analytical frequency-domain model, which is validated through particle-based simulations under various settings. The default values for the parameters used in the analyses are provided in Table <ref>. The admittance and phase angle values for CPE_g-e and CPE_par are extracted from the experimentally fitted data in <cit.>, conducted in an electrolyte medium with an ionic strength of 0.5 mM. We considered the same ionic strength (c_ion= 0.5 mM). As for the admittance of CPE_l-e, it is assigned considering the fact that the area of the double-layer interface between ligands and electrolyte is significantly smaller compared to the double-layer surface at the graphene-electrolyte interface. Consequently, based on the parallel plate capacitor formula C=εA/d, where ε is permittivity, and d is the distance between the surface layers (a single layer of molecules in this case), it is evident that the capacitive behavior of CPE_l-e is significantly weaker than that of CPE_g-e, since ε and d remain the same for both interfaces while A is much smaller. We set μ_g=200 cm^2/Vs as reported in <cit.>. Aptamers are utilized as the receptors, and their default length is defined as 2 nm <cit.>. Binding and unbinding rates, k_+ and k_-, are set considering the assumption of (<ref>) and accepted values in the MC literature <cit.>. We consider the microfluidic channel with a cross-sectional height of h_ch= 3 μm, a width of w_ch= 3 μm, and a length of l_ch= 200 μm, resulting in a laminar and steady flow. The simulations were performed using Smoldyn, a particle-based spatial stochastic simulation framework that offers high spatiotemporal resolution by simulating each molecule of interest individually <cit.>. This approach captures the inherent stochasticity of molecular transport and reactions and provides nanometer-scale spatial resolution.
The simulation setup consisted of a straight microfluidic channel with a rectangular cross-section, as shown in Fig. <ref>. The receptor molecules were immobilized at the channel bed, representing the 2d MC receiver. An input rectangular pulse signal composed of ligands was introduced at the inlet of the channel as shown in Fig. <ref>(a). These ligands propagated towards the receptors through convection and diffusion as depicted in Fig. <ref>(b). A fraction of the ligands randomly bound to the receptor molecules for varying durations, depending on their kinetic interaction rates. Subsequently, they unbound and continued their propagation until they exited the channel at the outlet, as demonstrated in Fig. <ref>(c).
To validate the model in both time and frequency domains, we evaluated the transfer function of the propagation channel, the transfer function of the LR binding process, and the end-to-end frequency-domain model under varying system parameters. This evaluation was conducted using both analytical expressions and simulation results. The particle-based simulation does not incorporate the transfer function of the MC receiver. Therefore, the numerical results for H_t(f) are solely obtained using analytical expressions. Moreover, we calculated the sampling frequency utilizing a numerical method for varying system parameters.
§.§ Propagation Channel
§.§.§ Effect of Varying Pulse Width
The first analysis investigates the impact of varying pulse width, T_p, a critical parameter commonly employed in signal generation and modulation schemes, such as pulse width modulation (PWM) <cit.>. The results of this analysis are presented in Fig. <ref>. As expected, an increase in pulse width leads to a higher concentration value in time domain, as shown in Fig. <ref>(a). In the frequency domain (Fig. <ref>(b)), a higher amplitude is observed in the spectral density of the received concentration, Φ_r(f), as the pulse width increases. Moreover, the cutoff frequency decreases with the increasing pulse width, which is consistent with the expectations based on Equations (<ref>) and (<ref>). The analytical model exhibits a high degree of agreement with the simulation results. It is important to note that as pulse width increases, the likelihood of inter-symbol interference also rises, leading to potential challenges in the signal recovery process.
§.§.§ Effect of Varying Diffusion Coefficient
We also analyze the impact of varying diffusion coefficient of ligands, D, on the response of the MC system. The diffusion coefficient is a fundamental parameter in MC systems, as it determines the rate at which molecules disperse through the medium. Molecules with a higher diffusion coefficient disperse more, resulting in a broader received signal width. Additionally, the peak received concentration decreases due to the higher dispersion, as shown in Fig. <ref>(a). On the other hand, in the frequency domain, increasing the diffusion coefficient is reflected in a slight decrease in the cutoff frequency of the spectral density of the received concentration, Φ_r(f), for both analytical and simulation results, as demonstrated in Fig. <ref>(b).
§.§ Ligand-Receptor Binding Process
§.§.§ Effect of Varying Binding Rate
We investigate the effect of varying binding rates, k_+, on the time-varying number of bound receptors in both time and frequency domains. Fig. <ref>(a) demonstrates that an increase in the binding rate directly corresponds to an increased number of bound receptors, N_b(t), as molecules with higher binding rate exhibit a higher propensity to bind to the receptors when they are in close proximity of each other. Similarly, as expected from Equation (<ref>), the frequency domain analysis reveals a higher amplitude in the spectral density of the number of bound receptors, N_b(f), when binding rates are increased, as shown in Fig. <ref>(b).
§.§.§ Effect of Varying Unbinding Rate
We also investigate the impact of varying unbinding rates, k_-, on the number of bound receptors. Contrary to the effect of binding rates, increasing the unbinding rate leads to a decrease in the number of bound receptors in the time-domain, N_b(t), as shown in Fig. <ref>(a). Molecules with a higher unbinding rate have shorter bound-state durations. In the frequency domain, as shown in Fig. <ref>(b), the unbinding rate exhibits an inverse relationship with the spectral density, as described by (<ref>). Consequently, a higher unbinding rate results in a lower amplitude in the spectral density of bound receptors, N_b(f), a finding supported by both simulation and analytical results.
§.§ End-to-End Model
§.§.§ Effect of Varying Pulse Width
To evaluate the end-to-end model's accuracy and investigate the impact of varying pulse widths, T_p, we analyze the spectral density of output current, I_m(f), for three pulse signals with different pulse widths but identical amplitudes, i.e., concentrations, as shown in Fig. <ref>(a). As predicted in Section <ref>, an increase in pulse width corresponds to a higher amplitude in I_m(f). This phenomenon occurs because a wider ligand pulse results in a higher concentration of ligands in the vicinity of the receiver's receptors. This, in turn, increases the probability of binding to a receptor before the already bound ones dissociate, resulting in a higher number of observed bound receptors, i.e., amplitude. The analytical expression for the output current spectrum, represented by (<ref>) and incorporating the transfer function of the three main processes and the input signal concentration, demonstrates high accuracy when compared to the simulation results.
§.§.§ Effect of Varying Ligand Concentration
We also evaluate the impact of varying ligand concentrations, C_m, on the end-to-end model by performing simulations with input concentration pulses with different concentrations but identical pulse widths. By analyzing the spectral density of the resulting output current, I_m(f), we observe that the amplitude of I_m(f) increases as concentration increases, as depicted in Fig. <ref>(b). The simulation results are strongly aligned with the analytical results obtained from (<ref>).
§.§ Sampling Frequency
To reconstruct the input concentration signal, ϕ_in(t), from the sampled sequence of the number of bound receptors, it is essential to employ an appropriate sampling frequency. Considering that the input concentration spectral density, the end-to-end transfer function, and consequently the resulting output current spectral density all display a Lorentzian-shaped profile, it is necessary to determine the cutoff frequency below which most of the spectrum energy is contained. The energy of the output current spectral density within a bandwidth ranging from 0 Hz to the cutoff frequency can be quantified as follows <cit.>:
∫^f_c_0 |H(f) Φ_in(f)|^2 df =η∫^+ ∞_0 |H(f) Φ_in(f)|^2 df,
where f_c is the cutoff frequency and η is the fraction of the total spectrum energy contained within the interval (0,f_c). In this study, we consider η = 0.99, which indicates that 99% of the spectral power is contained within the specified bandwidth. Once the cutoff frequency has been determined, the sampling frequency can be obtained using the Nyquist–Shannon theorem, which states that in order to achieve a reconstruction that captures all the information, the sampling frequency should be greater than twice the bandwidth:
2f_c≤ f_s≤∞.
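In practice, the cutoff can be located numerically from any sampled output spectrum by cumulative integration of its energy; the sketch below does this for a toy first-order spectrum standing in for H(f)Φ_in(f), with η = 0.99 as above.

```python
import numpy as np

def sampling_frequency(f, spectrum, eta=0.99):
    """Return (f_c, f_s): the smallest cutoff containing a fraction eta of the
    energy of |spectrum|^2 on the grid f, and the Nyquist rate f_s = 2*f_c."""
    energy = np.abs(np.asarray(spectrum)) ** 2
    f = np.asarray(f, dtype=float)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (energy[1:] + energy[:-1]) * np.diff(f))))
    f_c = f[np.searchsorted(cum, eta * cum[-1])]
    return f_c, 2.0 * f_c

if __name__ == "__main__":
    # Toy stand-in for H(f)*Phi_in(f): a first-order spectrum with a 1 Hz corner.
    f = np.linspace(0.0, 200.0, 20001)
    toy = 1.0 / (1.0 + 1j * f)
    f_c, f_s = sampling_frequency(f, toy)
    print(f"99% energy cutoff f_c = {f_c:.1f} Hz  ->  sampling frequency f_s >= {f_s:.1f} Hz")
```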
Fig. <ref> shows the sampling frequency obtained from (<ref>), which is a function of pulse width T_p, diffusion coefficient D, and flow velocity u. Fig. <ref>(a) demonstrates that increasing the pulse width results in a lower sampling frequency required to reconstruct the original continuous signal. As shown in Fig. <ref>(b), the spectral density of a wider pulse signal has a lower cutoff frequency. Therefore, it is expected that increasing the pulse width would allow a lower sampling frequency.
Fig. <ref>(b) indicates that the sampling frequency decreases as the diffusion coefficient increases. This can be attributed to the reduction in the cutoff frequency while increasing the diffusion coefficient, as shown in Fig. <ref>(b). Therefore, the decrease in sampling frequency aligns with the expectations set by the Nyquist-Shannon theorem.
Finally, Fig. <ref>(c) shows the impact of increasing flow velocity on the sampling frequency. As the flow velocity increases, the signals traverse the receiver position more quickly, which reduces the time window available for capturing an adequate number of samples from the propagating signal. Consequently, to guarantee the collection of a sufficient number of samples, it is necessary to raise the sampling frequency in response to an increase in flow velocity.
§ CONCLUSION
In this study, we introduced a comprehensive end-to-end frequency-domain model for a practical microfluidic MC system with a graphene bioFET-based receiver. The model provides valuable insights into the dispersion and distortion of received signals, and has the potential to inform the design of new frequency-domain MC techniques, such as modulation and detection, matched filters, and interference-free receiver architectures. The end-to-end transfer function, denoted as H(f), incorporates the input-output relationships of three sequential modules: the microfluidic propagation channel, the LR binding process, and the graphene bioFET-based receiver. The accuracy and reliability of the developed model were verified through particle-based spatial stochastic simulations, which demonstrated a high degree of agreement with the analytical expressions.
|
http://arxiv.org/abs/2307.04397v1 | 20230710075924 | On Estimating Derivatives of Input Signals in Biochemistry | [
"Mathieu Hemery",
"François Fages"
] | q-bio.QM | [
"q-bio.QM",
"q-bio.MN"
] |
Inria Saclay, Lifeware project-team, Palaiseau, France
[email protected] [email protected]
On Estimating Derivatives of Input Signals in Biochemistry
Mathieu Hemery and François Fages
July 8, 2023
==========================================================
The online estimation of the derivative of an input signal is widespread in control
theory and engineering. In the realm of chemical reaction networks (CRN), this raises
however a number of specific issues on the different ways to achieve it. A CRN pattern
for implementing a derivative block has already been proposed for the PID control of
biochemical processes, and proved correct using Tikhonov's limit theorem. In this
paper, we give a detailed mathematical analysis of that CRN, thus clarifying the
computed quantity and quantifying the error done as a function of the reaction kinetic
parameters. In a synthetic biology perspective, we show how this can be used to design error correcting terms
to compute online functions involving derivatives with CRNs.
In the systems biology perspective, we give the list of models in BioModels containing (in the sense of subgraph
epimorphisms) the core derivative CRN,
most of which being models of oscillators and control systems in the cell,
and discuss in detail two such examples: one model of the circadian clock and one model of a bistable switch.
§ INTRODUCTION
Sensing the presence of molecular compounds in a cell compartment is a necessary task of
living cells to maintain themselves in their environment, and achieve high-level functions
as the result of low-level processes of basic biomolecular interactions. The formalism of
chemical reaction networks (CRN) <cit.> is both a useful abstraction to
describe such complex systems in the perspective of systems biology
<cit.>, and a possible molecular programming language in the perspective
of synthetic biology <cit.>.
Sensing the concentration levels of molecular compounds has been well-studied in the
domain of signal transduction networks. For instance, the ubiquitous CRN structure of
MAPK signaling networks has been shown to provide a way to implement analog-digital
converters in our cells, by transforming a continuous input signal, such as the
concentration of an external hormone activating membrane receptors, into an almost
all-or-nothing output signal according to some threshold value of the input, i.e. using a
stiff sigmoid as dose-response input-output function <cit.>.
The analysis of input/output functions fits well with the computational theory of CRNs. In particular, the
Turing-completeness result shown in <cit.> for the interpretation by Ordinary Differential Equations (ODE) of CRNs,
possibly restricted to elementary CRNs using mass-action law kinetics and
at most bimolecular reactions, demonstrates the generality of this
approach to biomolecular programming. Furthermore, it comes with an algorithm to automatically generate a finite CRN for
implementing any computable real function. Such a compiler is implemented
in our CRN modeling software BIOCHAM <cit.> in several forms, including a
theoretically more limited but practically more interesting framework for robust online computation <cit.>.
Sensing the derivative of an input molecular concentration is nevertheless beyond the scope
of this computational paradigm since it assumes that the input molecular concentrations are stabilized
at some fixed values which makes no sense for computing the derivative.
Furthermore, it is well-known that the derivative of a computable real function is not
necessarily computable <cit.>. We must thus content ourselves with
estimating the derivative of an input with some error, instead of
computing it with arbitrary precision as computability theory requires.
In control theory and engineering, online estimations of input signal derivatives are
used in many places. Proportional Integral Derivative (PID) controllers adjust a target
variable to some desired value by monitoring three components: the error, that is the
difference between the current value and the target, its
integral over a past time slice, and its current derivative. The derivative term can
improve the performance of the controller by avoiding overshoots and solving some
problematic cases of instability.
Following early work on the General Purpose Analog Computer (GPAC) <cit.>,
the integral terms can be implemented with CRNs using simple
catalytic synthesis reactions such as A → A+B for integrating A over time,
indeed B(T)=∫_O^T A(t) dt. Difference terms can be implemented using the
annihilation reaction A_+ + A_-→∅ which is also used in
<cit.> to encode negative values by the difference of two molecular
concentrations, i.e. dual-rail encoding.
This is at the basis of the CRN implementations of, for instance, antithetic PI
controllers presented in <cit.>.
For the CRN implementation of PID controllers, to the best of our knowledge three different CRN templates have been
proposed to estimate derivative terms. The first one by Chevalier & al.
<cit.> is inspired by bacteria's chemotaxis, but relies on strong restrictions upon
the parameters and the structure of the input function making it apparently limited in
scope.
A second one proposed by Alexis et al.
<cit.> uses tools from signal theory
to design a derivative circuit with offset coding of negative values
and to provide analytic expressions for its response.
The third one developed by Whitby et al. <cit.> is practically similar in its
functioning to the one we study here, differing only on minor implementation details,
and proven correct through Tikhonov's limit theorem.
This result ensures that when
the appropriate kinetic rates tend to infinity, the output is precisely the derivative of
the input.
In this paper, we give a detailed mathematical analysis of that third derivative CRN and quantify the
error made as a function of the reaction kinetic parameters, by providing a first-order
correction term.
We illustrate the precision of this analysis on several examples,
and show how this estimation of the derivative can be actively used with error-correcting terms to compute elementary mathematical
functions online.
Furthermore, we compare our core derivative CRN to the CRN models in the curated part of <BioModels.net> model repository.
For this, we use the theory of subgraph epimorphisms (SEPI)
<cit.> and its implementation in BIOCHAM <cit.>,
to identify the models in BioModels which contain the derivative CRN structure.
We discuss in some detail the SEPIs found on two such models:
one of the smallest eukaryote circadian clock models <cit.>,
and a model of the bistable switch at the restriction point of the cell cycle <cit.>.
The rest of the article is organized as follow. In Section <ref>, we provide some
preliminaries on CRNs and their interpretation by ODEs. We present the core differentiation CRN in
Section <ref>, in terms of both of some of its different possible biological
interpretations, and of its mathematical properties.
Section <ref> develops the mathematical analysis to bound the error done by that core CRN,
and give in Section <ref> some examples to test the validity of our estimation
and the possibility to introduce error-correcting terms.
Section <ref> is then devoted to the search of that derivative CRN pattern
in BioModels repository and the analysis of those matching in two cases.
Finally, we conclude on the perspectives of our approach to both CRN design at an abstract mathematical level,
and comparison to natural CRNs to help understanding their functions.
§ PRELIMINARIES ON CRNS
§.§ Reactions and Equations
The CRN formalism allows us to represent the molecular interactions that occur on a finite set
of molecular compounds or species, {X_i}_i ∈ 1 … n, through a finite set of
formal (bio)chemical reactions, without prejudging their interpretation
in the differential, stochastic, Petri Net and Boolean semantics hierarchy <cit.>.
Each reaction is a triplet
(R,P,f), also written R P,
where R and P are multisets of respectively reactant and product species in
{X_i}, and f:_+^n ↦_+ is a kinetic rate function of the reactant species.
A CRN is thus entirely described by the two sets of n species and m reactions:
{X_i},{R_s P_s}.
The differential semantics of a CRN associates positive real valued molecular concentrations,
also noted X_i by abuse of notation,
and the following ODEs which define the time evolution of those concentrations:
d X_i/dt = ∑_s ∈ S (P_s(X_i) - R_s(X_i)) f_s(X),
where P_s(X_i) (resp. R_s(X_i)) denotes the multiplicity (stoichiometry) of X_i in the multiset of products
(resp. reactants) of reaction s.
In the case of a mass action law kinetics,
the rate function is a monomial, f_s = k_s ∏_x ∈ R_s x,
composed of the product of the concentrations of the reactants by some positive constant k_s.
If all reactions have mass action law kinetics, we write the rate constant in place of the rate function R P,
and the differential semantics of the CRN is defined by a
Polynomial Ordinary Differential Equation (PODE).
From the point of view of the computational theory of CRNs, there is no loss of generality
to restrict ourselves to elementary CRNs composed of at most bimolecular reactions with
mass action law kinetics. Indeed, <cit.> shows that any computable real
functions (in the sense of computable analysis, i.e. with arbitrary finite precision by a
Turing machine), can be computed by such a CRN, using the dual-rail encoding of real
values by the difference of molecular concentrations, x=X_+-X_-. While our compiler
ensures that the quantity X_+-X_- behaves properly, it is also important to degrade
both of them with an annihilation reaction, X_+ + X_- ∅,
to avoid a spurious increase of their concentration.
Those annihilation reactions are supposed to be faster than the other reactions of the CRN.
The first example given in <cit.> showed the compilation of the cosine function of
time, y=cos(t) in the following CRN:
A_p → A_p+y_p
A_m → A_m+y_m A_m(0)=0, A_p(0)=0
y_m → A_p+y_m
y_p → A_m+y_p y_m(0)=0, y_p(0)=1
y_m+y_p ∅
A_m+A_p ∅
The last two reactions are necessary to avoid an exponential increase of the species concentration.
The associated PODE is:
d(A_m)/dt = y_p-fast*A_m*A_p A_m(0) =0
d(A_p)/dt = y_m-fast*A_m*A_p A_p(0) =0
d(y_m)/dt = A_m-fast*y_m*y_p y_m(0) =0
d(y_p)/dt = A_p-fast*y_m*y_p y_p(0) =1
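As a quick numerical sanity check of this compiled CRN (a sketch, not part of the original text), one can integrate the PODE above with a standard ODE solver and verify that the dual-rail output y_p - y_m follows cos(t); the value of the fast annihilation rate below is an arbitrary assumption, since the dual-rail differences obey the harmonic oscillator exactly for any value.

```python
import numpy as np
from scipy.integrate import solve_ivp

FAST = 100.0      # assumed value of the fast annihilation rate constant

def cosine_crn(t, z):
    A_m, A_p, y_m, y_p = z
    return [y_p - FAST * A_m * A_p,      # d(A_m)/dt
            y_m - FAST * A_m * A_p,      # d(A_p)/dt
            A_m - FAST * y_m * y_p,      # d(y_m)/dt
            A_p - FAST * y_m * y_p]      # d(y_p)/dt

if __name__ == "__main__":
    t_eval = np.linspace(0.0, 10.0, 2001)
    sol = solve_ivp(cosine_crn, (0.0, 10.0), [0.0, 0.0, 0.0, 1.0],
                    t_eval=t_eval, method="LSODA", rtol=1e-8, atol=1e-10)
    y = sol.y[3] - sol.y[2]                         # dual-rail output y = y_p - y_m
    print("max |y(t) - cos(t)|:", np.max(np.abs(y - np.cos(t_eval))))
    # The difference y_p - y_m obeys d^2y/dt^2 = -y exactly (the fast terms cancel),
    # so the printed residual only reflects the integration tolerance.
```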
§.§ CRN Computational Frameworks
The notions of CRN computation proposed in <cit.> and <cit.>
for computing input/output functions, however, do not provide
a suitable framework for computing derivative functions.
Both rely on a computation at
the limit, meaning that the output converges to the result of the computation whenever the CRN is either properly
initialized <cit.>, or the inputs are stable for a sufficient period of
time <cit.>. To compute a derivative, we cannot ask that the input stay fixed for
any period of time as this would imply a null derivative. We want the output to follow
« at run time » the derivative of the input.
Our question is thus as follows. Given an input species X following a time course
imposed by the environment X(t), is it possible to perform an online computation such
that we can approximate the derivative dX/dt in the concentrations of two output
species using a dual-rail encoding?
The idea is to approximate the left derivative by getting back to its very mathematical
definition:
dX/dt(t) = lim_ϵ→ 0^+X(t)-X(t-ϵ)/ϵ,
but how can we measure X(t-ϵ)?
§ DIFFERENTIATION CRN
§.§ Biological intuition using a membrane
One biological intuition we may have to measure a value in a previous time is to use a
membrane with a fast diffusive constant. Indeed, if we suppose that the input is the
outside species, the inside species equilibrates to follow the concentration of the
outside one (the input) but also suffers a lag due to the diffusion. Building upon
this simple trick leads to the CRN presented in Fig. <ref>.
As the derivative may be positive or negative, a dual-rail encoding is used for the derivative.
This CRN is
mainly equivalent to the derivative block proposed in <cit.> apart from the
fact that we suppose (for the sake of clarity) that the input stay positive and no dual-rail encoding is used for it.
In the case of a
dual-rail encoded input, the two species need to have the same permeability through
the membrane, otherwise the delay is not the same for the positive and
negative parts.
The delay is thus introduced through a membrane
under the assumption that the outside concentration is imposed by the environment. This
conveniently explains why the kinetic rates are the same for the two monomials in the
derivative of , but this is not mandatory.
Indeed two other settings can be used to construct such a CRN without relying on a
membrane. We could use a phosphorylation and a dephosphorylation reactions where
would be the phosphorylated species. Or we could, as in <cit.>, rely on a
catalytic production of by and a degradation reaction of . A drawback
of these two other implementations is that they need to be tuned to minimize the
difference between the rates of the two monomials in the derivative of . Otherwise
a proportional constant is introduced between and , and needs to be
corrected by adjusting the production rates of D_+ and D_-.
However, the membrane implementation also has its own drawback as it requires the reaction
→ + D_+ to occur through the membrane. We may think of a membrane
protein M that mediates this reaction (+ M → + M + D_+). Then, since
its concentration is constant, it can simply be wrap up in the kinetic constant of the reaction.
Which of these three implementations should be chosen may depend on the exact details of
the system to be built.
§.§ Core differentiation CRN
Our core differentiation CRN schematized in Fig. <ref>
is more precisely composed of the following 7 reactions:
+ D_+ + D_-
D_+ ∅
D_- ∅
D_+ + D_- ∅
The diffusion through the membrane is symmetrical with a constant k_diff, and both activations
should have the same rate constant k k_diff, while the degradation of the outputs should have a
rate k.
We make the assumption that the outside species is present in large
quantity so that its concentration is not affected by the dynamics of the CRN.
Under this assumption, the differential
semantics is then the same as the one of the differentiation CRN
proposed in <cit.>:
d/dt = k_diff ( - )
dD_+/dt = k k_diff - k D_+ - fast D_+ D_-
dD_-/dt = k k_diff - k D_- - fast D_+ D_-
The derivative is encoded as D = D_+ - D_- and hence obeys the equation
(using the two last lines of the previous equation):
dD/dt = dD_+/dt - dD_-/dt
= k k_diff ( - ) - k (D_+ - D_-)
dD/dt = k ( - /1/ - D )
In the next section, we prove that is equal to with a delay ϵ,
hence giving us our second time point X(t-ϵ),
up to the first order in
ϵ = 1/k_diff.
The fractional part of the last equation is thus precisely an
estimate of the derivative of as defined in Eq. <ref>, with a
finite value for ϵ.
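The behaviour of this estimate can be checked by direct numerical integration of the three differential equations above. The sketch below does this for the offset sine input used later in the paper; the variable names x_out and x_in stand for the outside and inside species (whose original notation is not reproduced here), and the numerical rate constants are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_DIFF, K, FAST = 10.0, 10.0, 1000.0      # assumed rate constants

def x_out(t):
    """Externally imposed input concentration (kept positive)."""
    return 1.0 + np.sin(t)

def derivative_crn(t, z):
    x_in, d_plus, d_minus = z
    return [K_DIFF * (x_out(t) - x_in),
            K * K_DIFF * x_out(t) - K * d_plus - FAST * d_plus * d_minus,
            K * K_DIFF * x_in     - K * d_minus - FAST * d_plus * d_minus]

if __name__ == "__main__":
    t_eval = np.linspace(0.0, 20.0, 4001)
    sol = solve_ivp(derivative_crn, (0.0, 20.0), [x_out(0.0), 0.0, 0.0],
                    t_eval=t_eval, method="LSODA", rtol=1e-8, atol=1e-10)
    d_est = sol.y[1] - sol.y[2]           # dual-rail derivative estimate D = D_+ - D_-
    mask = t_eval > 5.0                   # discard the initial transient
    err = np.max(np.abs(d_est - np.cos(t_eval))[mask])
    print("max |D(t) - dX/dt(t)| for t > 5:", err)
    # The residual is dominated by the combined lag 1/K_DIFF + 1/K of the estimate.
```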
It is also worth remarking that such derivative circuits can in principle be connected to
compute higher-order derivatives, with a dual-rail encoded input. It is
well known that such estimations of higher-order derivatives can be very sensitive to
noise and error, and are thus not reliable for precise computation but may be good enough
for biological purposes. We will see a biological example of this kind in
Section <ref> on a simple model of the circadian clock.
§ MATHEMATICAL ANALYSIS OF THE QUALITY OF THE ESTIMATION
Our first goal is to determine precisely the relation between and when the
later is enforced by the environment. Using the first line of Eq. <ref>, we
obtain by symbolic integration:
(t) = ∫_0^∞exp(- s) (t-s) ds,
where we can see that is the convolution of with a decreasing exponential.
This convolution is not without reminding the notion of evaluation in the theory of
distribution and has important properties of regularisation of the input function. In
particular, whatever the input function is, this ensures that the internal representation
is continuous and differentiable.
The interesting limit for us is when →∞, that is when ϵ =
1/→ 0. In this case, the exponential is neglectable except in a
neighbourhood of the current time and supposing that is infinitely
differentiable[We also explore in Figures <ref>D and
<ref>C what a non analyticity of imply for our model.],
we obtain by Taylor expansion:
(t) = ∫_0^∞exp(- s) ∑_n=0^∞(-s)^n/n!^(n)(t) ds
= ∑_n=0^∞/n!^(n)(t) ∫_0^∞ (-s)^n exp(- s)
ds
The integral may be evaluated separately using integration by parts and recursion:
I_n = ∫_0^∞ (-s)^n exp(- s) ds = -n ϵ I_n-1
= (-1)^n (ϵ)^n+1 n!
We thus have:
(t) = ∑_n=0^∞/n!^(n)(t) (-1)^n n! ϵ^n+1
= ∑_n (-ϵ)^n ^(n)(t)
= (t) - ϵ'(t) + ϵ^2 ”(t) + …
(t) = ( t - ϵ) + o(ϵ^2).
Using Taylor expansion once again in the last equation somehow formalizes
our intuition: the concentration of the internal species follows the time course
of the external one with a delay equal to the inverse of the diffusive constant k_diff.
This validates our formulation of the derivative.
Now, it is sufficient to remark that Eq. <ref> has exactly the same
form as the first line of Eq. <ref> that we just studied at length. Just
replace by the estimation of the left derivative, by the output D and
the rate constant k instead of . The delay approximation is thus also possible in
this step and, introducing the delay τ = 1/k, we immediately obtain a precise
expression for D:
D(t) = (t-τ) - (t-ϵ-τ)/ϵ+o(ϵ)+o(τ^2).
We can see this as the secant approximation of the derivative of with a step
size ϵ and a delay τ. Moreover we also know that the residual error on this
expression is of first order in ϵ and second order in τ.
It is well known in the field of numerical computation that the secant method provides a
rather poor approximation, but it has the benefit of being the simplest one, and thus
gives here a small-size derivative circuit.
In the hope of improving the precision, one could implement higher-order methods using
several "membranes" to access the value of the function on several time points
before performing the adapted computation.
Such added complexity would however also increase the delay
between the input and output function.
§ VALIDATION ON SIMPLE EXAMPLES
§.§ Verification of the delay-approximation
In this first subsection, we want to validate the approximation expressed by
Eq. <ref>. For this, we focus on the diffusion part of our CRN:
↔. We make numerical simulation for 2 different values of
ϵ and 2 different input functions: a sine wave and an absolute value signals.
The second allowing us to see how well the delay approximation works in presence of
non analyticity.
Fig. <ref> shows the response of in that different condition. In
panel A, the kinetic constant is very low so we expect our approximation to
fail. Indeed, one can see that in addition to having an important delay, the output is strongly
smoothed, this tends to average the variation of the input, bringing back to the
average value of the input. In panel B the diffusion constant is
increased by a factor 10. The delay approximation is now very good and we only expect an
error of order ϵ^2 = 10^-2 which can be checked with good accuracy on panel
C. Panel D shows a case of a
non-differentiable function in which an error of order ϵ = 0.1 is visible shortly
after the discontinuity and vanishes in a similar timescale.
§.§ Approximation of the derivative
Let us now check the behaviour of the derivative circuit. On Fig. <ref>,
we can see the response of our derivative circuit for a sine wave and an absolute value
input functions. In panels A and B we see that when the first and second order
derivatives of the input are smaller than the kinetic reaction rates, the delay
approximation gives a very good picture of the response. On a complementary point of view,
the panel C shows that in front of singularity, the system adapts after an
exponential transient phase with a characteristic time τ = 1/k.
§.§ Using signal derivatives for online computations
Our main motivation for analyzing the differentiation CRN is to
compute a function f of some unknown input signal, (t), online.
that is, given a function f, compute a function f((t))
Yet the differentiation CRN only
allows us to approximate the derivative of the input signal.
The idea is thus to implement the PODE:
dY/dt = f'((t)) d /dt, Y(0)=f(X(0))
and provide the
result online on a set of internal species Y(t) = Y_+ - Y_-.
This requires computing the function f' and estimating the derivative of the input.
Using the formalism developed in <cit.> we know that there
exists an elementary CRN (i.e. a quadratic PODE) computing f'() for any elementary function f, and we just
have shown that d /dt can be approximated by the differentiation CRN.
Therefore, in principle, any elementary function of input signals can be approximated online by a CRN.
As a toy example, let us consider the square function, d Y/dt = 2 (D_+ - D_-),
and as input, a sine wave offset to stay positive : (t) = 1 + sin(t).
The CRN generated by BIOCHAM according to these principles, to compute the square of the input online is:
, ,
+ D_+ D_+ ∅
+ D_- D_- ∅
+ D_+ + D_+ + Y_+ + D_- + D_- + Y_-
D_+ + D_- ∅
Y_+ + Y_- ∅
The first three lines implement the derivative circuit, the fourth line implements
the derivative of Y and the last line provides the dual-rail encoding.
The numerical simulation of this CRN is depicted in Fig. <ref>A.
One can see that while it effectively computes the square of the input, it also suffers
from a strong drift. To verify if this drift comes from the delay between the input and
the output, we can compute analytically the output of our network using our delayed
approximation of the derivative (see the full computation in the Appendix).
y(t) = ∫ 2 x(s) x'(s-τ) ds
≃(1+sin(t))^2 + τ t.
This is precisely the behaviour that can be seen on the time course of Fig. <ref>
A. After the integration of 20 time units, the offset is of order 2 which is
exactly what is predicted for a delay τ = 1/k = 0.1. Therefore,
while it is always possible to get rid of such errors by increasing ,
the identification of the cause of the drift gives us a potentially simpler path to eliminate it:
using a representation of the input that is itself delayed: ↔
X_delay, and using this delayed signal as the catalyst for the production of Y_+
and Y_- in the place of . This leads to the CRN given in the Appendix
(Eq. <ref>), for which numerical integration shows in
Fig. <ref>B that we have indeed got rid of the drift. Stated
otherwise, the correct implementation for online computation is given by:
dY/dt = f'((t-τ)) d/dt(t-τ),
where the delays have to be equal for the two pieces.
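A minimal ODE-level sketch of this corrected scheme is given below for the running example f(x) = x^2. The delayed copy of the input is produced by an extra first-order step whose rate is chosen (an assumption of this sketch) so that its lag matches the effective lag ϵ + τ of the derivative estimate; comparing the corrected and uncorrected variants shows that the linear drift disappears while only a bounded phase-lag error remains.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_DIFF = K = 10.0                       # assumed rates; eps = 1/K_DIFF, tau = 1/K
EPS, TAU = 1.0 / K_DIFF, 1.0 / K
K_DEL = 1.0 / (EPS + TAU)               # assumed: lag of the delayed copy matches the
                                        # effective lag of the derivative estimate

def x_out(t):
    return 1.0 + np.sin(t)

def square_online(t, z, corrected):
    x_in, x_del, d, y = z
    dx_in  = K_DIFF * (x_out(t) - x_in)            # membrane-delayed internal copy
    dx_del = K_DEL * (x_out(t) - x_del)            # extra delayed copy of the input
    dd     = K * (K_DIFF * (x_out(t) - x_in) - d)  # dual-rail derivative estimate D
    gate   = x_del if corrected else x_out(t)      # catalyst used to produce Y
    dy     = 2.0 * gate * d                        # dY/dt = f'(x) dx/dt with f(x) = x^2
    return [dx_in, dx_del, dd, dy]

if __name__ == "__main__":
    z0 = [x_out(0.0), x_out(0.0), 1.0, x_out(0.0) ** 2]
    for corrected in (False, True):
        sol = solve_ivp(square_online, (0.0, 20.0), z0, args=(corrected,),
                        rtol=1e-9, atol=1e-12)
        offset = sol.y[3][-1] - x_out(20.0) ** 2
        print(f"corrected = {corrected}:  y(20) - x(20)^2 = {offset:+.2f}")
    # Without the delayed catalyst the offset grows roughly linearly in time (the
    # drift computed in the Appendix); with it, only a bounded phase-lag error remains.
```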
§ BIOLOGICAL EXAMPLES
§.§ BioModels repository
To explore the possibility that natural biochemical systems already implement a form or another of
the core differentiation CRN, one can try to scan the CRN models of the BioModels
repository <cit.>.
This can be automated with the general graph matching notion of Subgraph EPImorphism (SEPI)
introduced in <cit.> to compare CRN models
and identify model reduction relationships based on their graph structures.
SEPI generalizes the classical notion of subgraph isomorphism by introducing an operation of node merging in addition to node deletion.
Considering two bipartite graphs of species and reactions, there exists a SEPI from G_A
to G_B if there exists a sequence of mergings[A species (resp. reaction) node can
only be merged with another species (resp. reaction) node and the resulting node inherits of all the
incoming and outcoming edges of the two nodes.] and deletions of nodes in G_A
such that the resulting graph is isomorphic to G_B.
More precisely, we used the SEPI detection algorithm of BIOCHAM to scan the
curated models in Biomodels (after automatic rewriting with well-formed reactions <cit.>)
and check the existence of a SEPI from each model graph to the differentiation CRN graph.
Fig. <ref> shows that our small differentiation CRN with 4 species is frequently found in large models.
It is thus reasonable to restrict to models with no more than 10 species.
Table <ref> lists the models with no more than 10 species in the 700 first models of BioModels
that contain our differentiation CRN.
The predominance of models exhibiting oscillatory dynamics, and in particular circadian clock models is striking.
§.§ Circadian clock
The model of the eukaryote circadian clock proposed by Becker-Weimann et al.
<cit.> is among the smallest models of the circadian clock displaying a SEPI reduction
toward our differentiation CRN. Its influence graph is depicted in
Fig. <ref>A; we also display in red the first SEPI found by BIOCHAM,
and in green a second one obtained by enforcing the mapping from the PER/Cry
species inside the nucleus to the input of the differentiation CRN.
Interestingly, this model has the nucleus membrane separating the species
mapped to and the one mapped to in the second SEPI.
The oscillatory behavior of this model is shown in panel B.
Now, thinking about the mathematical insight that this relation provides, it is quite natural
for a CRN implementing an oscillator to evaluate its own derivative on the fly.
Actually, when looking at the natural symmetry of the model, we are inclined to think that this
CRN may actually be two interlocked CRNs of the derivative circuit, both computing
the derivative of the output of the other, as if a second order derivative circuit was
closed on itself.
This is something we could easily check by imposing restrictions on the
SEPI mapping. Enforcing the nucleus PerCRY protein to be mapped on gives us the
SEPI shown in green in Fig. <ref>A.
To validate the preservation of the function of the derivative CRN given by this SEPI,
we can verify that the quantities defined by summing the species
that are mapped together are effectively linked by the desired derivative relation. As
can be seen in Fig. <ref>B, the agreement is striking.
One can even note that the delay of the chemical derivative is the one predicted by our theory.
The case of Fig. <ref>C is more complex as this part of the model seems to compute the opposite of the derivative.
It is however worth noting that there is absolutely no degree
of freedom in our choice of the species used in Fig. <ref>B and
C that are entirely constrained by the SEPI given by BIOCHAM.
Taking both SEPI together we
see that Bmal1^nucleus_protein and
Bmal1^cytoplasm_mRNA play symmetrical roles, being the input and
derivative of the two displayed SEPI. Given that the second SEPI introduces a negative
sign, we may see this as:
Bmal1^cytoplasm_mRNA = d/dtBmal1^nucleus_protein
Bmal1^nucleus_protein = -d/dtBmal1^cytoplasm_mRNA
The solutions of this well-known equation are the sine and cosine functions, and this
perfectly fits the oscillatory behaviour of this CRN. To confirm this hypothesis, we check
for the presence of a SEPI from the clock model to the compiled cosine CRN presented in
Eq. <ref> which is effectively the case.
On the other hand, there
is no SEPI relation between the compiled cosine and the derivative circuit.
§.§ Bistable switch
The model of a bistable switch in the context of the restriction
point <cit.> displays a SEPI toward our derivative circuit. This model,
presented in Fig. <ref>A, study the Rb-E2F pathway as an example of
bistable switch where the presence of a (not modeled) growth factor activates the
MyC protein, starting the pathway until it reach the E2F factor that constitute the output
of the model. Yao & al. show that once E2F reachs a threshold, its activation becomes
self sustained hence the notion of switch.
The SEPI given by Biocham is worth of interest as it does not merge any species and only
three reactions into one leaving all the other either untouched or deleted, thus
indicating that the pattern of the derivative is already well present. Morevoer, MyC is
mapped to the input and E2F to one part of the output, reinforcing our intuition that the
discovered SEPI is closed from the natural functionning of the CRN.
To conform this, we run the simulation as provided by the models and display the
derivative of the MyC protein against a scaled difference of the D_+ and D_- species:
D = a RB - b E2F where a and b are positive constant adjusted so that D goes to
0 at final time and are of the same magnitude as d MyC/dt. (This gives a=6.3,
b=0.063.) Clearly, D is a delayed and smoothed version of the input derivative exactly
as our derivative device would provide.
§ CONCLUSION AND PERSPECTIVES
We have presented a mathematical analysis of the core differentiation CRN
introduced by Whitby et al. <cit.>. In particular, we have shown that what is
computed is an approximation of the left derivative given a small time in the past with a time
constant determined by the diffusion constant between the input and its internal
representation: ϵ = 1/. Moreover, there is a delay τ due to the
computation time that can also be precisely estimated given the rate of activation and
degradation of the species encoding the derivative: τ = 1/k.
We have shown that such results can be used in some cases to design error-correcting terms
and obtain excellent implementations of functions of input signals using an approximation of their derivative on the fly.
From a synthetic biology perspective, the derivative CRN may be very relevant in the context of biosensor design,
when the test is not about the presence of some molecular compounds <cit.>
but about their variation.
A derivative CRN is also needed to construct PID
controllers. The derivative control is known for damping the oscillations around the target of the
controller but delays are also known for producing such oscillations.
Being able to determine and quantify those delays and errors is thus important to optimize the design.
This device may also be used to approximate the derivative of an unknown
external input in the context of online cellular computing. Once again, delays may produce
nefarious artefacts that can easily be avoided when one is aware of the problem.
Furthermore, using the notion of SEPI to scan the BioModels database, we were able
to highlight a certain number of CRN models that contain the core differentiation CRN.
A large fraction of these models present oscillations.
We have shown on one such example, a circadian clock model, why it makes sense for an oscillator to
sense its own derivative, and to reproduce what a mathematician would produce in a more direct way for the
most basic oscillatory function: sine and cosine.
§.§ Acknowledgment
This work benefited from ANR-20-CE48-0002 δifference project grant.
§ APPENDIX: COMPUTATION OF INTEGRATION WITH A DELAY
To prove that the drift of the output is a direct consequence of the delay, we first
compute the input and the approximate derivative for our choice of input:
x(t) = 1+sin(t)
x'(t-τ) = cos(t-τ)
= cos(t) + τsin(t) + o(τ^2)
Then we can compute the output up to the first order:
y(t) = ∫ 2 x(s) x'(s-τ) ds
= ∫ 2 (1+sin(s)) cos(s) ds + ∫ 2 τ (sin(s)+sin^2(s)) ds
= (1+sin(t))^2 + 2 τ∫sin(s)+sin^2(s) ds
y(t) ≃(1+sin(t))^2 + τ t
Then, to correct the observed drift, we propose to introduce a delayed signal and use it in the
computation to produce the output species Y_+ and Y_-, with the following CRN:
, ,
+ , ∅,
+ D_+ D_+ ∅
+ D_- D_- ∅
+ D_+ + D_+ + Y_+ + D_- + D_- + Y_-
D_+ + D_- ∅
Y_+ + Y_- ∅
|
http://arxiv.org/abs/2307.06289v1 | 20230712163639 | Eigenvalue sensitivity from eigenstate geometry near and beyond arbitrary-order exceptional points | [
"Henning Schomerus"
] | quant-ph | [
"quant-ph",
"cond-mat.other",
"physics.optics"
] |
Department of Physics, Lancaster University, Lancaster, LA1 4YB, United Kingdom
Systems with an effective non-Hermitian Hamiltonian display an enhanced sensitivity to parametric and dynamic perturbations.
I derive a general and exact algebraic expression for this sensitivity that retains a simple asymptotic behaviour close to exceptional points (EPs) of any order, while capturing the role of additional states in the system. This reveals that such states can have a direct effect even if they are spectrally well separated.
The employed algebraic approach, which follows the eigenvectors-from-eigenvalues school of thought, also provides direct insights into the geometry of the states near an EP. In particular, I show that the condition number quantifying the sensitivity follows a striking equipartition principle in the quasi-degenerate subspace.
Eigenvalue sensitivity from eigenstate geometry
near and beyond arbitrary-order exceptional points
Henning Schomerus
August 12, 2023
==================================================================================================
§ INTRODUCTION
From quantum mechanics to classical physics, linear algebra has become such a central tool for the description of physical systems that it seems to hardly hold any new surprises.
Experience informs us about the utility of eigenvalues and eigenstates, whose calculation appears to be challenging only when one has to deal with very large systems.
An interesting mathematical complication arises in effective descriptions that result in non-orthogonal eigenstates, as is common in open quantum systems and classical systems with gain and loss, where the effective Hamiltonian becomes non-Hermitian <cit.>. A full characterization of each state then has to involve two variants of it, the right eigenstate |R_i⟩ and the left eigenstate ⟨ L_i| of any eigenvalue ω_i. As long as eigenvalues are nondegenerate, these two sets form a biorthogonal system that then can be used to diagonalize the system <cit.>.
This situation becomes accentuated at generic eigenvalue degeneracies, so-called exceptional points (EPs), which lead to a defective system that cannot be diagonalized anymore <cit.>. A physical signature of these EPs is a drastically altered sensitivity of the system to perturbations, both statically <cit.> as well as dynamically <cit.>. The non-orthogonality itself enhances this sensitivity already for spectrally isolated states, which can be quantified by a number of equivalent quantities, such as the phase rigidity
r_i=⟨ L_i|R_i⟩/√(⟨ L_i|L_i⟩⟨ R_i|R_i⟩),
which naturally appears in the characterization of the flux nonconservation <cit.>, or the Petermann factor
K_i=|r_i|^-2,
which naturally appears in the evaluation of quantum noise <cit.>, as well as in classical dynamical response theory <cit.>. Approaching an EP, the phase rigidity tends to zero, and the diverging Petermann factor signifies the abovementioned drastic change of the response to perturbations. As EPs furthermore lead to highly complex structures in parameter space <cit.>, a universal and detailed characterization is challenging, which singles them out as an active field of study.
On the mathematical side, the very same quantity r_i is know as the eigenvalue condition number, which features practically, e.g., as a measure of accuracy of numerical diagonalization algorithms <cit.>.
The concrete expression (<ref>) then reveals a deep relation between eigenvectors and spectral properties—even though the latter are, in principle, basis invariant.
In separate developments, even more concrete eigenvector-eigenvalue relations are receiving considerable mathematical attention. This follows the realization that such relations can provide deeply surprising and general insights, even into normal systems with orthogonal eigenvectors. While variants of such relations have appeared in many specific contexts, their comprehensive landscape was only fully appreciated very recently, as is beautifully surveyed in Ref. <cit.>. As this reference hints at various points, the underlying ideas also transfer to non-normal systems, which is the general mathematical feature that renders eigenvectors non-orthogonal.
In this work I combine both themes to provide an exact algebraic reformulation of the phase rigidity,
r_i=A_i^-1p'(ω_i),
which reveals its spectral content via the characteristic polynomial p(ω), while A_i is a regular geometric factor obtained from the adjugate matrix, as explicitly specified in Eqs. (<ref>) and (<ref>).
This reformulation allows us to extract valuable information about the properties of a system both close to as well as away from EPs of arbitrary order, including combinations where some of the eigenvalues are quasi-degenerate and others are not, but the additional eigenstates still have a finite overlap with the quasi-degenerate subspace.
This reveals that such additional states can have a direct effect even if they are spectrally well separated, and captures this precisely—realizing the beyond part in the title.
On the other hand, in the limit of a system where all states participate in the EP, we recover a recently derived compact result <cit.>—realizing the near part in the title.
Furthermore, the underlying algebraic features allow us to extract additional geometric information about the states near the EP, revealing a striking equipartition property of the contributions from the different directions in the quasi-degenerate subspace.
These features directly transfer to the response of physical systems to parametric and dynamic perturbations, including quantum and classical noise, and determine the function of these systems as sensors.
These results are presented along the following lines. Section <ref> describes the motivation and main result of this work, and previews its implications for asymptotics near EPs.
This discussion is further prepared in Section <ref>, which describes the asymptotic behaviour of the characteristic polynomial near an EP. Section <ref> proofs the general result based on the resolvent (Green's function) of the system, while Sec. <ref> illuminates this further by an explicit approach to the eigenstate geometry. Section <ref> then turns to the equipartition principle. Section <ref> provides an illustrative example about the interplay of multiple EPs, and Sec. <ref> provides the conclusions.
Throughout the main text, I will present the results in the physical context of effective Hamiltonians, where the eigenvalues determine resonance energies or frequencies, and
use the symbol ω_i to refer to a specific eigenvalue, while ω represents a continuous energy or frequency variable.
For convenience, Appendix <ref> recapitulates the results in a more generic mathematical notation.
§ GOAL AND MAIN RESULT
Consider a system close to an EP of order n. Truncated to the quasi-degenerate subspace, we can write
the effective Hamiltonian (or similar object of interest) as
H=ω_EP +N+ε H'
where ω_EP is the eigenvalue at the EP, and ε the parameter that controls the detuning, which we assume to be small.
The condition that ε=0 is the desired EP of order n implies that the residual term N obeys
N^n-1≠ 0, N^n=0.
Where convenient, we can resort
to Schur's unitary triangularization theorem to express this residual term in upper-triangular form, for which we then explicitly use the symbol T.
For this truncated setting, reference <cit.> derives the compact asymptotic expression
|r_i|∼|n(ω_i-ω_EP)^n-1/ξ|
for the phase rigidity
of the state with eigenvalue ω_i, right eigenvector |R_i⟩, and left eigenvector ⟨ L_i|, where
ξ=||N^n-1||_2
(we use ∼ to denote the leading-order asymptotic result for ε→ 0, including coefficients). Involving just the perturbed eigenvalue and a single characteristic number that can be evaluated directly at the EP, this relation beautifully accentuates the spectral significance of this quantity.
On the other hand, the quantity ξ is only well defined in the truncated space.
The main goal of this work is to put this result into a general and universal form so that it also applies in presence of additional states that do not participate in the EP. The final result does not require truncation of the Hamiltonian to the quasi-degenerate subspace, allowing us to apply it to general scenarios, and the derivation usefully illuminates general geometric relations of the involved states.
To achieve this goal, I proof the exact reformulation
r_i=p'(ω_i)/A_R_iL_i(ω_i-H)
of the phase rigidity,
where
A_v w(X)=∑_kl(-1)^klv̂^*_kp_lk(X)ŵ_l
determines a matrix element of the adjugate matrix obtained from the minors p_lk(X), the hat indicates normalization to unit vectors, and
p(ω)=det (ω-H)
is the characteristic polynomial, where the prime denotes the derivative with respect to ω.
Besides its exactness, Eq. (<ref>) has two key merits. The first is its simple asymptotic behaviour even close to an EP, where limits can be taken directly.
This then expresses the result (<ref>) as
|r_i|∼|p'(ω_i)|/|p_L_EPR_EP(N)|,
where p_L_EPR_EP(N) denotes the minor of N obtained by removing the eigenvector directions ⟨ L_EP| and |R_EP⟩ at the EP from the row and column, which by definition are orthogonal, ⟨ L_EP| R_EP⟩=0.
For this truncated Hamiltonian, the two results agree by the following relations: As I show in the next section,
p'(ω_i)∼ n (ω_i-ω_EP)^n-1.
Furthermore,
ξ= |p_L_EPR_EP(N)|
follows using the normal form where N takes an upper triangular form T, as obtained by a Schur decomposition, resulting in a basis change after which |R_EP⟩ is the first basis vector and ⟨ L_EP| is the last.
Then we can verify explicitly that
|p_n1(T)|=|∏_k=1^n-1 T_k,k+1|=||T^n-1||_2=ξ
takes exactly the same algebraic form as obtained from its definition above; and this then remains true in any orthonormal basis.
Equation (<ref>) asks us to take the ratio of
Eqs. (<ref>) and (<ref>), and this then indeed recovers (<ref>).
The second key merit of Eq. (<ref>) is its universality.
Its derivation does not rely on a truncation of the matrix to the quasi-degenerate space, which is required to define ξ in the original context. Therefore, we can replace N by the full Hamiltonian at the EP, including possible additional states with non-degenerate eigenvalues. This gives our general result for the asymptotic behaviour near such an EP,
|r_i|∼|p'(ω_i)|/|p_L_EPR_EP(ω_EP-H_EP)|,
where the minor can be calculated directly in an orthonormal basis where |R_EP⟩ and |L_EP⟩ are basis vectors, utilizing again that such a basis exists since at the EP ⟨ L_EP |R_EP⟩=0.
For example, consider a 3×3 Hamiltonian at an EP of order 2, written as
H=(
[ 0 a b; 0 0 c; 0 0 d; ]).
Then we correctly find the asymptotics
|r_i|∼|2ω_i/a||d|/√(|c|^2+|d|^2)
for each of the two eigenvalues that approach the EP,
where the last factor encodes the contribution from the overlap of the additional eigenstate with the quasi-degenerate subspace of the states near the EP.
The exact expression (<ref>) and its asymptotic form (<ref>) near EPs are the main technical results of this work.
In the following sections provide the derivations, and illuminate further consequences, such as the equipartition
principle in the quasi-degenerate subspace mentioned in the introduction, and an example that highlights the interplay of multiple EPs.
§ CHARACTERISTIC POLYNOMIAL
Let us briefly proof the result for p'(ω_i) in the asymptotics of the truncated Hamiltonian stated in Eq. (<ref>).
This concerns the characteristic polynomial
p(ω)=
det (ω-H)=∑_k=0^n ω^k a_n-k,
which determines the eigenvalues as roots of the secular equation
p(ω_i)=0.
To simplify the derivation, I set ω_EP=0, and reinstate it to a finite value at the end of this section.
We can use Newton's formulas <cit.> to express the coefficients a_k by the traces t_k=tr H^k, which here all are of order t_k=O(ϵ) (the EP condition implies tr N^k=0 for all k≥ 1.)
With our choice ω_EP=0, we then find that with the exception of a_0=1, |a_k|∼ |t_k/k| is of the same order, and by comparing orders we only need to consider
p(ω)∼ω^n +(-1)^n det(H).
Therefore, close to the EP the eigenvalues approximately spread out to take uniformly spaced positions on a circle, which recovers the well-know result of the eigenvalue cloud close to the EP <cit.>. In the present context, Eq. (<ref>)
allows us to immediately confirm p'(ω_i)∼ nω_i^n-1,
which turns into Eq. (<ref>) when we reinstate ω_EP to a finite value.
In the presence of additional states, we analogously infer
p'(ω_i)∼ n (ω_i-ω_EP)^n-1∏_k'(ω_k-ω_EP),
where the product is over the remaining eigenvalues not participating in the EP (using their algebraic multiplicities in case that they themselves participate in different EPs).
With these preparations, we can now turn to proof the main results (<ref>) and (<ref>) of this work.
§ RESOLVENT PROOF
We now derive the general results (<ref>) and (<ref>).
The following is inspired by physical contexts involving the Petermann factor K_i=1/|r_i|^2. This typically appears from the resolvent
G=(ω -H)^-1,
which we can analyze for ω→ω_i.
Using the spectral decomposition, we have
G_kl∼R_i,kL_i,l^*/⟨ L_i|R_i⟩1/ω-ω_i,
where ∼ now also implies the asymptotics ω→ω_i.
On the other hand, using Cramer's rule,
G_kl∼(-1)^k+kp_kl(ω_i-H)/p'(ω_i)1/ω-ω_i,
where p_kl(X) is the minor of X obtained by removing the indicated row-column pair, and the prime in p'(ω_i) denotes the derivative of p(ω) with respect to ω.
This gives the important exact relation (see remark 5 in Ref. <cit.>)
A_kl≡ (-1)^k+l p_lk(ω_i-H)=R_i,kL_i,l^*/⟨ L_i|R_i⟩p'(ω_i).
We now temporarily use the convention ⟨ R_i|R_i⟩=1=⟨ L_i|L_i⟩
to write
⟨ R_i|A| L_i⟩ = p'(ω_i)/⟨ L_i|R_i⟩.
Given our normalization, r_i=⟨ L_i|R_i⟩, so that we immediately obtain the exact result Eq. (<ref>), which asymptotically becomes Eq. (<ref>) for the truncated case, and Eq. (<ref>) for the general case.
Furthermore, if desired, these asymptotic expressions can be made even more specific by using Eq. (<ref>).
§ CONSTRUCTIVE PROOF
The derivation above is closely connected to considerations in the eigenvectors-from-eigenvalues school of thought <cit.>.
Following the essence of these considerations further, we express the components of the non-normalized
eigenvectors freely and exactly as
R_i,k =(-1)^k p_sk(ω_i-H),
L_i,k^* =(-1)^k p_kt(ω_i-H).
Away from an EP, we can choose s and t arbitrarily, while at an EP these expressions apply to all choices of s and t that give a finite result, of which there is at least one.
The proof of this representation is simple—for any square matrix with det M=0, M adj M=0, so that each column of adj M provides a solution to the homogenous system of equations M 𝐯=0. We here simply apply this to the matrix M=ω_i-H, and obtain an eigenvector as long as the result does not vanish.
For instance, using the triangular normal form T of the truncated system and setting for convenience ω_EP=0, we can then recover the result (<ref>) fully constructively by setting s=1, t=n, which gives
R_i,k ∼ω_i^n-k∏_l=1^k-1T_l,l+1,
L_i,k^* ∼ω_i^k-1∏_l=k^n-1T_l,l+1,
so that ⟨ R_i|R_i⟩∼⟨ L_i|L_i⟩∼ξ^2,
|⟨ L_i|R_i⟩|∼ n|ω_i^n-1|ξ,
|r_i|∼ |nω_i^n-1|ξ/ξ^2.
We note that the general result (<ref>) is significantly more compact than using Eq. (<ref>), but the constructive expressions can have merit in cases where they identify a suitable low-dimensional subspace.
§ EQUIPARTITION PRINCIPLE
The intermediate steps of the original derivation of Eq. (<ref>)
in Ref. <cit.> involve a number of interesting relations for the truncated system, most prominently
for the overlaps
⟨ L_i|R_i ⟩∼ n⟨ L_EP|R_i ⟩,
where the subscript EP denotes quantities at the EP.
We did not use this relation, but note that it can be recovered from Eq. (<ref>). For this, we simply take
⟨ L_EP |A| L_EP⟩ ∼⟨ L_EP|R_i⟩/⟨ L_i|R_i⟩ p'(ω_i)
= p_L_EPL_EP((ω_i-ω_EP)-N),
Now, because of the reduced matrix size in the minors we find for the diagonal terms
p_L_EPL_EP((ω_i-ω_EP)-N)∼ (ω_i-ω_EP)^n-1∼ p'(ω_i)/n. The relation (<ref>) for overlaps in the truncated system then follows directly.
Using our explicit expressions for the states, we can now concretize this relation to obtain geometric insights into the states near the EP. First, using the upper-triangular normal form T of the still truncated system,
p_kk((ω_i-ω_EP)-T)= (ω_i-ω_EP)^n-1 is identical for all k, and
Eq. (<ref>) gives
R_i,kL_i,k^*=1/n⟨ L_i|R_i⟩,
so that every individual term contributes exactly equally to this overlap.
Next, using the generality of expression (<ref>), we find that this still applies to the non-truncated case, as the further overlaps are all of a higher order.
Therefore, in the normal-form basis, each direction in the quasi-degenerate space provides exactly the same contribution to the phase rigidity, as illustrated in Fig. <ref>. This equipartition principle provides a striking geometric reinterpretation of the factor n appearing in the asymptotic result (<ref>), which still carries over to the general case, and severely restricts the eigenstate geometry near an EP.
§ INTERPLAY OF EXCEPTIONAL POINTS
To illustrate the utility of the general result (<ref>), let us
consider a system of four states, which pairwise participate in two EPs.
For concreteness, we utilize Schur's unitary triangularization theorem to study this system in its upper-triangular form,
H=(
[ 0 a_1 a_2 a_3; 0 0 b_1 b_2; 0 0 Ω c_1; 0 0 0 Ω; ]),
where one EP has a vanishing eigenvalue ω_1=ω_2≡ω_(1|2)=0, while the other has the eigenvalue ω_3=ω_4≡ω_(3|4)=Ω.
Because of the defectiveness, each of these has only one right-left eigenvector pair,
|R_(1|2)⟩=(
[ 1; 0; 0; 0; ]), ⟨ L_(1|2)|=(0,Ω,-b_1,(b_1c_1-b_2Ω)/Ω),
|R_(3|4)⟩=(
[ (a_1b_1+a_2Ω)/Ω; b_1; Ω; 0; ]), ⟨ L_(3|4)|=(0,0,0,1).
The corresponding adjugate matrices are
adj (-H) =(
[ 0 a_1Ω^2 -a_1 b_1 Ω a_1(b_1c_1- b_2Ω); 0 0 0 0; 0 0 0 0; 0 0 0 0; ])
,
adj (Ω-H) =(
[ 0 0 0 (a_1b_1+a_2Ω)c_1; 0 0 0 Ω b_1 c_1; 0 0 0 Ω^2 c_1; 0 0 0 0; ])
.
Therefore, while at the EP the phase rigidity r_i of the participating eigenvalues vanishes,
the coefficients
A_R_(1|2)L_(1|2) = a_1 Ω√(|Ω^2|+|b_1|^2+|b_1c_1- b_2Ω|^2/|Ω|^2),
A_R_(3|4) L_(3|4) =c_1Ω√(|Ω^2|+|b_1|^2+|a_1b_1+ a_2Ω|^2/|Ω|^2)
are both finite.
Adding a small perturbation ε H', the perturbative eigenvalues can be compactly expressed as
ω_1,2 ∼±√(εtr [H'A(-H)])/Ω≡±δ_(1|2) ,
ω_3,4 ∼Ω±√(εtr [H'A(Ω-H)])/Ω≡Ω±δ_(3|4).
Combining all these result in our asymptotic expression (<ref>),
we therefore obtain
|r_1,2|∼2|δ_(1|2)|/|a_1|1/√(1+|b_1|^2/|Ω^2|+|b_1c_1- b_2Ω|^2/|Ω|^4)
and
|r_3,4|∼2|δ_(3|4)|/|c_1|1/√(1+|b_1|^2/|Ω^2|+|a_1b_1+ a_2Ω|^2/|Ω|^4).
In both cases, this contains a factor as in Eq. (<ref>), representing the truncation of the system to the 2-dimensional quasi-degenerate subspace, and a contribution characterizing the overlap with the remaining states in the system.
Furthermore, the eigenvalue perturbations (<ref>) themselves pick up contributions from outside the truncated subspace, and both effects do not cancel.
Therefore, such truncations have to be treated with caution. In practice, some truncation will always have to be applied, but our general expression (<ref>) and the asymptotic form (<ref>) can then be utilized to these without any further restrictions.
In particular, within this formalism we do not encounter any further complications from the fact that the remaining states are themselves close to an EP. This illustrates the universal applicability of the derived algebraic expressions.
§ CONCLUSIONS
In summary, I provide an algebraic expression for the phase rigidity (or equivalently, the eigenvalue condition number and the Petermann factor), given by Eq. (<ref>). This expression has three key merits—it is exact, it is well behaved near exceptional points where it provides the asymptotic form Eq. (<ref>), and it does not require truncation of the system to a quasi-degenerate subspace.
The expression therefore enjoys a wide range of applicability, including to systems with additional overlapping states that possibly participate in their own exceptional points.
This then enables the careful and precise analysis of effectively non-Hermitian systems, which is essential for the evaluation of their intriguingly modified sensitivity to static and dynamic perturbations. In particular, the results demonstrate that common truncations to quasi-degenerate subspaces have to be treated with caution. These are only admissible if other states in the system do not overlap with these spaces, which can only be asserted using additional arguments, such as from causality constraints in passive systems <cit.>. This realization should motivate the design of systems where new functionality arises from the combination of several separate exceptional points.
A particular interesting setting to explore such new enhancement effects are active systems, in which inhomogenous gain significantly changes the mode-nonorthogonality, while causality constraints no longer apply.
I gratefully thank Jan Wiersig for informing me about his work <cit.>, as well as for insightful discussions, and Carlo Beenakker for sharing with me the insights of Ref. <cit.> during an inspirational visit to Leiden.
The author acknowledges funding by EPSRC via Program Grant No. EP/N031776/1.
§ MATHEMATICAL FORMULATION
The main text places the discussion into the physical context of non-Hermitian effective Hamiltonians.
In more general contexts and mathematical language, we are concerned with a nonnormal complex square matrix M of dimension m.
For a given eigenvalue λ_i, the right eigenvectors 𝐯_i and left eigenvectors 𝐰_i fulfill
M𝐯_i=λ_i 𝐯_i, 𝐰_i M=λ_i 𝐰_i,
and the condition number (<ref>) is given by
r_i=𝐰_i𝐯_i/|𝐰_i||𝐯_i|,
where the bars denote the ℓ^2 norm.
Our main result (<ref>) reexpresses this as
r_i=p'(λ_i)|𝐰_i||𝐯_i|/𝐯_i^†adj(λ_i-M)𝐰_i^†,
where p(λ)=(λ-M) is the characteristic polynomial, and the prime denotes the derivative with respect to the argument.
The key merit is that the second factor on the right hand side of Eq. (<ref>)
remains finite even at EPs, the generic degeneracies of nonnormal matrices where the algebraic multiplicity of λ exceeds the geometric multiplicity.
Therefore, the behaviour of r near these points becomes simplified to asymptotic considerations of the characteristic polynomial.
We only consider the generic case where λ_EP is an eigenvalue of algebraic multiplicity n and geometric multiplicity 1, so that we can only find a single right eigenvector and a single left eigenvector.
At such an EP, these vectors fulfill the self-orthogonality condition 𝐰_EP𝐯_EP=0, hence r_EP=0.
Near the EP, we generically have n almost-degenerate eigenvalues λ_i, 1≤ i≤ n, which each have a well-defined right-left eigenvector pair, as well as any additional eigenvalues not participating in the EP, which we denote as λ_k with n<k≤ m (we admit these to display separate EPs themselves, and then count them by their algebraic multiplicity).
For each of the quasi-degenerate eigenvalues, we then have the
compact asymptotic behaviour
p'(λ_i)∼ n (λ_i-λ_EP)^n-1∏_k=n+1^m (λ_k-λ_EP).
We note that the term n (λ_i-λ_EP)^n-1 only involves a single specific eigenvalue out of the n quasi-degenerate ones. This can be interpreted as a product rule,
∏_j=1;j≠ i^n (λ_i-λ_j)∼ n (λ_i-λ_EP)^n-1,
which holds because asymptotically the eigenvalues spread out equidistantly on a circle around λ_EP, as given e.g. in the classic work <cit.>.
More complicated scenarios arise if the geometric multiplicity is larger than 1, or when we perturb the system so that some eigenvalues remain degenerate. However, in all these cases, we can start the analysis from the expression (<ref>).
27
fxundefined [1]
ifx#1
fnum [1]
#1firstoftwo
secondoftwo
fx [1]
#1firstoftwo
secondoftwo
noop [0]secondoftwo
ref[1]@startlink#1@href
href[1]#1@endlink
anitize@url [0]`
12`$12`&12`#12`1̂2`_12`%12
startlink[1]
endlink[0]
rl [1]href #1
@bib@innerbibempty
[Moiseyev(2011)]moiseyev2011non
author author N. Moiseyev, @noop title Non-Hermitian
quantum mechanics (publisher Cambridge University Press, year 2011)NoStop
[Ashida et al.(2020)Ashida,
Gong, and Ueda]ashida2020
author author Y. Ashida, author Z. Gong, and author M. Ueda, title title Non-Hermitian physics, https://doi.org/10.1080/00018732.2021.1876991 journal
journal Adv. Phys. volume 69, pages 249 (year 2020)NoStop
[Kato(2013)]kato2013perturbation
author author T. Kato, @noop title Perturbation theory for
linear operators (publisher Springer Science & Business
Media, year 2013)NoStop
[Heiss(2000)]Heiss2000
author author W. D. Heiss, title title Repulsion of resonance
states and exceptional points, https://doi.org/10.1103/PhysRevE.61.929 journal journal Phys. Rev. E volume 61, pages 929 (year 2000)NoStop
[Berry(2004)]Berry2004
author author M. V. Berry, title title Physics of nonhermitian
degeneracies, https://doi.org/10.1023/B:CJOP.0000044002.05657.04
journal journal Czech. J. Physics volume 54, pages 1039 (year
2004)NoStop
[Heiss(2004)]Heiss2004
author author W. D. Heiss, title title Exceptional points of
non-Hermitian operators, https://doi.org/10.1088/0305-4470/37/6/034 journal journal J. Phys. A volume 37, pages
2455 (year 2004)NoStop
[Wiersig(2014)]Wie14
author author J. Wiersig, title title Enhancing the sensitivity
of frequency and energy splitting detection by using exceptional points:
Application to microcavity sensors for single-particle detection, https://doi.org/10.1103/PhysRevLett.112.203901 journal
journal Phys. Rev. Lett. volume 112, pages 203901 (year 2014)NoStop
[Wiersig(2020)]Wiersig20
author author J. Wiersig, title title Review of exceptional
point-based sensors, https://doi.org/10.1364/PRJ.396115 journal journal Photon. Res. volume
8, pages 1457 (year 2020)NoStop
[Schomerus(2020)]Sch20
author author H. Schomerus, title title Nonreciprocal response
theory of non-Hermitian mechanical metamaterials: Response phase transition
from the skin effect of zero modes, https://doi.org/10.1103/PhysRevResearch.2.013058 journal
journal Phys. Rev. Research volume
2, pages 013058 (year 2020)NoStop
[Hashemi et al.(2022)Hashemi, Busch, Christodoulides,
Ozdemir, and El-Ganainy]Hashemi2022
author author A. Hashemi, author K. Busch,
author D. N. Christodoulides,
author S. K. Ozdemir, and author R. El-Ganainy, title title Linear response theory of open systems with
exceptional points, https://doi.org/10.1038/s41467-022-30715-8
journal journal Nat. Commun. volume 13, pages 3281 (year
2022)NoStop
[Budich and Bergholtz(2020)]Budich2020
author author J. C. Budich and author E. J. Bergholtz, title title Non-Hermitian
topological sensors, https://doi.org/10.1103/PhysRevLett.125.180403 journal
journal Phys. Rev. Lett. volume 125, pages 180403 (year 2020)NoStop
[van Langen et al.(1997)van
Langen, Brouwer, and Beenakker]vanLangen1997
author author S. A. van Langen, author P. W. Brouwer, and author C. W. J. Beenakker, title title Fluctuating phase
rigidity for a quantum chaotic system with partially broken time-reversal
symmetry, https://doi.org/10.1103/PhysRevE.55.R1 journal journal Phys. Rev. E volume
55, pages R1 (year 1997)NoStop
[Petermann(1979)]Petermann79
author author K. Petermann, title title Calculated spontaneous
emission factor for double-heterostructure injection lasers with gain-induced
waveguiding, https://doi.org/10.1109/JQE.1979.1070064 journal journal IEEE J. Quantum Electron. volume 15, pages 566 (year
1979)NoStop
[Siegman(1989)]Siegman89
author author A. E. Siegman, title title Excess spontaneous
emission in non-Hermitian optical systems. I. Laser amplifiers, https://doi.org/10.1103/PhysRevA.39.1253 journal
journal Phys. Rev. A volume 39, pages 1253 (year 1989)NoStop
[Patra et al.(2000)Patra,
Schomerus, and Beenakker]Patra2000
author author M. Patra, author H. Schomerus, and author C. W. J. Beenakker, title title Quantum-limited linewidth of a chaotic
laser cavity, https://doi.org/10.1103/PhysRevA.61.023810
journal journal Phys. Rev. A volume 61, pages 023810 (year
2000)NoStop
[Yoo et al.(2011)Yoo,
Sim, and Schomerus]Yoo11
author author G. Yoo, author H.-S. Sim, and author H. Schomerus, title title Quantum noise and mode nonorthogonality in
non-Hermitian PT-symmetric optical resonators, https://doi.org/10.1103/PhysRevA.84.063833 journal journal Phys. Rev. A volume 84, pages 063833 (year 2011)NoStop
[Takata et al.(2021)Takata,
Nozaki, Kuramochi, Matsuo,
Takeda, Fujii, Kita,
Shinya, and Notomi]Takata21
author author K. Takata, author K. Nozaki,
author E. Kuramochi, author S. Matsuo, author
K. Takeda, author T. Fujii, author S. Kita, author A. Shinya, and author M. Notomi, title title Observing exceptional
point degeneracy of radiation with electrically pumped photonic crystal
coupled-nanocavity lasers, https://doi.org/10.1364/OPTICA.412596
journal journal Optica volume 8, pages 184 (year 2021)NoStop
[Doppler et al.(2016)Doppler, Mailybaev, Böhm, Kuhl, Girschik, Libisch, Milburn, Rabl, Moiseyev, and Rotter]Doppler2016
author author J. Doppler, author A. A. Mailybaev, author J. Böhm,
author U. Kuhl, author
A. Girschik, author
F. Libisch, author T. J. Milburn, author P. Rabl, author N. Moiseyev, and author S. Rotter, title title Dynamically encircling an
exceptional point for asymmetric mode switching, https://doi.org/10.1038/nature18605 journal journal Nature volume 537, pages
76 (year 2016)NoStop
[Bergholtz et al.(2021)Bergholtz, Budich, and Kunst]Ber19
author author E. J. Bergholtz, author J. C. Budich, and author F. K. Kunst, title title Exceptional topology of
non-Hermitian systems, https://doi.org/10.1103/RevModPhys.93.015005 journal
journal Rev. Mod. Phys. volume 93, pages 015005 (year 2021)NoStop
[Kawabata et al.(2019)Kawabata, Shiozaki, Ueda, and Sato]Kaw19
author author K. Kawabata, author K. Shiozaki,
author M. Ueda, and author M. Sato, title
title Symmetry and topology in non-Hermitian physics, https://doi.org/10.1103/PhysRevX.9.041015 journal journal Phys. Rev. X volume 9, pages
041015 (year 2019)NoStop
[Trefethen(2005)]trefethen2005spectra
author author L. N. Trefethen, title title Spectra and
pseudospectra: The behaviour of non-normal matrices and operators, in @noop booktitle The graduate student’s guide to
numerical analysis’ 98: Lecture notes from the VIII EPSRC Summer School in
Numerical Analysis (publisher Springer, year
2005) pp. pages 217–250NoStop
[Denton et al.(2022)Denton,
Parke, Tao, and Zhang]denton2022eigenvectors
author author P. Denton, author S. Parke,
author T. Tao, and author X. Zhang, title
title Eigenvectors from eigenvalues: A survey of a basic
identity in linear algebra, @noop journal journal Bull. Am. Math. Soc. volume 59, pages 31 (year 2022)NoStop
[Wiersig(2023)]Wiersig2023
author author J. Wiersig, https://doi.org/10.48550/arXiv.2304.00764 title Petermann factors and phase rigidities near exceptional points
(year 2023), note to appear in Phys. Rev. Res., https://arxiv.org/abs/2304.00764 arXiv:2304.00764 [quant-ph]
NoStop
[Haake et al.(1996)Haake,
Kus, Sommers, Schomerus, and Zyczkowski]Haake1996
author author F. Haake, author M. Kus, author H.-J. Sommers, author
H. Schomerus, and author
K. Zyczkowski, title
title Secular determinants of random unitary matrices, https://doi.org/10.1088/0305-4470/29/13/029 journal journal J. Phys. A volume 29, pages
3641 (year 1996)NoStop
[Wiersig(2019)]Wiersig2019
author author J. Wiersig, title title Nonorthogonality
constraints in open quantum and wave systems, https://doi.org/10.1103/PhysRevResearch.1.033182 journal
journal Phys. Rev. Res. volume 1, pages 033182 (year 2019)NoStop
[Schomerus(2022)]Schomerus2022
author author H. Schomerus, title title Fundamental constraints
on the observability of non-Hermitian effects in passive systems, https://doi.org/10.1103/PhysRevA.106.063509 journal journal Phys. Rev. A volume 106, pages 063509 (year 2022)NoStop
[Wiersig(2022)]Wiersig2022
author author J. Wiersig, title title Distance between
exceptional points and diabolic points and its implication for the response
strength of non-Hermitian systems, https://doi.org/10.1103/PhysRevResearch.4.033179 journal
journal Phys. Rev. Res. volume 4, pages 033179 (year 2022)NoStop
|
http://arxiv.org/abs/2307.04589v1 | 20230710143040 | Harnessing the Power of Swarm Satellite Networks with Wideband Distributed Beamforming | [
"Juan Carlos Merlano Duncan",
"Vu Nguyen Ha",
"Jevgenij Krivochiza",
"Rakesh Palisetty",
"Geoffrey Eappen",
"Juan Andres Vasquez",
"Wallace Alves Martins",
"Symeon Chatzinotas",
"Björn Ottersten"
] | eess.SP | [
"eess.SP"
] | |
http://arxiv.org/abs/2307.06044v1 | 20230712094536 | Generating arbitrary non-separable states with polarization and orbital angular momentum of light | [
"Sarika Mishra",
"Ali Anwar",
"R. P. Singh"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, India 380009.
Indian Institute of Technology, Gandhinagar, India 382355.
Fraunhofer Centre for Applied Photonics, 99 George Street, Glasgow, Scotland, United Kingdom.
[email protected]
Quantum Science and Technology Laboratory, Physical Research Laboratory, Ahmedabad, India 380009.
We demonstrate an experimental method to generate arbitrary non-separable states of light using polarization and orbital angular momentum (OAM) degrees of freedom. We observe the intensity distribution corresponding to OAM modes of the light beam by projecting the non-separable state into different polarization states. We further verify the presence of non-separability by measuring the degree of polarization and linear entropy. This classical non-separability can be easily transferred to the quantum domain using spontaneous parametric down-conversion for applications in quantum communication and quantum sensing.
Generating arbitrary non-separable states with polarization and orbital angular momentum of light
R. P. Singh
August 12, 2023
=================================================================================================
§ INTRODUCTION
In general, quantum entanglement <cit.> utilizes the property of quantum correlation between two particles of
similar nature or similar degree of freedom (DoF). However, entanglement can also be established between two different DoFs <cit.>, where one DoF of the system can not be measured without affecting the other. This shows the non-separable nature of entangled states, which means that one state of the system can not be separated from the other.Importantly, non-separability can be explained in both domains, classical and in quantum <cit.>. In quantum optics, entanglement between multiple photons with similar or different DoFs has been demonstrated, where the measurement of a DoF on one photon affects the states of others <cit.>. A local entanglement between multiple intrinsic DoFs of a single photon <cit.> is used to describe the non-separability in classical light that is controversially termed as “classical entanglement” <cit.> due to its violation of Bell-like inequality <cit.>. Typical examples are spin-orbit vector beams, non-separable in the spatial and polarization DoFs <cit.>. Some classes of vector beam such as Poincare beams and vector vortex beams <cit.> show the non-separable nature between polarization and orbital angular momentum (OAM) of the beam. These classical non-separable states (CNSS) of light are mathematically
analogous to quantum entangled states <cit.>. This existing parallelism
between classical and quantum non-separable states has generated
great interest for their applications in quantum communication <cit.>, polarization metrology <cit.>, quantum imaging <cit.>, quantum computing gates <cit.> etc.
Employing more than one degree of freedom of light allows one to fetch more information through a single photon. Since OAM is related to the spatial mode and forms an infinite dimensional Hilbert space <cit.>, this kind of multidimensionality offers a realization of d-dimensional qudits that increases the channel capacity in quantum communication <cit.>. Due to the high channel capacity, the exploitation of high dimensional quantum state using of spatial modes has received a lot of recent interest in quantum communication field <cit.>. In recent years, a classical non-separable light using polarization and OAM (vector vortex beams) also attracted a lot of interest due to its direct transformation into a quantum hybrid entangled state <cit.> which can be further utilized in quantum cryptography protocols <cit.>.
Generation and characterization of classical non-separable states of polarization and OAM have already been studied <cit.>. Various interferometric methods are used to generate vector vortex beams representing non-separable states of light with polarization and OAM, such as Michelson <cit.>, folded Mach-Zehnder <cit.>, beam displaced <cit.> and Saganac <cit.> interferometers. Interferometric methods generally have inherent mode stability issues and require a fine-tuned alignment. Moreover, Sagnac interferometric method can only generate a non-separable state with a fixed OAM value with a spiral phase plate chosen. To overcome the limitations of interferometric methods, some all-in-line setups with q-plates <cit.> and spatial light modulators <cit.> have been implemented. In this article, we propose an experimental method to generate an arbitrary non-separable state using spiral phase plate (SPP) <cit.> and spatial light modulator (SLM), <cit.> which does not require any interferometric setup and fine-tuning of alignment. The proposed method is similar to Ref. <cit.> where another SLM is used instead of the SPP. However, the scope of application of such non-separable states in the quantum domain is not explored in these works. Here, we quantify the non-separability of the states generated by this method, which clearly reflects the use of such a method in quantum experiments related to OAM <cit.>. To quantify the non-separability, we project the output state to different polarization and record the corresponding intensity distributions. Further, we verified the non-separability by measuring the degree of polarization and linear entropy <cit.>.This article is organized as follows: In section <ref>, we develop the theory of our proposed idea. We first discuss the old method to generate classical non-separable state, and then we compare it with our proposed setup for arbitrary classical non-separable states of light using polarization and OAM. In section <ref> and <ref>, we explain all the details of our experiment and results demonstrating the creation and detection of these states. Finally, in section <ref>, we conclude the article indicating its potential applications.
§ THEORY
Generally, a classical non-separable state in polarization and OAM DoF is generated using a polarizing Sagnac interferometer <cit.> as shown in Figure <ref>.
In this experimental setup, a diagonally polarized Gaussian beam is fed into the Sagnac interferometer where a polarizing beam splitter (PBS) splits the horizontal and vertical polarization in two different directions, clockwise and counter-clockwise. Both the orthogonally polarized Gaussian beams counter propagate. An SPP of order m converts the Gaussian beam to a vortex beam of order m and -m for horizontal and vertical polarized light respectively. Both the beams combine at the same PBS to form the non-separable state,
|Ψ⟩_ns=1/√(2)( |H⟩|m⟩+|V⟩|-m⟩).
At the output of the Sagnac interferometer, a quarter-wave plate (QWP), a half-wave plate (HWP), and a PBS can be used for the measurement of the non-separable state. Projection to a particular polarization state gives the information about spatial mode (OAM) corresponding to that polarization.The main drawback of the Sagnac interferometry based non-separable state is that it produces only a particular type of state as mentioned in Eqn <ref>, and it also requires a perfect alignment. To overcome this problem and to generate any arbitrary classical non-separable state, we propose an easier method.
In this method, if the input beam is a diagonally polarized Gaussian beam, then by using SPP and SLM in a row as shown in Figure <ref> one can generate different non-separable states by varying the OAM values on the SLM. The main advantage of parallel aligned LC-SLM is that the orthogonal polarization state is unaffected by the SLM <cit.>. In our case, the liquid crystal (LC) molecules inside the SLM are aligned in a horizontal direction, so it does not affect the vertically polarized light, which means that the spatial mode of vertically polarized beam will be unchanged after passing through the SLM. Since SLM is not adding any OAM value to vertical polarization, therefore in the final state both horizontal and vertical polarized beams will have different OAM values. The final state can be written as,
|Ψ⟩_ns=1/√(2)( |H⟩|m_SPP+m_SLM⟩+ |V⟩|m_SPP⟩).
By changing the value m_SLM via SLM one can generate the non-separable state of any order. This method is easy to use as compared to the Sagnac interferometry which requires a perfect alignment. Although the schematic shown in Figure <ref> generates a class of NS states with a fixed OAM associated with the vertically polarized component of the beam, An SPP of different order can be used to change the OAM values associated with V polarized component. To generate a completely arbitrary NS state, the SPP in Figure <ref> should be replaced with another SLM along with one HWP rotated at 45^∘. In this case, if the input beam is a diagonally polarized Gaussian beam then the combination of SLM and HWP (at 45^∘) can easily add any OAM value to V polarized component only, and the other SLM which is already shown in figure <ref> will only change OAM of H polarized component. Hence, an arbitrary non-separable state can be generated using such a configuration.
§ EXPERIMENTAL SETUP
The experimental setup for the generation of a classical non-separable state is shown in Figure <ref>.
Toptica TopMode (405 nm) laser is used to perform the experiment. The horizontally polarized light with Gaussian mode (|H⟩|0⟩) is converted into the diagonal polarization (|D⟩|0⟩) after passing through the HWP which is oriented at 22.5^∘ with respect to the fast axis. An SPP of order m is kept in the path of the light beam. After passing through the SPP, the state of the light will be,
|Ψ⟩ = |D⟩|m_SPP⟩
= |H⟩|m_SPP⟩+|V⟩|m_SPP⟩,
The output of the SPP is then imaged onto the SLM. We imprinted a computer-generated hologram into the SLM to generate the vortex beam of order m_SLM. Since the liquid crystal molecules inside the SLM are aligned in a horizontal direction, it does not affect the vertical polarization. The SLM will only change the spatial profile of horizontally polarized light. The order of the spatial mode can be easily controlled by the SLM through a computer. The final state of the light beam after reflecting back from the SLM is written as,
|Ψ⟩=|H⟩|m_SPP+m_SLM⟩+e^iϕ|V⟩|m_SPP⟩.
where ϕ is the relative phase delay between H and V polarized lights due to the birefringent walk-off inside the SLM.
§ RESULTS AND DISCUSSION
In this experiment, we used SPP of order |m_SPP|=2. We generated various non-separable states by changing the order of LG mode, m_SLM, with the help of SLM. For the detection of the state, a combination of QWP, HWP, and PBS is used. The combination of HWP and PBS acts as a polarizer. We measure the spatial distribution of light beam by projecting it to different polarizations such as linear polarization (H, V, D, and A) and circular polarization (R and L). The intensity distribution for various non-separable states is given in Figures <ref> and <ref>. The theoretical simulated intensity distribution for various non-separable states are shown in Figures <ref> and <ref>. Our experimental results shown in Figures <ref> and <ref> are in good agreement with the theoretical ones shown in Figures <ref> and <ref> respectively. If we compare the experimental results with the theoretical simulation, it is very evident that SLM does not affect the spatial profile of vertical polarization. However, it only adds a relative phase delay of π/2 between H and V polarization due to the birefringent walkoff inside the SLM.
We also measured the Stokes parameters to calculate the degree of polarization. It is defined as,
S_0 =I_H+I_V,S_1=I_D-I_A
S_2 =I_L-I_R , S_3=I_H-I_V,
where I_x is the intensity of x-polarized light beam. The degree of polarization (DOP) can be written in terms of Stokes parameter,
DOP=√(S_1^2+S_2^2+S_3^2),
The DOP ranges from 0, corresponding to the completely mixed polarized state (unpolarized light), to 1 for the completely polarized state. To characterize the non-separability of the state, linear entropy (S_L) can also be calculated. It can be represented in terms of DOP <cit.>,
S_L= 1-DOP^2.
The linear entropy measures the amount of mixedness present in the state. For a maximally entangled/non-separable system, the individual subsystem will always be in a mixed state. The maximum amount of mixedness present in the subsystem leads to the maximum non-separability of the system. Thus, one can measure the degree of non-separability by measuring the linear entropy S_L of the subsystem. The linear entropy S_L can range from 0, corresponding to the product state, and 1, corresponding to the maximally non-separable state.The values of DOP and S_L are given in Table <ref> for separable and non-separable states.
Without SPP and m_SLM=0 (Eqn. <ref>), the light beam is just a superposition of two orthogonal polarizations with Gaussian mode, which results in a completely polarized state (separable state). That is why, DOP is maximum (0.94) without SPP and the linear entropy is S_L=0.12, which represents the product state of polarization and Gaussian mode.
When we introduce the SPP and m_SLM≠0, the H and V polarized components of light will correspond to different LG modes. H polarized light is associated with LG mode of order |m_SPP+m_SLM⟩ whereas V polarized is associated with LG mode of order |m_SPP⟩ (Eqn. <ref>). In this case, the state is completely unpolarized (or mixed polarized). Therefore the DOP is minimum (0.05) and the linear entropy is maximum (0.99) which shows the non-separability of the state in polarization and LG mode. We calculated the DOP and linear entropy for each state shown in figure <ref> and<ref>. Since for each state we recorded more or less same DOP and linear entropy, therefore for simplicity, we shown the value of DOP and linear entropy of state 1/√(2)(|H⟩|-2⟩+e^iϕ|V⟩|2⟩) in Table <ref>. We also recorded the spatial distribution of the beam by projecting it to the different polarization states (Figure <ref> and <ref> ). Due to the non-separability of the state, OAM state of the beam varies with projection to different polarization states. Since we used SPP of order m=2, that's why the OAM associated with V polarized light is fixed. In order to achieve completely arbitrary NS state, one can either use SPP of higher order or replace the SPP with another SLM along with one HWP rotated at 45^∘ as explained in section <ref>.
§ CONCLUSION
In conclusion, we found a new method to generate classical non-separable states using polarization and OAM degrees of freedom. The new setup is simple as compared to other interferometry methods, and it does not require any fine-tuning of alignment. We verified the presence of non-separablity by measuring the degree of polarization and linear entropy. The results are also in good agreement with the theory. Since we could simultaneously encode polarization and OAM into a single photon, it will increase the information capacity per photon. Such a higher information capacity leads to higher secret key rates in quantum key distribution protocols. In this context, these hybrid states can be used as a powerful resource in many classical and quantum applications, such as in quantum communication, metrology, and quantum key distribution using quantum hybrid entangled states in particular.
54
fxundefined [1]
ifx#1
fnum [1]
#1firstoftwo
secondoftwo
fx [1]
#1firstoftwo
secondoftwo
noop [0]secondoftwo
ref[1]@startlink#1@href
href[1]#1@endlink
anitize@url [0]`
12`$12`&12`#12`1̂2`_12`%12
startlink[1]
endlink[0]
rl [1]href #1
@bib@innerbibempty
[Horodecki et al.(2009)Horodecki, Horodecki, Horodecki, and Horodecki]horodecki2009quantum
author author R. Horodecki, author P. Horodecki, author M. Horodecki, and author K. Horodecki, title title Quantum entanglement, @noop journal journal Reviews of modern
physics volume 81, pages 865
(year 2009)NoStop
[Terhal(2002)]terhal2002detecting
author author B. M. Terhal, title title Detecting quantum
entanglement, @noop journal journal
Theoretical Computer Science volume 287, pages 313 (year 2002)NoStop
[Neves et al.(2009)Neves,
Lima, Delgado, and Saavedra]neves2009hybrid
author author L. Neves, author G. Lima,
author A. Delgado, and author C. Saavedra, title
title Hybrid photonic entanglement: Realization,
characterization, and applications, @noop journal
journal Physical Review A volume 80, pages 042322 (year 2009)NoStop
[Nagali and Sciarrino(2010)]nagali2010generation
author author E. Nagali and author F. Sciarrino, title title Generation of hybrid
polarization-orbital angular momentum entangled states, @noop
journal journal Optics express volume 18, pages 18243 (year
2010)NoStop
[Shen and Rosales-Guzmán(2022)]shen2022nonseparable
author author Y. Shen and author C. Rosales-Guzmán, title title Nonseparable
states of light: from quantum to classical, @noop journal journal Laser & Photonics Reviews volume 16, pages 2100533 (year
2022)NoStop
[Kwiat et al.(1995)Kwiat,
Mattle, Weinfurter, Zeilinger, Sergienko, and Shih]kwiat1995new
author author P. G. Kwiat, author K. Mattle,
author H. Weinfurter, author A. Zeilinger, author
A. V. Sergienko, and author
Y. Shih, title title New high-intensity source of polarization-entangled photon pairs, @noop journal journal Physical Review
Letters volume 75, pages 4337
(year 1995)NoStop
[Dada et al.(2011)Dada,
Leach, Buller, Padgett, and Andersson]dada2011experimental
author author A. C. Dada, author J. Leach,
author G. S. Buller, author M. J. Padgett, and author E. Andersson, title
title Experimental high-dimensional two-photon entanglement and
violations of generalized bell inequalities, @noop journal journal Nature Physics volume
7, pages 677 (year 2011)NoStop
[Hamel et al.(2014)Hamel,
Shalm, Hübel, Miller,
Marsili, Verma, Mirin,
Nam, Resch, and Jennewein]hamel2014direct
author author D. R. Hamel, author L. K. Shalm,
author H. Hübel, author A. J. Miller, author
F. Marsili, author V. B. Verma, author R. P. Mirin, author S. W. Nam, author K. J. Resch, and author T. Jennewein, title title Direct generation of three-photon
polarization entanglement, @noop journal journal Nature Photonics volume 8, pages 801 (year 2014)NoStop
[Chithrabhanu et al.(2016)Chithrabhanu, Reddy, Lal, Anwar, Aadhi, and Singh]chithrabhanu2016pancharatnam
author author P. Chithrabhanu, author S. G. Reddy, author N. Lal, author A. Anwar, author
A. Aadhi, and author
R. Singh, title title Pancharatnam phase in non-separable states of light, @noop
journal journal JOSA B volume 33, pages 2093 (year
2016)NoStop
[Spreeuw(1998)]spreeuw1998classical
author author R. J. Spreeuw, title title A classical analogy of
entanglement, @noop journal journal
Foundations of physics volume 28, pages 361 (year 1998)NoStop
[Ghose and Mukherjee(2014)]ghose2014entanglement
author author P. Ghose and author A. Mukherjee, title title Entanglement in
classical optics, @noop journal journal
Reviews in Theoretical Science volume 2, pages 274 (year 2014)NoStop
[Karimi and Boyd(2015)]karimi2015classical
author author E. Karimi and author R. W. Boyd, title title Classical entanglement?, @noop journal journal Science volume 350, pages 1172 (year
2015)NoStop
[Paneru et al.(2020)Paneru,
Cohen, Fickler, Boyd, and Karimi]paneru2020entanglement
author author D. Paneru, author E. Cohen,
author R. Fickler, author R. W. Boyd, and author E. Karimi, title
title Entanglement: quantum or classical?, @noop
journal journal Reports on Progress in Physics volume 83, pages 064001 (year 2020)NoStop
[Luis(2009)]luis2009coherence
author author A. Luis, title title Coherence, polarization, and
entanglement for classical light fields, @noop journal journal Optics Communications volume 282, pages 3665 (year
2009)NoStop
[Borges et al.(2010)Borges,
Hor-Meyll, Huguenin, and Khoury]borges2010bell
author author C. Borges, author M. Hor-Meyll,
author J. Huguenin, and author A. Khoury, title title Bell-like inequality for the spin-orbit
separability of a laser beam, @noop journal
journal Physical Review A volume 82, pages 033833 (year 2010)NoStop
[Pires et al.(2010)Pires,
Florijn, and Van Exter]pires2010measurement
author author H. D. L. Pires, author H. Florijn, and author M. Van Exter, title title Measurement of the
spiral spectrum of entangled two-photon states, @noop journal journal Physical review letters volume 104, pages 020505 (year
2010)NoStop
[Pereira et al.(2014)Pereira, Khoury, and Dechoum]pereira2014quantum
author author L. Pereira, author A. Khoury, and author K. Dechoum, title title Quantum and classical separability of
spin-orbit laser modes, @noop journal journal Physical Review A volume 90, pages 053842 (year 2014)NoStop
[Perumangatt et al.(2015)Perumangatt, Salla, Anwar, Aadhi, Prabhakar, and Singh]perumangatt2015scattering
author author C. Perumangatt, author G. R. Salla, author A. Anwar,
author A. Aadhi, author S. Prabhakar, and author R. Singh, title
title Scattering of non-separable states of light, @noop
journal journal Optics Communications volume 355, pages 301 (year
2015)NoStop
[Salla et al.(2015)Salla,
Perumangattu, Prabhakar, Anwar, and Singh]salla2015recovering
author author G. R. Salla, author C. Perumangattu,
author S. Prabhakar, author A. Anwar, and author
R. P. Singh, title title Recovering the vorticity of a light beam after scattering, @noop journal journal Applied Physics
Letters volume 107, pages 021104
(year 2015)NoStop
[Galvez et al.(2012)Galvez,
Khadka, Schubert, and Nomoto]galvez2012poincare
author author E. J. Galvez, author S. Khadka,
author W. H. Schubert, and author S. Nomoto, title title Poincaré-beam patterns produced by
nonseparable superpositions of laguerre–gauss and polarization modes of
light, @noop journal journal Applied
optics volume 51, pages 2925
(year 2012)NoStop
[Beckley et al.(2010)Beckley, Brown, and Alonso]beckley2010full
author author A. M. Beckley, author T. G. Brown, and author M. A. Alonso, title title Full poincaré beams, @noop
journal journal Optics express volume 18, pages 10777 (year
2010)NoStop
[Aiello et al.(2015)Aiello,
Töppel, Marquardt, Giacobino, and Leuchs]aiello2015quantum
author author A. Aiello, author F. Töppel,
author C. Marquardt, author E. Giacobino, and author G. Leuchs, title
title Quantum- like nonseparable structures in optical beams, @noop journal journal New Journal of
Physics volume 17, pages 043024
(year 2015)NoStop
[Ndagano et al.(2017)Ndagano, Perez-Garcia, Roux, McLaren, Rosales-Guzman, Zhang,
Mouane, Hernandez-Aranda, Konrad, and Forbes]ndagano2017characterizing
author author B. Ndagano, author B. Perez-Garcia, author F. S. Roux, author M. McLaren,
author C. Rosales-Guzman,
author Y. Zhang, author O. Mouane, author
R. I. Hernandez-Aranda, author
T. Konrad, and author
|
http://arxiv.org/abs/2307.03903v2 | 20230708050310 | Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | [
"Huafeng Li",
"Le Xu",
"Yafei Zhang",
"Dapeng Tao",
"Zhengtao Yu"
] | cs.CV | [
"cs.CV"
] |
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification
1]Huafeng [email protected]
These authors contributed equally to this work.
1]Le [email protected]
These authors contributed equally to this work.
[1]Yafei [email protected]
2]Dapeng [email protected]
1]Zhengtao [email protected]
[1]Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, Yunnan, China
[2]School of Information Science and Engineering, Yunnan University, Kunming, 650091, Yunnan, China
In visible-infrared video person re-identification (re-ID), extracting features that are not affected by complex scene changes (such as modality, camera view, pedestrian pose, and background), together with mining and utilizing motion information, is the key to solving cross-modal pedestrian identity matching. To this end, the paper proposes a new visible-infrared video person re-ID method from a novel perspective, i.e., adversarial self-attack defense and spatial-temporal relation mining. In this work, the changes of view, posture, background and modal discrepancy are considered as the main factors that cause the perturbations of person identity features. Such interference information contained in the training samples is used as an adversarial perturbation. It performs adversarial attacks on the re-ID model during training to make the model more robust to these unfavorable factors. The attack from the adversarial perturbation is introduced by activating the interference information contained in the input samples without generating adversarial samples, and it can thus be called adversarial self-attack. This design allows adversarial attack and defense to be integrated into one framework. This paper further proposes a spatial-temporal information-guided feature representation network to use the information in video sequences. The network can not only extract the information contained in the video-frame sequences but also use the relation of the local information in space to guide the network to extract more robust features. The proposed method exhibits compelling performance on large-scale cross-modality video datasets. The source code of the proposed method will be released at <https://github.com/lhf12278/xxx>.
§ INTRODUCTION
Person re-identification (re-ID) is a technology used to determine whether the person images or sequences captured by non-overlapping cameras belong to the same identity <cit.>. Most of the related works, such as domain generalization <cit.> and domain adaptation <cit.>, focus on person re-ID under normal illumination (visible modality). In recent years, cross-modality person re-ID <cit.> based on visible and infrared images has attracted more and more attention since it can meet the requirements of pedestrian image matching under poor illumination at night.
The main difficulty faced by visible-infrared person re-ID is the modality discrepancy between pedestrian images of two modalities. Most methods <cit.> attempt to study how the features can be learned without being affected by the modality of images. These methods can be roughly divided into methods based on adversarial learning <cit.>, methods guided by intermediate modality <cit.>, and methods embedded with high-level semantic information <cit.>. The adversarial learning-based methods achieve modal confusion by adversarial training between the encoder and the discriminator, thereby reducing the difference between different modalities. The intermediate modality-based methods use the information of intermediate modality to guide or strengthen the role of modality invariant features in identity matching. The semantic information-based methods improve the cross-modality capability of features by introducing high-level semantic information into visual features.
The above methods only consider modal discrepancy's impact on person identity matching. However, some aspects are ignored, such as the diversity of pedestrian appearance features caused by view discrepancies, diverse postures of person, and background changes, etc. Moreover, they are designed for the matching between person images and do not consider the information contained in the video sequences. Therefore, if the existing cross-modality person re-ID methods are directly applied to the cross-modality video person re-ID, the retrieval performance may not be optimal. Although video-based person re-ID methods <cit.> are widely applied, they usually do not consider the relationship between different parts of the pedestrian's body under the motion state, which limits the further improvement of the recognition performance. Furthermore, most of the existing video person re-ID methods focus on the identity matching between video sequences under normal lighting conditions, ignoring the impact of the modal discrepancy between infrared and visible person images. Although Lin et al. <cit.> proposed a video-based cross-modality person re-ID method and created the first dataset for this task recently, there are still few studies involving the solution to this problem.
We propose a cross-modality video person re-ID method from a novel view—adversarial self-attack and defense. In the proposed method, we regard all unfavorable factors contaminating the model performance as adversarial perturbations. Factors such as the change of camera view, the existence of occluders, the difference in posture, and the gap between modalities lead to a certain diversity of appearance features of the same identity, as illustrated in Fig. <ref>.
We regard all the differences in appearance features of the same identity caused by all factors as information perturbations. The robustness of the re-ID model is strengthened by improving its defense ability against perturbations. Technically, the proposed method is mainly composed of the adversarial self-attack module (ASAM), adversarial defense module (ADM) and feature representation module under spatial-temporal information guidance (FRM-STIG). The ASAM is mainly used to activate the adversarial perturbations and implement the attack on the ADM. More specifically, the ASAM is used to guide the single-modality feature extraction network to activate the perturbations in the input training samples. With the effect of ASAM, the robustness of the ADM is enhanced in adversarial training of the ADM. The proposed method does not need to synthesize adversarial samples to train the model but activates the adversarial perturbations of the training samples to realize the adversarial attack.
In FRM-STIG, considering the discrimination of the spatial relationship of different body parts under the motion state, we propose to embed the temporal information contained in the video frames into spatial relations of different body parts. To effectively utilize spatial relation to improve the discrimination of features, we propose a spatial-temporal relation-guided feature representation method. More attention is paid to the features related to motion information and spatial relation. Thanks to this design, both the spatial relation of different body parts during motion and the temporal information can be embedded into the pedestrian features, which helps to improve the accuracy and robustness of person video sequence description. Finally, the features with motion information and the features generated by the guidance of spatial-temporal relations are combined as the final features to describe pedestrians.
The main contributions of this paper are as follows:
* A solution is proposed to solve the impact of modal discrepancy, posture changes, complex background and other factors on person identity matching. The complex perturbations carried by the multi-modality images are treated as the adversarial attack information of the re-ID model. At the same time, by improving the defense ability of the re-ID model against these perturbations, the robustness of the model to complex factors can be improved accordingly.
* An adversarial self-attack strategy is proposed to activate the perturbation information contained in the input samples without generating adversarial samples. This design allows adversarial attack and defense to be integrated into one framework.
* A spatial relation mining mechanism is proposed for different parts of a person based on temporal information embedding. A feature highlight mechanism guided by spatial-temporal relations is designed to construct features not affected by modality.
* The validity of the proposed method is verified on the challenging large-scale visible-infrared video re-ID dataset—VCM and the state-of-the-art performance is obtained under two commonly used evaluation metrics.
The rest of this paper is organized as follows. Section <ref> discusses the related state-of-the-art works. Section <ref> elaborates the proposed method in detail. The experimental results are explained in Section <ref> and Section <ref> summarizes the content of this paper and draws conclusions.
§ RELATED WORK
§.§ Visible-Infrared Person Re-ID
To solve the modal discrepancy between visible and infrared person images, Wang et al. <cit.> proposed an adversarial generation method to learn the modality-invariant feature. The encoder can extract the features not affected by the modality information via playing a min-max game between the encoder and the modality discriminator. Given the significance of adversarial learning, a series of practical methods have been conducted <cit.>. However, in those methods, the discriminator is used to identify the modal differences between visible and infrared person images, which may cause the loss of information related to personal identity and is not conducive to matching person identity. Another popular way to learn modality-invariant features is to use the intermediate information between two modalities as guidance <cit.>. Specifically, Zhong et al. <cit.> proposed the gray-scale image of a person as an intermediate modality to assist in extracting modality-invariant features.
Considering the modality invariance of edge details of a person image, Gao et al. <cit.> enhanced the cross-modal matching ability of features by highlighting the role of edge details in features. Basaran et al. <cit.> proposed to extract modality-invariant identity features by introducing the anaglyph. However, it ignores the high-level semantic information between different body parts or critical points of a person. Such information is usually modal invariant and often used in cross-modality person re-ID. Miao et al. <cit.> proposed a cross-modality person re-ID method based on high-order relationship mining of person key points. Chen et al.<cit.> proposed a modality-invariant feature extraction method by mining different part relationships. Those methods are devoted to extracting features not affected by modality, where the challenges to person identity matching caused by the diversity of person appearance features are not considered. The proposed method is based on adversarial self-attack and defense such that the changes in personal appearance features caused by all factors are deemed adversarial perturbations. The shortcomings of the above methods can be alleviated by improving the model's robustness against such perturbations.
§.§ Video Person Re-ID
Videos usually contain a large amount of motion information, which carries pedestrian identity clues. The video person re-ID has received more and more attention <cit.>. Wu et al. <cit.> proposed 3D ConvNet as a new layer of feature extraction network to extract a person's appearance and motion information from the video sequences simultaneously. Chen et al. <cit.> proposed spatial-temporal awareness to pay attention to the significant parts of a person in both temporal and spatial domains simultaneously and highlight the effect of these parts in identity matching. Li et al. <cit.> proposed a global-local temporal representation (GLTR) method for video person re-ID. This method aggregates the short-term temporal cues and long-term relations as the final GLTR. Liu et al. <cit.> proposed a co-saliency spatial-temporal interaction network (CSTNet) for video person re-ID. The method learned discriminative feature representations by capturing the salient foreground regions in the video and exploring the spatial-temporal long-range context interdependency from such regions. Yang et al. <cit.> designed a two-stream dynamic pyramid representation model to solve the problems of mining spatial-temporal information, suppressing redundant information and improving data quality for video person re-ID. The method used dynamic pyramid dilated convolution and pyramid attention pooling to acquire the person's motion information and static appearance. Eom et al. <cit.> designed a spatial and temporal memory network to address the challenge of person occlusion by using prior knowledge that spatial distractors always appear in a particular location. In contrast, temporal distractors usually appear in the first few frames. Liu et al. <cit.> adopted a bi-directional (forward and backward) mechanism to extract the temporal information in the video sequence.
Although the above methods effectively utilize the motion information for person re-ID, they ignore the potential structural relationship of a person's body parts in space, limiting the further improvement of feature discrimination. Yan et al. <cit.> proposed multi-granular hypergraphs to mine the temporal information of different granularity regions. They modeled spatial-temporal dependencies in terms of multiple granularities, which effectively improved the performance of video person re-ID. Liu et al. <cit.> proposed a spatial-temporal correlation multi-scale topology learning framework to realize video person re-ID. The method captured hierarchical spatial-temporal dependencies and pedestrian structure information through 3D and cross-scale graph convolution. To solve the problem that 3D convolution is easily affected by the misalignment of person features in mining temporal information, Chen et al. <cit.> proposed a human-oriented graph method. Although the above methods based on graph convolution can mine the spatial relationship between nodes, they cannot extract long-term spatial cues. Since the transformer is more suitable for extracting the long-term relationship of features, Zhang et al. <cit.> proposed a spatial-temporal transformer for video person re-ID. The method is mainly composed of a spatial transformer and a temporal transformer. The former is used to extract the spatial features of person images, and the latter is used to extract the features of a person in video sequences. Although these methods consider the static spatial structure relation between different person regions, they ignore the discrimination of different person body parts when moving. The proposed method embeds the temporal information into the spatial structure information mining, resulting in a spatial relation mining scheme for different body parts of pedestrians in the state of motion.
§.§ Adversarial Attack and Defense in Person Re-ID
Adversarial attacks are designed to deprive a deep neural network of its original performance by adding small-magnitude perturbations to original samples. The concept was first proposed by Szegedy et al. <cit.>. Wang et al. <cit.> developed a multi-stage network to perform black-box attacks, given the importance of cross-dataset transferability in Re-ID. It pyramids the features at different levels to extract the general and transferable features for the adversarial perturbations. To explore whether the re-ID model based on CNN is vulnerable to the attack of adversarial samples, Wang et al. <cit.> proposed an attack method called advPattern to generate adversarial patterns on clothes. Those methods focus on generating adversarial samples to invalidate the re-ID model without considering how to defend against attacks from adversarial samples.
One of the easiest ways to improve the re-ID model's robustness to adversarial examples is to incorporate them into training. In addition, some researchers consider identifying and excluding the adversarial samples from the training dataset via the detection algorithm, which can also avoid the attack from the adversarial samples on the model. Specifically, Wang et al. <cit.> proposed a multi-expert adversarial attack detection method to detect adversarial attack by checking context inconsistency. To fill the gap between training samples and test samples, Bai et al. <cit.> developed an adversarial metric attack while presenting an early attempt to produce a metric-preserving network, thereby protecting metrics from adversarial attacks. To defend the model against the attack of the adversarial samples, although it is simple and effective to use the adversarial samples directly to train the model, it does not maximize the robustness of the model to the adversarial samples. In this paper, we elaborately devise an adversarial self-attack and defense approach that enables the model to defend against the impact of the diversity of person identity features on matching performance. Unlike the existing methods for generating adversarial samples, the proposed method replaces the role of the adversarial samples by activating the adversarial perturbations contained in the training samples. The proposed method integrates adversarial attack and defense within a single framework.
§ PROPOSED METHOD
§.§ Overview
The framework proposed in this paper is mainly composed of the adversarial self-attack module (ASAM), adversarial defense module (ADM) and feature representation module under spatial-temporal information guidance (FRM-STIG), as shown in Fig. <ref>. The ASAM is mainly used to activate perturbations in the training samples and achieve the re-ID model's adversarial training. The ADM extracts discrimination features from the sample in which the perturbations are activated. The FRM-STIG extracts the information carried in the sequence and uses them to enhance the effect of features related to motion information. To comprehensively use the information carried in the video sequences, the FRM-STIG integrates visual features with spatial-temporal information to accurately describe a person.
§.§ Adversarial Self-Attack Module
The ASAM is designed to enable the training samples to replace the role of the adversarial samples. It is implemented in a single ResNet-50 framework. The ASAM module contains the Conv Layer, Intra-Modality Self-Attention (IMSA) Layer, Feature Encoder E_att, Global Average Pooling (GAP) Layer, and Batch Normalization (BN) Layer. The Conv Layer here refers to the first convolution layer of ResNet-50. The E_att is composed of the last four layers of ResNet-50. The IMSA is used to highlight the role of the perturbations in the feature maps output by the Conv Layer. Guided by the IMSA, this Conv Layer activates perturbation information in the training samples so that the re-ID model can no longer maintain its original performance. Compared with existing methods, ASAM does not need to generate new adversarial samples and only uses the original ones to achieve regular and adversarial training for the re-ID model. We denote the video sequences of the same identity in the two modalities as V= { V_t∈ℝ^H × W} _t = 1^T and I = { I_t∈ℝ^H × W} _t = 1^T, where H and W represent the height and width of a single video frame, V and I represent a sequence of person video frames in the visible and infrared modality, respectively. T is the total number of frames in a sequence, and t is the index of the t-th frame. The results obtained by inputting the video sequences V and I into the Conv Layer can be expressed as:
F_t,c^V = E^V( V_t), F_t,c^I = E^I( I_t) (t=1,2, ⋯, T)
,
where F_t,c^V and F_t,c^I denote the features output by the Conv Layer. E^V and E^I are the encoders consisting of the first convolution layer of ResNet-50, and their parameters are not shared. The encoders E^I and E^V are respectively used to extract the shallow features of visible and infrared person images.
To highlight the role of the perturbations carried in F_t, c^V and F_t, c^I, we first send F_t, c^V and F_t, c^I to the IMSA layer, whose structure is shown in Fig. <ref> and the output results can be expressed as:
F̂_t^V = IMSA( F_t,c^V), F̂_t^I = IMSA( F_t,c^I)
.
To activate the perturbations in F_t,c^V and F_t,c^I such that they can replace the generation of adversarial examples, we send F̂_t^V and F̂_t^I to E_att. After the perturbation information is activated, the features output by E_att, followed by the GAP and BN layers, should be misclassified by the pre-trained person identity classifier W_def of the ADM.
To this end, we use the following identity loss function to optimize E^V, E^I and E_att:
ℓ_cov_id=-2/n_b(∑_i=1^n_b/2 q(log( W_def( f_i^V))+log( W_def( f_i^I)))) ,
where W_def is a pre-trained person identity classifier, q=(1/M,1/M, … ,1/M)^T, M is the total number of person identities in the training set, n_b is the number of video sequences in a batch, and
f_i^l= BN(GAP( E_att(F̂_1, i^l, F̂_2,i^l, ⋯, F̂_T,i^l)))
,
where F̂_t,i^l(t = 1, 2 ⋯, T; l = V, I; i = 1, 2, ⋯, n_b/2) is the feature map of the t-th frame of the i-th sequence in the modality l output by IMSA.
Minimizing Eq. (3) would activate the perturbations in the person image. In this paper, the perturbations are regarded as adversarial attack information. The activation helps improve the defense network's immunity to such disturbances. In order to make the re-ID model robust to the diversity of pedestrian appearance features, f_i^l is expected not only to carry out the adversarial attack but also to remain related to the person identity. E^V, E^I, E_att are further updated by:
ℓ_att_id= - 2/n_b(∑_i = 1^n_b/2 q_i (log ( W_att( f_i^V))+ log ( W_att( f_i^I))))
,
where q_i is a one-hot vector representing the identity of f_i^V and f_i^I. W_att is the person identity classifier only used in ASAM.
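To make the two ASAM objectives concrete, the snippet below gives a minimal PyTorch-style sketch of Eqs. (3) and (5). The function name, the assumption that W_def and W_att are linear classification layers, and the use of log-softmax for the log-probabilities are illustrative choices rather than details taken from the released code.

```python
import torch
import torch.nn.functional as F

def asam_losses(f_v, f_i, labels, W_def, W_att, num_ids):
    """Sketch of the self-attack losses of Eqs. (3) and (5).

    f_v, f_i : (n_b/2, D) pooled features of the attack branch (visible / infrared).
    labels   : (n_b/2,) ground-truth identity indices.
    W_def    : frozen identity classifier of the defense branch.
    W_att    : identity classifier used only inside ASAM.
    """
    # Eq. (3): push the frozen defense classifier towards the uniform
    # target q = (1/M, ..., 1/M), i.e. activate identity-confusing perturbations.
    uniform = torch.full((f_v.size(0), num_ids), 1.0 / num_ids, device=f_v.device)
    log_p_v = F.log_softmax(W_def(f_v), dim=1)
    log_p_i = F.log_softmax(W_def(f_i), dim=1)
    loss_cov = -((uniform * log_p_v).sum(1) + (uniform * log_p_i).sum(1)).mean()

    # Eq. (5): the same features must still be classified correctly by W_att,
    # so the activated perturbations remain related to the person identity.
    loss_att = F.cross_entropy(W_att(f_v), labels) + F.cross_entropy(W_att(f_i), labels)
    return loss_cov, loss_att
```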
§.§ Adversarial Defense Module
ASAM is to activate the perturbations in the training samples and replace the role of the adversarial samples in the adversarial training to improve the robustness of the defense network E_def against the perturbations. To make E_def more immune to attack from the perturbations, an Adversarial Defense Module (ADM) is designed. ADM is mainly composed of defense network E_def, cross-modality cross-attention (CMCA) layer, GAP and BN layers, as shown in Fig. <ref>. The main task of the ADM is to
endow E_def with strong defense ability against the perturbations. In cross-modality person re-ID, modal-invariant features play a positive role in promoting the matching accuracy of person identities. Therefore, the features extracted by E_def contain rich information on different modalities, which would be helpful to defend against attacks from feature perturbation. A CMCA layer is embedded in the ADM, as shown in Fig. <ref>.
As shown in Fig. <ref>, there are two CMCA layers in the ADM, one embedded after the third convolution layer of E_def, and the other embedded after the last convolution layer of E_def. E_def3 is composed of the first three convolution layers of E_def as an encoder. After the perturbations are activated by the ASAM, the feature maps F_t,c^V and F_t,c^I of the t-th frame are sent to the encoders E_def3 and E_def, and the results are:
F_t,d3^V= E_def3( F_t,c^V), F_t,d3^I= E_def3( F_t,c^I)
F_t,d^V= E_def( F_t,c^V), F_t,d^I= E_def( F_t,c^I) .
After F_t, d3^V, F_t, d3^I, F_t, d^V and F_t, d^I are input into CMCA layer, the results can be expressed as:
F̅_t,d3^V = ConLa_4 (CMCA( F_t, d3^V, F_t, d3^I))
F̅_t,d3^I = ConLa_4 (CMCA( F_t, d3^I, F_t, d3^V))
F̅_t,d^V = CMCA( F_t, d^V, F_t, d^I)
F̅_t,d^I = CMCA ( F_t, d^I, F_t, d^V)
,
where ConLa_4 denotes the last convolution layer of E_def.
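The exact structure of the CMCA layer is specified in a figure that is not reproduced here. As a rough illustration of the idea, the following PyTorch-style sketch implements a standard cross-attention in which the query comes from the target modality and the key/value come from the other modality, so that information shared by the two streams is emphasized; the class name, reduction factor, and residual connection are assumptions rather than the paper's actual design.

```python
import torch
import torch.nn as nn

class CrossModalCrossAttention(nn.Module):
    """Illustrative cross-attention between two modality feature maps."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, y):                            # x: target modality, y: the other modality
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (b, hw, c//r)
        k = self.key(y).flatten(2)                      # (b, c//r, hw)
        v = self.value(y).flatten(2).transpose(1, 2)    # (b, hw, c)
        attn = self.softmax(q @ k / (q.size(-1) ** 0.5))  # (b, hw, hw) cross-modal affinities
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                  # residual keeps modality-specific cues
```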
The common information can be extracted by embedding the CMCA layer in E_def. The first CMCA layer enables E_def to extract discrimination feature maps with the common information on a shallow convolution layer. The second CMCA layer is used to ensure that the feature maps extracted by E_def contain common information for identity matching. To integrate the complementary information existing in F̅_t,d3^l and F̅_t,d^l (l=V,I) and realize the accurate description of person appearance features, we fuse F̅_t,d3^l and F̅_t,d^l (l=V,I), respectively, and the fused results are sent to the GAP and BN layers.
The feature vectors obtained are:
f_d^V = BN(GAP((F̅_1,d3^V + F̅_1,d^V)/2, ⋯, (F̅_T,d3^V + F̅_T,d^V)/2))
f_d^I = BN(GAP((F̅_1,d3^I + F̅_1,d^I)/2, ⋯ ,(F̅_T,d3^I + F̅_T,d^I)/2))
.
To ensure that f_d^V and f_d^I have strong discrimination, we use the identity loss to optimize E_def:
ℓ_def_id =- 2/n_b(∑_i = 1^n_b/2 q_i(log (W_def(f_d,i^V))+ log ( W_def( f_d,i^I)))) ,
where f_d, i^V and f_d,i^I represent the features of the i-th sequence in the visible and infrared modality, respectively. The triplet loss is used to solve the hard samples problem:
ℓ_def_tri= 1/n_b∑_i = 1^n_b[ f_d,i^a - f_d,i^p_2^2 - f_d,i^a - f_d,i^n_2^2 + α] _ + ,
where [∙]_+=max{0, ∙}, f_d,i^a and f_d,i^p represent the features of the i-th anchor sample sequence and the hard positive sample sequence with the same identity in a mini-batch. f_d,i^n is a hard negative sample sequence with different identity from f_d,i^a. α denotes the margin (>0, empirically set to 0.3 in this work). The features f_d,i^a, f_d,i^p and f_d,i^n are generated by Eq. (8).
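Eq. (10) is the familiar batch-hard triplet loss. The sketch below shows one common way to compute it for a mini-batch of pooled sequence features; the function name and the use of Euclidean distance follow standard practice and are not taken from the released code.

```python
import torch

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """Batch-hard triplet loss of Eq. (10) (illustrative sketch).

    For every anchor, the hardest positive (same identity, largest distance)
    and hardest negative (different identity, smallest distance) in the
    mini-batch are selected before applying the margin.
    """
    dist = torch.cdist(features, features, p=2)            # (N, N) pairwise distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)    # (N, N) bool mask
    # hardest positive: largest distance among same-identity pairs
    d_ap = (dist * same_id.float()).max(dim=1).values
    # hardest negative: smallest distance among different-identity pairs
    d_an = dist.masked_fill(same_id, float('inf')).min(dim=1).values
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```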
§.§ Feature Representation under Spatial-temporal Information Guidance
The video sequence of a person contains a lot of motion information, which is not affected by the modality changes. In a single pedestrian image, there is a latent spatial relation between different regions of the pedestrian's body, and different pedestrians usually show different relations. In order to guide the discrimination features learning, a spatial-temporal information mining approach is proposed, as shown in Fig. <ref>.
The proposed method is mainly composed of two parts: 1) feature highlighting guided by spatial relation (FH-G-SR) and 2) feature representation embedded by temporal information (FR-E-TI). To highlight the discrimination feature with spatial relation guidance, we first use PCB <cit.> to divide the feature map F_t, d^V and F_t, d^I into K different patches in space, which are converted into feature vectors. The feature vectors of the k-th patches of F_t,d^V and F_t,d^I are denoted as f_t, d^V, k and f_t, d^I, k (k = 1,2, ⋯, K). Based on the experience of PCB, K is set to 6 in this paper. As can be seen in Fig. <ref>, the video sequence features { f_1, d^l, k, ⋯, f_T, d^l,k} are sent into LSTM_mot to obtain the features embedded with motion information, expressed as {f̃_1, d^l,k, ⋯ , f̃_T, d^l,k}. Since the potential spatial relation between patches at the different positions is not involved, a sequence of different patches (of the same frame) is formed and input into LSTM_spa for spatial relationship mining between different patches. Considering that after the last frame passes through LSTM_mot, the obtained features integrate the information of all previous frames, we only form a sequence {f̃_T, d^l,1, ⋯ , f̃_T, d^l,K}(l = V, I) of all patch features of the T-th frame and send it to LSTM_spa.
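Before detailing the two parts, the sketch below illustrates how the two recurrent branches described above can be arranged: LSTM_mot runs along the T frames of each patch, and LSTM_spa runs along the K patch features of the last frame. The feature dimension, hidden sizes, and module name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialTemporalMining(nn.Module):
    """Sketch of the two recurrent branches used by FRM-STIG."""

    def __init__(self, dim=2048):
        super().__init__()
        self.lstm_mot = nn.LSTM(dim, dim, batch_first=True)   # temporal branch
        self.lstm_spa = nn.LSTM(dim, dim, batch_first=True)   # spatial branch

    def forward(self, patch_feats):            # (B, T, K, D) patch features f_{t,d}^{l,k}
        b, t, k, d = patch_feats.shape
        # temporal branch: one sequence of T frames per patch position
        mot_in = patch_feats.permute(0, 2, 1, 3).reshape(b * k, t, d)
        mot_out, _ = self.lstm_mot(mot_in)                     # motion-embedded features
        mot_out = mot_out.reshape(b, k, t, d).permute(0, 2, 1, 3)  # back to (B, T, K, D)
        # spatial branch: the K patch features of the last frame form one sequence
        spa_in = mot_out[:, -1]                                # (B, K, D)
        spa_out, _ = self.lstm_spa(spa_in)                     # spatial-relation features
        return mot_out, spa_out
```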
§.§.§ FH-G-SR
The result f̅_T, d^l, K obtained after feeding {f̃_T, d^l,1, ⋯ , f̃_T, d^l,K}(l = V, I) into LSTM_spa is the final spatial feature representation. To effectively utilize the spatial relations and the information carried by f_t,d^l,k to highlight discrimination features, we concatenate f̅_T, d^l, K and the original visual feature f_t, d^l, k:
f̂_t,d^l,k=concat( f_t,d^l,k, f̅_T,d^l,k)
.
As shown in Fig. <ref>, after f̂_t,d^l,k passes through a linear mapping (LM) layer, a ReLU activation function, another LM layer, and a Sigmoid activation function, the corresponding weight matrix for feature highlighting is obtained by:
A_t, d^l, k=Sigmoid(LM(ReLU(LM(f̂_t,d^l,k))))
.
With A_t, d^l, k, the feature f_t, d^l, k highlighted by spatial relation guidance is:
ḟ_t, d^l, k= f_t, d^l, k⊙ A_t, d^l, k.
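A compact sketch of the highlighting mechanism of Eqs. (11)-(13) is given below; the hidden width of the two linear mapping (LM) layers is an assumption, since the text does not specify it.

```python
import torch
import torch.nn as nn

class SpatialRelationGate(nn.Module):
    """Feature highlighting of Eqs. (11)-(13) (sketch, dimensions illustrative)."""

    def __init__(self, dim=2048, hidden=512):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, hidden),   # first LM layer
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),       # second LM layer
            nn.Sigmoid(),
        )

    def forward(self, f_patch, f_spatial):
        # Eq. (11): concatenate the visual feature and the spatial-relation feature
        fused = torch.cat([f_patch, f_spatial], dim=-1)
        # Eq. (12): attention weights A; Eq. (13): element-wise highlighting
        return f_patch * self.gate(fused)
```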
§.§.§ FR-E-TI
Although ḟ_t, d^l, k makes use of the spatial relation between patches, it is only the visual feature of a person, and it does not integrate the motion information contained in the video sequence. Therefore, we embed the features {f̃_1, d^l,k, ⋯, f̃_T, d^l,k} carrying motion information into the enhanced features in Eq. (13):
f̈_t, d^l,k = ḟ_t, d^l, k + f̃_t, d^l,k.
GAP is used to achieve the fusion of T frame features {f̈_1, d^l, k, ⋯, f̈_T, d^l, k}, and the feature representation of the k-th patch of modality l can be obtained:
f̈_d^l,k = GAP(f̈_1, d^l,k, ⋯, f̈_T, d^l,k)
.
Finally, we concatenate the features of all patches together according to their spatial positions on the image to form a complete person representation f̈_d^l (l = V, I). In order to ensure its discrimination, the cross entropy loss is deployed:
ℓ_p_id= - 2/n_b∑_i = 1^n_b/2 q_ilog( W_se( f̈_d, i^V))+ q_ilog( W_se( f̈_d, i^I))
,
where W_se is the identity classifier of f̈_d, i^l which is generated via Eq. (15). f̈_d, i^l denotes the sequence feature of the i-th video sequence in one batch.
The overall loss function in the proposed approach is:
ℓ_total = ℓ_cov_id+ λ _1 ℓ_att_id+ λ _2 (ℓ_def_id+ ℓ_def_tri) +λ _3 ℓ_p_id,
where λ _1, λ _2, λ _3 are hyper-parameters, which are used to adjust the role of the corresponding loss items. The processes are summarized in Algorithm <ref>.
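For completeness, the overall objective of Eq. (17) can be assembled as in the short sketch below; the default weights reflect the values adopted in the parameter analysis (λ_1 = 1, λ_2 = 0.5, λ_3 = 0.5), and the function is only illustrative.

```python
def total_loss(l_cov, l_att, l_def_id, l_def_tri, l_p,
               lam1=1.0, lam2=0.5, lam3=0.5):
    """Overall objective of Eq. (17); defaults follow the parameter analysis."""
    return l_cov + lam1 * l_att + lam2 * (l_def_id + l_def_tri) + lam3 * l_p
```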
§ EXPERIMENTS
§.§ Experimental Settings
The dataset used in this experiment is a large-scale cross-modal video-based person re-ID dataset—VCM proposed by Lin et al. <cit.>, which is the first and only one currently constructed for the visible-infrared video person re-ID task. The dataset is recorded by 12 non-overlapping HD cameras and consists of 251,452 visible images and 211,807 infrared images with a resolution of 3,840 × 2,170. These images are further divided into 11,785 sequences in the visible modality and 10,078 sequences in the infrared modality. The dataset contains 927 identities, where 232,496 images of 500 identities involving a total of 11,061 sequences are used for training, and the remaining 230,763 images of 427 identities involving a total of 10,802 sequences are used for testing.
All experiments are carried out on a PC equipped with an NVIDIA TESLA A100 GPU in the Pytorch 1.10 framework <cit.>. In the training phase, all input images are adjusted to 288 × 144. The batch size is set to 32 (i.e., 32 sequences are processed in a mini-batch). In each epoch, 16 sequences of each modality enter the model for training (containing eight identities, each containing two sequences). Each sequence consists of 6 frames, a total of 192 frames. The model is trained for 200 epochs (each of which contains 268 iterations). The first 150 epochs are used to train E^V, E^I, E_att, E_lstm, W_att and W_def. For the remaining 50 epochs, we fix W_def and fine-tune E_def to further enhance the network's defense capability. The entire training is realized by using SGD optimizer with the momentum of 0.9, weight decay of 5 × 10^-4 and learning rate of 0.12. A warm-up strategy <cit.> is applied to tune the learning rate linearly. Cumulative Matching Curve (CMC) <cit.> and mean Average Precision (mAP) <cit.> are used as the evaluation metrics for model performance.
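A minimal sketch of the optimizer set-up described above is given below. The SGD hyper-parameters follow the values quoted in this section, while the length of the linear warm-up is an assumption, since the text only states that a linear warm-up strategy is applied.

```python
import torch

def make_optimizer(params, base_lr=0.12, warmup_epochs=10):
    """SGD with a linear learning-rate warm-up (warm-up length assumed)."""
    optimizer = torch.optim.SGD(params, lr=base_lr, momentum=0.9, weight_decay=5e-4)

    def lr_lambda(epoch):
        # ramp linearly from base_lr / warmup_epochs up to base_lr, then stay flat
        return min(1.0, (epoch + 1) / warmup_epochs)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```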
§.§ Ablation Study
The proposed method consists of an adversarial self-attack module (ASAM), adversarial defense module (ADM), and feature representation module under spatial-temporal information guidance (FRM-STIG). E^V, E^I and E_def trained by cross-entropy loss and triplet loss are regarded as “Baseline”. The “Baseline” is pre-trained on the ImageNet <cit.> before training on the dataset VCM. We denote the method of adding ASAM to the baseline as “Baseline+ASAM”, similarly, we obtain “Baseline+ADM”, “Baseline+FRM-STIG” and “Baseline+ASAM+ADM”. When FH-G-SR is removed from FRM-STIG, and the remaining content is added to “Baseline+ASAM+ADM”. Such method is noted “Baseline+ASAM+ADM+FR-E-TI”. The complete proposed model is marked “Baseline+ASAM+ADM +FRM-STIG”. Furthermore, in order to verify the contribution of LSTM_spa, LSTM_spa is removed from “Baseline+ASAM+ADM+FRM-STIG” and the corresponding model is denoted as “Baseline+ASAM+ADM+FRM-STIG*”. The results of ablation experiment are reported in Table <ref>.
Effectiveness of ASAM. To verify the effect of the ASAM, we add the ASAM to the “Baseline”, and obtain the model “Baseline+ASAM”. One can see in Table <ref> that, on the “Infrared to Visible” task, Rank-1 and mAP achieved by “Baseline+ASAM” decrease by 14.4% and 12.57%, respectively. For the task of “Visible to Infrared”, Rank-1 and mAP are reduced by 15% and 14.96%, respectively. These drops indicate that the perturbations activated in the shallow features indeed attack the re-ID model.
Effectiveness of ADM. In Table <ref>, Rank-1 and mAP accuracy of “Baseline+ADM” on the task of querying the visible sequence from the infrared sequence (i.e., Infrared to Visible) is 58.92% and 44.50%, respectively. Compared with that of “Baseline”, the performance is improved by 0.87% and 1.27%, respectively. On the “Visible to Infrared” task, the accuracy of Rank-1 and mAP reaches 62.58% and 46.55%, respectively. Compared with that of “Baseline”, the performance of “Baseline+ADM” is improved by 1.57% and 0.96%. It implies that the ADM still has a positive effect on the model performance improvement when the ASAM is absent. It can also be observed that when ASAM and ADM are added to “Baseline” together, the performance of “Baseline+ASAM+ADM” is improved. Compared with “Baseline+ADM”, Rank-1 and mAP of “Baseline+ASAM+ADM” on the “Infrared to Visible” task are increased from 58.92% and 44.50% to 60.27% and 46.09%. On the “Visible to Infrared” task, the performance of Rank-1 and mAP are improved from 62.58% and 46.55% to 63.01% and 48.05%. This demonstrates the effectiveness of the adversarial training.
Effectiveness of FR-E-TI. FH-G-SR is removed from FRM-STIG to evaluate the validity of FR-E-TI with temporal information embedding. It can be seen in Table <ref> that when the FR-E-TI is added to “Baseline+ASAM+ADM”, Rank-1 and mAP of the model “Baseline+ASAM+ADM+FR-E-TI” on “Infrared to Visible” (“Visible to Infrared”) task are improved from 60.27% and 46.09% (63.01% and 48.05%) to 63.92% and 48.56% (66.42% and 50.40%), respectively. The improvement verifies the validity of FR-E-TI when FH-G-SR is absent.
Effectiveness of FRM-STIG.
It can be seen in Table <ref> that with FRM-STIG, the performance of “Baseline+FRM-STIG” on the “Infrared to Visible” (“Visible to Infrared”) task, the accuracy of Rank-1 and mAP increases from 58.05% and 43.23% (61.13% and 44.80%) to 61.01% and 45.59% (63.74% and 46.34%) respectively. For the same task, after FRM-STIG is added to “Baseline+ASAM+ADM”, the accuracy of the model “Baseline+ASAM+ADM+FRM-STIG” on Rank-1 and mAP increases from 60.27% and 46.09% (63.01% and 48.05%) to 65.31% and 49.49% (67.66% and 51.76%) respectively. It verifies the contribution of FRM-STIG. Moreover, compared with the performance of “Baseline+ASAM+ADM+FR-E-TI”, Rank-1 and mAP of “Baseline+ASAM+ADM+FRM-STIG” on “Visible to Infrared” (“Infrared to Visible”) are improved from 63.92% and 48.56% (66.42% and 50.40%) to 65.31% and 49.49% (67.66% and 51.76%). It demonstrates the validity of FH-G-SR. Further, Rank-1 and mAP of “B+ASAM+ADM+FRM-STIG*” on “Visible to Infrared” (“Infrared to Visible”) decrease by 1.24% and 0.7% (1.88% and 1.19%) compared with those of “Baseline+ASAM+ADM+FRM-STIG”. It verifies the contribution of LSTM_spa.
The visual effect of different settings in the ablation experiment on the retrieval results is shown in Fig. <ref>. From the results shown in Fig. <ref>, one can
see that the retrieval accuracy is improved when the ADM is added to the “Baseline”. It is found that when ASAM and ADM are added to the “Baseline” together, the model
performance is visually improved. It indicates that the proposed adversarial attack and defense strategies are effective. Besides, when FRM-STIG is added to “Baseline+ASAM+ADM”, the matching accuracy of sequences is further improved. Fig. <ref> shows the areas focused on by the “Baseline” and the proposed method, where the warmer the color is, the more attention the area receives. Those results indicate that the proposed method can better extract discriminative features from the person's body area than the Baseline.
§.§ Comparison with State-of-the-Arts
In order to verify the superiority of the proposed method over the existing methods, it is compared with LbA <cit.>, MPANet <cit.>, DDAG <cit.>, VSD <cit.>, CAJL <cit.>, MITML <cit.>, where the first five methods are designed for image-based visible-infrared person re-ID and the last one is for visible-infrared video person re-ID. Since the first five methods are proposed for the single-frame visible-infrared person image matching task, we remove FRM-STIG for comparison and denote such a method as “Proposed*” in Table <ref>. Rank-1 and mAP of “Proposed*” are 60.27% and 46.09% (63.01% and 48.05%) on “Infrared to Visible” (“Visible to Infrared”), which are 3.68% and 4.60% (2.88% and 5.24%) higher than those of the sub-optimal image-based method CAJL. Compared with the latest video person re-ID method MITML, Rank-1 and mAP of the proposed method are increased from 63.74% and 45.31% (64.54% and 47.69%) to 65.31% and 49.49% (67.66% and 51.76%) on the “Infrared to Visible” (“Visible to Infrared”) task. It shows that the proposed method outperforms all compared ones.
§.§ Parameter Analysis
In Eq. (17), three hyper-parameters λ_1, λ_2 and λ_3 need to be set. In this section, we discuss the influence of one hyper-parameter by fixing
the other two parameters. The performance of the proposed method with different hyper-parameters is shown in Fig. <ref>.
The influence of λ_1.
Fig. <ref> (a) and (b) show the effect of λ _1 when it changes in [0.1,9]. On the task of “Infrared to Visible”, the proposed method achieves the best performance when λ _1 = 1, and the performance degenerates when λ _1>1. On the task of “Visible to Infrared”, the method in this paper shows insensitivity to the change of λ _1 value. Therefore, we set λ _1 to 1 in our method.
The influence of λ_2.
Fig. <ref> (c) and (d) show the changes of the model performance when λ _2 changes from 0.01 to 4. On the “Infrared to Visible” task, when λ _2 = 0.5, the proposed method achieves the highest recognition accuracy. On the “Visible to Infrared” task, when λ _2 = 0.1, the proposed method achieves the highest recognition accuracy, and when λ _2 > 0.5, the performance shows a significant downward trend on both tasks. In this work, we set λ _2 to 0.5 for both tasks.
The influence of λ_3.
Fig. <ref> (e) and (f) show the changes in performance of the proposed algorithm on “Infrared to Visible” and “Visible to Infrared” tasks, when λ _3 varies between 0.1 and 5. As indicated in Fig. <ref> (e) and (f), for both recognition tasks, the recognition performance of the proposed method reaches its peak when the value of λ_3 reaches 0.5. Therefore, we set λ_3 to 0.5 throughout the experiments.
§ CONCLUSION
An adversarial self-attack defense and a feature representation module under spatial-temporal information guidance are proposed to address the impact of the diversity of pedestrian appearance features on pedestrian identity matching. The method consists of the ASAM, the ADM, and the FRM-STIG. Through the cooperative training of the ASAM and the ADM, the defense network's defense capability against identity-related perturbations has been improved. The proposed method is robust to modality differences and feature changes caused by other factors. In addition, FRM-STIG utilizes each local feature effectively through a spatial relationship-guided highlight mechanism. The experimental results show that the proposed method outperforms the compared SOTA methods.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants 62276120, 61966021 and 62161015.
§ DECLARATIONS
Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ DATA AVAILABILITY STATEMENT
The datasets for this study can be found in the VCM dataset <https://github.com/VCM-project233/MITML>.
|
http://arxiv.org/abs/2307.04911v2 | 20230710213002 | CLASSY VII Lyα Profiles: The Structure and Kinematics of Neutral Gas and Implications for LyC Escape in Reionization-Era Analogs | [
"Weida Hu",
"Crystal L. Martin",
"Max Gronke",
"Simon Gazagnes",
"Matthew Hayes",
"John Chisholm",
"Timothy Heckman",
"Matilde Mingozzi",
"Namrata Roy",
"Peter Senchyna",
"Xinfeng Xu",
"Danielle A. Berg",
"Bethan L. James",
"Daniel P. Stark",
"Karla Z. Arellano-Córdova",
"Alaina Henry",
"Anne E. Jaskot",
"Nimisha Kumari",
"Kaelee S. Parker",
"Claudia Scarlata",
"Aida Wofford",
"Ricardo O. Amorín",
"Naunet Leonhardes-Barboza",
"Jarle Brinchmann",
"Cody Carr"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Weida Hu (ORCID: 0000-0003-3424-3230), Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
Crystal L. Martin (ORCID: 0000-0001-9189-7818), Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
Max Gronke (ORCID: 0000-0003-2491-060X), Max-Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany
Simon Gazagnes (ORCID: 0000-0002-5659-4974), Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA
Matthew Hayes (ORCID: 0000-0001-8587-218X), Stockholm University, Department of Astronomy and Oskar Klein Centre for Cosmoparticle Physics, AlbaNova University Centre, SE-10691, Stockholm, Sweden
John Chisholm (ORCID: 0000-0002-0302-2577), Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA
Timothy Heckman (ORCID: 0000-0003-1127-7497), Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Matilde Mingozzi (ORCID: 0000-0003-2589-762X), Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Namrata Roy (ORCID: 0000-0002-4430-8846), Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Peter Senchyna (ORCID: 0000-0002-9132-6561), Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101, USA
Xinfeng Xu (ORCID: 0000-0002-9217-7051), Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Danielle A. Berg (ORCID: 0000-0002-4153-053X), Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA
Bethan L. James (ORCID: 0000-0003-4372-2006), AURA for ESA, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Daniel P. Stark (ORCID: 0000-0001-6106-5172), Steward Observatory, The University of Arizona, 933 N Cherry Ave, Tucson, AZ, 85721, USA
Karla Z. Arellano-Córdova (ORCID: 0000-0002-2644-3518), Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA
Alaina Henry (ORCID: 0000-0002-6586-4446), Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA; Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Anne E. Jaskot (ORCID: 0000-0002-6790-5125), Department of Astronomy, Williams College, Williamstown, MA 01267, USA
Nimisha Kumari (ORCID: 0000-0002-5320-2568), AURA for ESA, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Kaelee S. Parker (ORCID: 0000-0002-8809-4608), Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA
Claudia Scarlata (ORCID: 0000-0002-9136-8876), Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA
Aida Wofford (ORCID: 0000-0001-8289-3428), Instituto de Astronomía, Universidad Nacional Autónoma de México, Unidad Académica en Ensenada, Km 103 Carr. Tijuana-Ensenada, Ensenada 22860, Mexico
Ricardo O. Amorín (ORCID: 0000-0001-5758-1000), Instituto de Investigación Multidisciplinar en Ciencia y Tecnología, Universidad de La Serena, Raul Bitrán 1305, La Serena 2204000, Chile; Departamento de Astronomía, Universidad de La Serena, Av. Juan Cisternas 1200 Norte, La Serena 1720236, Chile
Naunet Leonhardes-Barboza, Wellesley College, 106 Central Street, Wellesley, MA 02481, USA
Jarle Brinchmann (ORCID: 0000-0003-4359-8797), Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal
Cody Carr, Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA
Lyman-alpha line profiles are a powerful probe of ISM structure, outflow speed, and Lyman continuum escape fraction.
In this paper, we present the Lyα line profiles of the COS Legacy Archive Spectroscopic SurveY, a sample rich in
spectroscopic analogs of reionization-era galaxies. A large fraction of the spectra show a complex Lyα profile, consisting
of a double-peaked emission profile in the bottom of a damped absorption trough. Such profiles reveal an
inhomogeneous interstellar medium (ISM). We successfully fit the damped Lyα absorption (DLA) and the Lyα emission
profiles separately, but with complementary covering factors, a surprising result because this approach requires no
exchange between high-N_HI and low-N_HI paths. The combined distribution of column densities
is qualitatively similar to the bimodal distributions observed in numerical simulations. We find an inverse relation
between peak separation and the [O iii]/[O ii] flux ratio, confirming that the covering fraction
of Lyman-continuum-thin sightlines increases as the peak separation decreases. We combine measurements of
peak separation and red peak asymmetry in a diagnostic diagram which identifies six Lyman continuum leakers in
the CLASSY sample. We find a strong correlation between the Lyα trough velocity and the outflow velocity measured from
interstellar absorption lines. We argue that greater vignetting of the blueshifted peak, relative to the
redshifted peak, is the source of the well-known discrepancy between shell-model parameters and directly measured
outflow properties. The CLASSY sample illustrates how scattering of Lyα photons outside the spectroscopic aperture
reshapes Lyα profiles as the distances to these compact starbursts span a large range.
§ INTRODUCTION
The Epoch of Reionization (EoR) marks a period in the history of the Universe when the emergence of galaxies
ionized most of the neutral hydrogen in the intergalactic medium (IGM). Observations suggest that the first
ionized pockets in the IGM grew around the largest overdensities of galaxies <cit.>. The massive stars in those galaxies are likely the source of the ionizing photons,
the Lyman continuum (LyC) at wavelengths λ<912 Å <cit.>.
How this ionizing radiation leaks out of the dense structures where early galaxies form, however, is not well
understood. A small column density of neutral hydrogen, N_HI≈ 1.6 × 10^17 cm^-2, will absorb a
LyC photon. Exactly how feedback from massive stars opens pathways for LyC escape <cit.>
sets the timeline for cosmic reionization <cit.>.
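The quoted threshold is simply the column density at which a sightline becomes optically thick at the Lyman edge. Taking the standard hydrogen photoionization cross section there, σ_912 ≈ 6.3 × 10^-18 cm^2 (a back-of-the-envelope relation, not a result of the present analysis),
τ_LyC ≈ σ_912 N_HI ≈ (6.3 × 10^-18 cm^2) × (1.6 × 10^17 cm^-2) ≈ 1 ,
so columns above N_HI ≈ 1.6 × 10^17 cm^-2 block essentially all LyC photons along that path.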
Direct observations of the escaping LyC photons are not possible during the EoR because of attenuation by
the IGM <cit.>, so indirect tracers of LyC escape and outflows are needed.
Lyman-α is the most commonly detected emission line from high-redshift galaxies <cit.>.
The channels through which Lyα photons emerge from galaxies appear to be tightly related to the pathways
of LyC escape <cit.>
because the origins of Lyα photons, H ii regions, are illuminated by the LyC photons arising from central massive stars.
Even low column densities of neutral hydrogen in these channels
scatter Lyα photons many times, altering their direction and frequency. Their random walk redistributes
Lyα photon flux from the line core into the line wings, and this reshaping of the line profile imprints information
about the outflow velocity, column density, and ISM structure on the emergent line profile
<cit.>.
In the absence of absorption by dust, all the Lyα photons eventually escape from the galaxy, and radiative
transfer calculations demonstrate some general properties of the line profiles. For example,
analytic solutions for static slabs and spheres yield emerging spectra with symmetric redshifted and
blueshifted peaks <cit.>. Bulk motion requires Monte Carlo techniques,
and these calculations demonstrate that outflowing gas produces an asymmetric profile which has a
stronger redshifted component regardless of the outflow geometry and structure <cit.>.
The most commonly applied radiative transfer model, the homogeneous shell model, assumes an expanding,
spherical shell of neutral hydrogen <cit.>. Over a wide range of outflow properties, the emergent
line profile has a P Cygni shape characterized by a redshifted emission line with a broad red wing plus a blueshifted
absorption trough. For a very low H i column density, some Lyα emission from the near side of the thin shell
is transmitted, producing blueshifted emission instead of absorption. In contrast, a very high column density shell
traps a Lyα photon until it is eventually absorbed by a dust grain, and the emergent line profile becomes
that of a damped Lyα absorber (DLA): a completely dark absorption trough with very broad wings. The shell model
does a good job of reproducing the diversity of commonly observed profile shapes
<cit.>. Statistically successful fits, however,
do not guarantee accurate recovery of outflow properties. The structure of the shell model is much simpler than
actual ISM and multi-phase outflows <cit.>.
Low-ionization state (LIS) absorption lines in galaxy spectra unambiguously detect outflowing gas and have provided insight
into how outflow properties vary with galaxy properties <cit.>.
The outflow speeds derived from the blueshifts of these absorption lines offer an opportunity to test the shell model
velocities, and the results reveal significant discrepancies both at high-redshift <cit.> and among nearby Green
Pea galaxies <cit.>.
Three major discrepancies are reported in those studies: (1) the best-fit redshifts are larger by 10–250 km s^-1 than the spectroscopic redshifts; (2) the best-fit outflow velocities of expanding shell are lower than the outflow velocities derived by LIS lines; (3) the intrinsic line widths of shell model are broader than those of Balmer lines.
<cit.> proposed that those discrepancies might be caused by the degeneracies between model parameters, but no explanation for these puzzles based on observations has been found.
We also draw attention to another limitation of the shell model. A large fraction of Green Peas and higher-redshift
star-forming galaxies show a Lyα emission line in the bottom of a DLA system <cit.>.
These profiles cannot be produced by a homogeneous shell model. The low column density shells that produce double-peaked
profiles contradict the presence of damped absorption which requires very high column density.
Even larger peak separations are predicted by a clumpy shell because the fitted shell expansion speed lies between the outflow velocities of the neutral clouds and the hot interclump medium <cit.>.
Comparing physical properties derived from shell modeling to those measured from other spectral lines can
therefore provide new insight about the structure of the multi-phase gas. Because these properties determine the LyC
escape fraction from galaxies, there is an urgent need to understand the puzzling properties of Lyα profiles
in a sample of EoR analogs. Placing the unexpected profile shapes, i.e. the double-peaked emission lines in DLA systems,
in the broader context of the full diversity of observed Lyα profile shapes requires high-resolution, high-S/N
UV spectroscopy of EoR analogs, including, but not limited to, Green Pea galaxies. The James Webb Space Telescope (JWST)
observations reveal a diversity of galaxies in the EoR <cit.>,
spanning much wider ranges of galaxy properties than the local Green Peas.
In this paper, we analyze 45 Lyα line profiles obtained by the COS Legacy Archive Spectroscopy SurveY (CLASSY)
<cit.>. This UV-surface brightness selected sample includes the lowest redshift Green Pea galaxies,
local Lyman Break Galaxy Analogs (LBAs) <cit.>,
and the two local galaxies that are the nearest spectral match to the emission-line spectra of GN-z11
<cit.>. Thus the range in metallicity and ionizing continuum properties include
the extreme conditions that were common during galaxy assembly. We present a uniform analysis of the profiles.
Outflow properties have been determined from the blueshifted components of the LIS resonance lines <cit.>
and the excited fine-structure lines <cit.>. The results provide new insight into the clumpiness of the ISM,
as described by the relative covering fractions of high-N_HI and low-N_HI gas, yet also strongly suggest that the
discrepancies between shell model parameters and LIS absorption lines arise from aperture vignetting. Among CLASSY targets,
the physical size of COS aperture ranges from the scale of star clusters (∼ 100 pc) to galaxies (∼ 10 kpc).
The large variations in aperture losses make it possible to view individual profile shapes in a broader context.
This paper is organized as follows. In Sec. 2, we introduce the CLASSY sample of profiles,
describe how we remove the damped absorption and measure the properties of the high column density
neutral gas, and discuss the large variation in the amount of aperture vignetting across the sample. In Sec. 3, we use the Monte Carlo radiative transfer code tlac to fit shell models to the net Lyα emission-line profiles, investigating different choices for the continuum level (and hence the line equivalent width).
In Sec. 4, we discuss the H i column
density distribution in EoR analogs, the size scale of the holes leaking LyC radiation, and argue that
aperture vignetting biases shell model properties in the directions required to solve the discrepancies
with independently measured outflow properties.
Throughout this paper, we adopt a Flat ΛCDM cosmology with Ω_m=0.3, Ω_Λ=0.7, and H_0=70
km s^-1 Mpc^-1. We also adopt the Spearman rank method to quantify the correlation strengths r.
The data used in this paper is available via the CLASSY high-level science products (HLSP)
homepage[
Data will appear at <https://archive.stsci.edu/hlsp/classy> after acceptance by the ApJ. The data product
can be found here (<https://drive.google.com/drive/folders/1NCUyr1vQ10z4BZuGBqsBuIjL0dWJnmZ1?usp=sharing>) during the review
period.], including the best-fit DLA systems, the emission lines
after subtracting the DLA and continuum, and the best-fit shell model spectra.
§ SAMPLE OF PROFILES
Here we present high-S/N spectra for the 45 CLASSY targets. Each of these nearby galaxies has a compact,
far-UV bright star-forming region which was the target of the COS observation. The sample provides a diverse
set of local analogs of high-redshift galaxies, including both Green Pea galaxies and LBAs.
Physical conditions in the starburst range cover oxygen abundances from 12+log(O/H)∼ 7 to 8.8 and
electron densities from n_e ∼ 10 to 1120 cm^-3. The stellar masses and star formation rates of their host
galaxies sample the range log (M_⋆/M_⊙)∼ 6.2 to 10.1 and
log (SFR/M_⊙ yr^-1) ∼ -2 – 1.6, respectively <cit.>.
The raw spectroscopic data were reduced using the CALCOS pipeline (v3.3.10), including spectrum extraction,
wavelength calibration, and vignetting correction, and then coadded using a custom pipeline <cit.>.
The Galactic foreground extinction was corrected assuming a ratio of total-to-selective extinction
R_V=3.1 and a Milky Way (MW) extinction curve <cit.>.
Fig. <ref> shows an overview of the G130M and G160M spectra, ordered by redshift. The CLASSY spectra
easily resolve the damping wings of the broad absorption trough imprinted by H i absorption from the Milky Way.
A large fraction of the spectra show a second damped absorber at the redshift of the CLASSY galaxy. In the
lowest redshift galaxies, the blueshifted damping wing of the target is blended with the redshifted damping wing
of the Milky Way absorption.
The yellow waterfall across Fig. <ref> highlights the redshifted Lyα emission.
Surprisingly, Lyα emission is frequently detected in the bottom of a damped absorption trough. Profiles of this type cannot be produced by a uniform shell of neutral hydrogen.
In this paper, we adopt an approach that we have not seen used previously. We fit the damping
wing profile, including a non-unity covering factor. We then extract the net emission-line profile
relative to the damping trough, as others have done. The equivalent width of this net emission, however,
has been previously neglected. We address this in Sec. <ref> below, where we demonstrate that the best normalization
for the emission is the fraction of the stellar continuum not intercepted by the high column density neutral hydrogen.
We use physical models to define the continuum level near Lyα, allowing us to accurately
model the DLA system in Sec. <ref>.
CLASSY provides two models for the continuum (Senchyna et al. in preparation).
Both models assume the observed continuum can be reconstructed as a
linear combination of a set of single-age, single-metallicity stellar populations <cit.>,
and, thus, be fitted using the following relation:
F_obs (λ) = 10^-0.4E(B-V)k(λ) Σ_i X_i M_i(λ),
where F_obs (λ) is the observed spectrum, k(λ) is the attenuation law,
M_i(λ) is the spectrum of the ith single stellar population (SSP), and X_i is its coefficient.
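As an illustration of how the relation above can be solved in practice, the sketch below grids over E(B-V) and solves for non-negative SSP weights X_i with a least-squares step. The function and variable names are ours, not those of the CLASSY continuum-fitting pipeline, and this is a minimal sketch rather than the actual implementation.

```python
import numpy as np
from scipy.optimize import nnls

def fit_continuum(wave, f_obs, ssp_grid, k_lambda, ebv_grid):
    """Solve F_obs = 10^(-0.4 E(B-V) k(lam)) * sum_i X_i M_i(lam).
    Because E(B-V) multiplies every template, we grid over it and solve a
    non-negative least-squares problem for the weights X_i at each step.
    ssp_grid has shape (n_ssp, n_wave); all names are illustrative."""
    best = None
    for ebv in ebv_grid:
        attenuation = 10.0 ** (-0.4 * ebv * k_lambda)
        design = (ssp_grid * attenuation).T            # (n_wave, n_ssp)
        coeffs, resid = nnls(design, f_obs)            # non-negative X_i
        if best is None or resid < best[0]:
            best = (resid, ebv, coeffs)
    resid, ebv, coeffs = best
    model = 10.0 ** (-0.4 * ebv * k_lambda) * (coeffs @ ssp_grid)
    return ebv, coeffs, model
```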
The main difference between the two methods is the stellar population synthesis framework. The top panels of
Fig. <ref> illustrate each best-fit continuum. The red dashed line represents the continuum built from
STARBURST 99 synthesis models <cit.> and a <cit.> attenuation law, and the green dashed line uses
the latest version of the <cit.> model <cit.> and an
SMC extinction law <cit.>. These two continua both reproduce the prominent N v λ1240
stellar P-Cygni line well. The narrow dip visible at Lyα in both models is not physical (C. Leitherer, private
communication), and we interpolate over it. We fit the DLA profiles using both the continuum models and found
similar parameters. We adopt the first method for the analysis that follows because the STARBURST99 models
have the higher spectral resolution.
§.§ DLA fitting
Fig. <ref> illustrates the diversity of CLASSY Lyα profiles: pure DLA systems, Lyα emission in the
bottom of a damping trough (hereafter Abs+Em profile), P-Cygni-like profiles, and double-peaked emission.
A large fraction of CLASSY spectra (31/45) have a DLA system, and 20 out of these 31 galaxies
show double- or single-peaked emission lines. CLASSY spectra offer the high spectral resolution
and S/N ratio required to remove the contribution of the DLA system <cit.> and
extract the emission lines.
Fig. <ref> shows that geocoronal emission lines intersect the broad damping wings at low redshift
and at z ≈ 0.07. In addition, the LIS absorption lines from the MW and the target galaxy affect
the wings of the DLA systems but intersect only a few emission lines (see Sec. <ref>).
We mask these lines as indicated in the second row of Fig. <ref>.
To uniquely describe the MW DLA system, we adopt the Galactic H i column density derived from 21 cm emission
in the direction of the target <cit.>. The DLA line profile is described by a Voigt profile
<cit.>, which is defined by a Doppler parameter (b) and a column density (N_HI).
We assume a Doppler parameter of 30 km s^-1 <cit.>, no velocity shift, and complete
covering of the continuum source. These steps define the Voigt profile, which we convolve with the instrumental
resolution, and then subtract from the normalized continuum to uncover the profile of the CLASSY target.
Because DLA systems are optically thick, the bottom of the Voigt profile is completely dark. However, we found
significant residual intensity in the bottom of the damping troughs. Partial covering of the continuum source
therefore turns out to be critical for fitting damping profiles. This partial covering was sometimes subtle,
as in the top left panel (J1129+2034) of Fig. <ref>. In contrast, the top right panel (J1418+2102) of
Fig. <ref> shows strong H i damping wings and prominent residual intensity in the trough. Here we
adopt a modified Voigt profile which allows a velocity offset, v, and a velocity-independent covering fraction, f_C.
We convolve each Voigt profile with a Gaussian line spread function whose width is determined by the spectral
resolution <cit.>. Our fitting code then multiplies the normalized continuum by the optical depth
of each Voigt profile. The error is measured using a Monte Carlo (MC) approach; we add random noise to the observed
spectra and refit it 1000 times. Leaving all the parameters free provided statistically good fits; however,
we noticed degeneracies between the fitted velocity v of the DLA and the wings of the damping profile, and
also the overlaps between the wings of the DLA system and emission.
We broke these degeneracies by using
the O i λ1302.2 Å absorption line to constrain the parameters of the Voigt profile, an approach
Section <ref> justifies below. The profile shape is not sensitive to b, and we fixed b to be 30 km s^-1.
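A minimal sketch of the modified Voigt profile described above — a damped Lyα absorber with a velocity offset and a velocity-independent covering fraction, convolved with a Gaussian line spread function — is given below. The constants are standard Lyα atomic data; the function name, argument list, and pixel-based LSF width are illustrative rather than the exact implementation used here.

```python
import numpy as np
from scipy.special import wofz
from scipy.ndimage import gaussian_filter1d

LYA_WAVE, LYA_F, LYA_GAMMA = 1215.67, 0.4164, 6.265e8   # Angstrom, f-value, s^-1
C_KMS = 2.998e5

def dla_transmission(wave, log_nhi, b_kms=30.0, v_kms=0.0, f_c=1.0, z=0.0,
                     lsf_sigma_pix=1.0):
    """Partial-covering damped Lya transmission convolved with a Gaussian LSF:
    T = 1 - f_c + f_c * exp(-tau), so a fraction (1 - f_c) of the continuum
    leaks around the high-N_HI gas.  The LSF width is given in pixels here."""
    lam_c = LYA_WAVE * (1.0 + z) * (1.0 + v_kms / C_KMS)        # shifted line centre
    x = (wave - lam_c) / lam_c * C_KMS / b_kms                  # offset in Doppler units
    a = LYA_GAMMA * LYA_WAVE * 1e-8 / (4.0 * np.pi * b_kms * 1e5)
    voigt_h = wofz(x + 1j * a).real                             # Voigt-Hjerting function
    tau = 1.497e-15 * 10.0**log_nhi * LYA_F * LYA_WAVE / b_kms * voigt_h
    transmission = 1.0 - f_c + f_c * np.exp(-tau)
    return gaussian_filter1d(transmission, lsf_sigma_pix)       # instrumental broadening
```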
The second row of Fig. <ref> presents the continuum-normalized spectra, our model for the MW absorption,
and the fitted damped absorption. Table <ref> summarizes the best-fit Voigt parameters for the DLAs.
We extract the Lyα emission lines by subtracting the stellar continuum and the DLA profile.
Previous works have visually selected a local continuum close to the Lyα emission. A comparison of common targets
shows that the resulting Lyα flux can be sensitive to the wavelength range used to define the local continuum.
For example, the beginning of the wavelength range of J0938+5428 used in <cit.> is the blue
peak of J0938+5428 in Fig. <ref>. For this same target, <cit.> determine the wavelength range from the
intersection of the emission line with the DLA profile. This method recovers the blue peak; however,
it underestimates the flux because the bottom of the DLA system is poorly estimated.
Among the 45 CLASSY galaxies, 24 show significant double-peaked Lyα emission lines, and 10 show single-peaked Lyα emission.
Fig. <ref> presents the Lyα emission-line spectra of these 34 CLASSY galaxies. The remaining 11 galaxies show pure
DLA systems, and are therefore not included in Fig. <ref>.
§.§.§ Constraining the DLA Properties with O i Absorption
We use the narrow O i absorption lines to constrain the velocity of the DLA. Since the ionization potentials of O and H
are very similar, we expect the O i to trace H i gas in the DLA absorber. Fig. <ref> validates this expectation;
the DLA systems in CLASSY always associate with strong O i absorption. The only O i absorber without a DLA is J1112+5503, which shows a P-Cygni profile still suggesting substantial H i gas.
For optically thick O i absorption, the residual flux at the bottom of the fitted Gaussian profile
determines the covering fraction of O i gas.
The O i optical depth can be measured following
τ = 0.318( N_OI/10^14 cm^-2) (30 km s^-1/b),
where N_OI is the O i column density and b the Doppler parameter <cit.>.
Since the H i column densities of DLAs in the CLASSY sample are > 10^20 cm^-2, and the metallicities
12+log (O/H) are >7.5, we find that the O i optical depths are >10, and the line is saturated.
We acknowledge that this argument relies on the assumption that O i is uniformly distributed in the neutral gas.
If the intervening O i clouds have different velocities, the covering fraction derived from O i
absorption would place a lower limit on the covering fraction of neutral gas <cit.>.
Fig. <ref> shows that the O i covering fraction is approximately equal to the covering fraction of the
DLA system. Table <ref> collects the best-fit velocities and covering fractions for the DLA and O i absorption.
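For reference, the O i optical-depth relation above and the covering-fraction estimate from a saturated O i trough can be written compactly as follows. This is a sketch with illustrative names; in the saturated limit it reduces to f_C ≈ 1 − I_trough.

```python
import numpy as np

def oi_optical_depth(n_oi, b_kms=30.0):
    """O I 1302 optical depth: tau = 0.318 (N_OI / 1e14 cm^-2) (30 km/s / b)."""
    return 0.318 * (n_oi / 1e14) * (30.0 / b_kms)

def covering_fraction_from_trough(residual_intensity, tau):
    """For a partially covered absorber the normalized trough intensity is
    I = 1 - f_C (1 - exp(-tau)); in the saturated limit f_C ~ 1 - I."""
    return (1.0 - residual_intensity) / (1.0 - np.exp(-tau))
```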
§.§.§ Notes on individual galaxies
* The bottom of the DLA system is hidden under the Lyα emission in
J0938+5428, J1024+0524, J1416+1223, and J1521+0759 in Fig. <ref>,
so the residual flux in the damping trough is not directly constrained.
Since the blue wing of the damping profile is contaminated by
metal absorption lines, the shape of the damping profile is poorly constrained.
Therefore, for these four galaxies, we adopt the O i covering
fractions to be their DLA covering fractions.
* The covering fractions of four galaxies (J0337-0502,
J0405-3648, J1132+1411, J1448-0110) are fixed to be a constant measured visually but also in agreement with
their O i covering fractions. The Voigt profile fit for these four galaxies underestimates the covering fraction because
the CLASSY error spectra do not account for the small counts at the trough bottom which produce an
asymmetric error <cit.>.
* The DLA systems of three galaxies (J0127-0619, J1044+0353, J1359+5726) were not fitted well by a single Voigt profile, and
we noticed that their O i absorption lines show a second component. Thus, we adopted two Voigt profiles and matched
their velocities and covering factors to those of the O i components.
* In J1105+4444, the peak separation is exceptionally broad, ∼1000 km s^-1.
We suggest that the peaks are likely emitted by different regions within the COS aperture.
To test this conjecture, we inspected HST/COS NUV acquisition image <cit.>.
We found that J1105+4444 is not only an elongated object with multiple clumps, but the major axis
of these clumps is along the dispersion direction of the COS observation. Thus, their spatial offset
in the aperture may cause an apparent velocity shift which is not physical.
This object is excluded in the following analysis.
For completeness, we note that the DLA fit for J1105+4444 failed when constrained by two O i components,
and we used a double-Voigt profile with free velocities.
* The blue part of the J1525+0757 line is likely a P-Cygni profile, so the impact of the geocoronal O i
emission should be negligible.
* We also exclude J1448-0110 from the emission analysis
due to the low S/N.
§.§ DLA system and Aperture Loss
The fraction of emitted Lyα photons captured by the 2.5 arcsec diameter COS aperture will vary dramatically among the targets because of their large range of distances. For a typical target, the physical diameter of the aperture is roughly 700 pc, which is larger than the half-light radius of the UV continuum emission core but smaller than the Strömgren radius of the nebula.[To estimate the volume ionized by the stars within the COS aperture we have used the extinction-corrected Hα luminosity in the SDSS fiber and assumed a
volume-averaged electron density of 1 cm^-3 and case B recombination at 10^4 K.]
The most distant CLASSY targets are LBAs at z ≈ 0.18. Here, the COS aperture subtends
nearly 8 kpc, vignetting extended halo emission but likely capturing most of the luminosity.
CLASSY also includes several very nearby galaxies, where the COS aperture subtends just a few hundred parsecs,
and damped absorption troughs are prominent in their spectra (see Fig. <ref> and <ref>).
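The physical scale subtended by the COS aperture can be estimated with the cosmology adopted in this paper; the short sketch below (using astropy, with illustrative numbers) reproduces the range quoted above, from a few hundred pc for the nearest targets to several kpc for the LBAs. For the lowest-redshift galaxies the flow-corrected distances in Table <ref> should be used instead of the pure Hubble-flow values.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this paper

def aperture_diameter_pc(z, aperture_arcsec=2.5):
    """Proper diameter subtended by the COS aperture at redshift z."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.pc / u.arcsec)
    return (aperture_arcsec * u.arcsec * scale).value

# roughly 500 pc at z ~ 0.01 and roughly 7.5 kpc at z ~ 0.18 (Hubble-flow values)
```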
We suggest that the DLA detections indicate the emission is scattered outside the COS aperture.
In support of this claim, Fig. <ref> shows that 14 of 16 galaxies with UV half-light radii larger
than the COS aperture <cit.> have a DLA in their COS spectrum[The sizes of compact galaxies with
r_50 < 0.4 arcsec are measured using COS acquisition images and the sizes of extended galaxies are measured
using SDSS u-band images.]. The frequency of DLA detections is reduced among the galaxies with half-light radii
smaller than the COS aperture. The Lyman Break Analog sample has the fewest DLA detections.
Although the physical size of the aperture grows with increasing redshift,
we do not find a one-to-one correlation between DLA detections and redshift.
For z>0.1 (yellow circles), the physical scale of COS aperture reaches ∼5 – 10 kpc and is larger than
the UV sizes of those galaxies; however, a large fraction (4/9) of their spectra still show significant DLA system.
The spectra of high-redshift galaxies observed with similar aperture size sometimes show DLA systems
as well <cit.>. For example, <cit.> reveal a similar fraction
(40/92) of galaxies at redshift ∼2.2 – 3.2 whose spectra show DLA systems.
We can gain some quantitative insight from the imaging studies of <cit.>.
LARS 9 and LARS 14 correspond to the CLASSY galaxies J0823+2806 at z=0.04722 and J0926+4427 at z=0.18067,
respectively. The closer
galaxy shows a pure-DLA profile, whereas the more distant one has a double-peaked Lyα emission profile with no DLA.
Inspection of Figure 1 in <cit.> shows the Lyα emission comes from a shell around J0823+2806, whereas the
Lyα emission from J0926+4427 is centrally concentrated. In the latter example, the COS aperture includes roughly
60% of the total Lyα flux <cit.>, showing that the Lyα luminosity is significantly attenuated
even in the case of no absorption. For the DLA, the growth curve shows net Lyα emission only when the aperture is enlarged
to a diameter of 9.5 kpc, about four times larger than the COS aperture. Galaxy-by-galaxy aperture corrections for Lyα
are not currently available, but these examples support our conjecture that the galaxies showing damped Lyα
absorption would show net Lyα emission in spectra obtained through larger apertures.
§.§ Lyα Measurements
In this section, we present the measurements of the Lyα emission properties. We measure the continuum- and
DLA-subtracted Lyα profiles. We will demonstrate in Sec. <ref> that the Lyα emission emerges from
holes between the DLA clouds. Since these parts of the line profile have different origins, they must be
separated to obtain a meaningful analysis.
§.§.§ Kinematics
We measure the blue peak velocity, v^Lyα_blue, and red peak velocity,
v^Lyα_red, as the position of the local maximum in the emission line at
velocity v <0 and v > 0, respectively, relative to the systemic velocity. The minimum between the two
emission peaks defines the trough velocity, v^Lyα_trough. We define the
peak separation as Δ v_Lyα = v^Lyα_red - v^Lyα_blue.
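A simple implementation of these kinematic measurements, assuming a double-peaked, continuum- and DLA-subtracted profile sampled on a velocity grid relative to systemic, could look like the following (names are illustrative):

```python
import numpy as np

def lya_peak_kinematics(velocity, flux):
    """Blue/red peak velocities, trough velocity, and peak separation from a
    double-peaked Lya profile; `velocity` is relative to systemic (km/s)."""
    blue, red = velocity < 0.0, velocity > 0.0
    v_blue = velocity[blue][np.argmax(flux[blue])]
    v_red = velocity[red][np.argmax(flux[red])]
    between = (velocity >= v_blue) & (velocity <= v_red)
    v_trough = velocity[between][np.argmin(flux[between])]
    return v_blue, v_red, v_trough, v_red - v_blue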
§.§.§ Fluxes and Escape Fraction
For double-peaked profiles, we measure the fluxes of the blue and red components
by integrating to the velocity of the trough between the components.
We also measure the asymmetry parameter of the red peak of the Lyα emission, defined as A_f = (∫^∞_λ^red_peak f_λ d λ)/(∫^λ^red_peak_λ_trough f_λ d λ), where λ^red_peak is the wavelength of the red peak and λ_trough the wavelength of the trough <cit.>.
The total fluxes are measured by integrating flux between the wavelengths where the profile meets zero flux,
including the central dip in double peaked profiles and the negative flux in P Cygni profiles.
We convert the total fluxes to luminosity using luminosity distance from Table
<ref>, which is corrected for the cosmic flow.
The rest-frame equivalent widths, EWs, are computed using the spectra and the total stellar continuum,
EW(Lyα) = ∫ F_Lyα(λ)/F_cont(λ) d λ / (1+z).
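The flux-based quantities defined above (the red-peak asymmetry A_f, the rest-frame equivalent width, and the total flux) reduce to simple numerical integrals; a sketch with trapezoidal integration and illustrative argument names is:

```python
import numpy as np

def _integrate(y, x):
    """Trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lya_flux_measurements(wave, f_lya, f_cont, wave_trough, wave_red_peak, z):
    """Red-peak asymmetry A_f, rest-frame EW(Lya), and total flux.
    `f_lya` is the continuum- and DLA-subtracted line, restricted to the
    wavelength range where the profile is non-zero; `f_cont` is the total
    stellar continuum on the same grid."""
    red_wing = wave >= wave_red_peak
    red_core = (wave >= wave_trough) & (wave <= wave_red_peak)
    a_f = _integrate(f_lya[red_wing], wave[red_wing]) / _integrate(f_lya[red_core], wave[red_core])
    ew_rest = _integrate(f_lya / f_cont, wave) / (1.0 + z)
    total_flux = _integrate(f_lya, wave)
    return a_f, ew_rest, total_flux
```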
We estimate the escape fractions f^Lyα_esc based on intrinsic fluxes inferred
through dust-corrected Hα (or Hβ) fluxes assuming a Case-B recombination <cit.>:
f^Lyα_esc = F_Lyα/(8.7 × F_Hα)[We adopt the factor of
8.7 to be consistent with previous works. It corresponds to a temperature of 10,000 K and an electron density of
∼300 cm^-3. ]. <cit.> have measured the Hα and Hβ fluxes using optical spectra from
SDSS, MUSE, KCWI, MMT, and VIMOS. Since the UV spectra and optical spectra are obtained via different instruments with
different aperture sizes, a scaling factor between UV spectra and optical spectra is needed to correct the different
aperture losses. <cit.> measured the scaling factor by matching the optical spectra to the extrapolation
of the best-fit UV stellar continuum model (see their Appendix A). The scaling factors for most objects approximate the
ratio between apertures of different instruments but are not exactly the same because some other effects may also cause
the flux offsets such as the vignetting. For example, the median of the scaling factor for SDSS spectra is ∼0.79
and the aperture size ratio is (25)^2/(3)^2∼ 0.69. We refer readers to <cit.> for more
details. In this work, we adopt the corrected Hα fluxes. Since the Hα for J0934+5514 and J1253-0312 are
unavailable, we convert their Hβ fluxes to Hα fluxes using a factor of 2.86, by assuming Case-B
recombination with a temperature of 10,000 K and electron density of 100 cm^-3.
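The escape fraction defined above is then a one-line calculation; when only Hβ is available we convert to Hα with the Case-B factor of 2.86 used in the text. A sketch with illustrative names:

```python
def lya_escape_fraction(f_lya, f_halpha=None, f_hbeta=None):
    """f_esc(Lya) = F(Lya) / (8.7 F(Halpha)); if only Hbeta is available,
    convert with the Case-B ratio Halpha/Hbeta = 2.86.  Both fluxes are
    assumed to be aperture-matched, with Halpha corrected for dust."""
    if f_halpha is None:
        f_halpha = 2.86 * f_hbeta
    return f_lya / (8.7 * f_halpha)
```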
§.§.§ trough flux density
The flux density at the trough velocity defines the trough flux density, f_trough. The f_trough of J0926+4427 and J1429+0643 have also been measured in <cit.> based on the spectra obtained by
HST/COS G140L with a resolution of 1,500. <cit.> measured the trough flux density based on the
continuum-unsubtracted spectra but our measurements are based on the continuum-subtracted spectrum. Thus, our
F_trough/F_cont should be lower by 1 than those in <cit.>.
Here F_cont is the flux density of total stellar continuum estimated from STARBURST99.
However, accounting for this difference, the F_trough/F_cont of J0926+4427 and J1429+0643 in our measurements are still lower.
Particularly for J0926+4427, we do not see the net residual trough flux density (i.e., F_trough<0).
This is because the CLASSY spectra have much higher resolution, ranging from ∼ 2,200 to 15,000 with a median of
5,000 <cit.>. The high-resolution spectra resolve the small structures at
the central trough, which were smoothed due to the lower resolution in <cit.>. We note that the resolution
around emission line might be lower than the resolution for the continuum as emission often subtends a
larger solid angle than the continuum.
§.§.§ Aperture Effects on Lyα Measurements
The COS aperture, therefore, attenuates the Lyα emission relatively more than the UV continuum emission due
to the scattering of Lyα photons. Thus, even though Lyα, Hα, and the UV continuum are measured
locally in the same aperture, we expect f^Lyα_esc and the Lyα EW to be underestimated. In our example of
J0823+2806 (see the discussion in Sec. 2.2), the Lyα attenuation is severe because most of the Lyα emission
is scattered outside the COS aperture. If scattering outside the COS aperture produces the large fraction
of DLA systems in CLASSY, then the f^Lyα_esc and EW of these galaxies are significantly underestimated.
A more subtle bias that we will examine is the possibility that this vignetting modifies the shape of
the emission-line profile. <cit.> predicted that the blue-to-red peak ratio (hereafter B/R ratio)
would increase with increasing impact parameters because the front-scattered photons (blue peak) are closer
to the resonance center of the outflowing gas and thus, tend to be scattered to larger impact parameters, compared
with the back-scattered photons (red peak). Integral field spectroscopy confirms this trend in a few halos <cit.>. Another possible interpretation is that the average projected outflow velocity
decreases with the increasing radius <cit.>.
§ RADIATIVE TRANSFER MODELING
The high fraction of DLA systems in CLASSY was not anticipated. More surprising,
however, was the discovery of double-peaked emission in the bottom
of the broad absorption profiles. We have drawn attention to an important property of
these DLA systems; the high-column density gas only partially covers the continuum
source (see Sec. <ref>). The residual intensity in the continuum-normalized spectra indicates the
uncovered fraction of the continuum emission (within the COS aperture).
In this section, we explore which continuum is linked to the net Lyα emission profile: the total continuum or the uncovered fraction.
Specifically, we utilize the shell model to fit the Lyα profiles normalized by either the STARBURST99 continuum or the DLA continuum (hereafter the normalized profile[To avoid confusion, we define the emergent profile as the observed Lyα emission line with the underlying continuum and DLA, the net profile as the profile after removing the underlying continuum and DLA, and the normalized profile as the net profile after being normalized by the underlying continuum.
]).
The model Lyα line profile is computed using the Monte Carlo radiative
transfer code tlac <cit.>.
This technique has been used to successfully reproduce the observed profiles of
Lyα emission lines <cit.>.
The shell model can produce Lyα emission when the dust optical depth
is low, or a DLA system when there is a substantial neutral hydrogen column
with a moderate dust optical depth <cit.>.
However, the homogeneous shell model cannot produce a Lyα emission line in the DLA
trough, i.e., the Abs+Em profiles seen in our CLASSY sample (see Fig. <ref>).
The Lyα emission line requires low-N_HI channels (with low
dust optical depth), which contradicts the presence of damped absorption
which requires very high column density. This requires a non-uniform shell
model to describe the multi-component ISM. Although radiative transfer
through clumpy media has been explored <cit.>,
a non-uniform shell is beyond the scope of this work.
Here, we adopt an alternative method to fit the Lyα profiles of the CLASSY sample.
We fit and remove the DLA system to extract the net Lyα profile, as described in Sec. <ref>,
and then we fit the model to the normalized profile using two different approaches described in Sec. <ref>.
Comparing the two fitting results can reveal the physical links between the gas probed by Lyα emission and by DLA absorption, as discussed in Sec. <ref>.
Some properties of the shell model have been mapped to those of more realistic
outflows <cit.>. However, the shell-model parameters are found to
have systematic discrepancies with independently measured outflow
velocities and the velocity dispersion of the intrinsic line profile <cit.>.
To understand the origin of the discrepancies, we perform additional fits with constrained redshift priors and compare them with the previous results in Sec. <ref>.
§.§ Shell Model
tlac computes resonant scattering through a uniform, expanding shell,
which is composed of dust and neutral hydrogen gas.
The shell model used in tlac has six free parameters, including two parameters
for the central radiation source: intrinsic line width σ_i and intrinsic
equivalent width EW_i, and 4 parameters for the expanding shell: neutral
hydrogen column density N_Hi, dust optical depth τ_d,
shell velocity v_exp, and temperature T.
In addition to these six parameters, a redshift parameter z_tlac is
also applied to shift the rest-frame of the profile relative to the systemic redshift of the galaxy.
The photons and underlying continuum photons are generated from the
central source with an intrinsic width of σ_i and intrinsic equivalent
width of EW_i. The photon is then emitted into the H i shell with
a random direction and travels a distance before being absorbed or resonantly scattered.
The distance that a photon can travel is calculated using the total optical
depth of dust τ_d and neutral hydrogen N_Hi
in the expanding shell with velocity v_exp and temperature T.
The probability that a photon is resonantly scattered or absorbed at a
specific position is estimated by comparing the optical depth of neutral
hydrogen with the total optical depth at that position. If the photon is
resonantly scattered, a new direction and a new frequency are drawn from
the proper phase function and the frequency redistribution function, respectively.
The previous steps are repeated until the photon escapes from the simulation
domain or is absorbed by the dust. If the photons escape from the simulation
domain, their frequency, and other properties are recorded. This simulation
has been run thousands of times over a discrete grid of (v_exp,
N_Hi, T) and then been post-processed with a continuous
grid of (σ_i, τ_d, EW_i) to generate the simulated
spectra for different parameter values.
To fit the observed spectrum, a likelihood function is constructed
based on the noise and flux spectra. The best-fit spectrum is derived by
maximizing the likelihood function using the Markov Chain Monte Carlo (MCMC)
and nonlinear optimization methods.
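Schematically, the fit amounts to maximizing a Gaussian likelihood over the post-processed model grid. The sketch below uses emcee as a stand-in sampler and a placeholder model_grid function representing the tlac grid interpolation, so it illustrates the procedure rather than the actual fitting code.

```python
import numpy as np
import emcee

def log_likelihood(theta, wave, flux, err, model_grid):
    """Gaussian likelihood comparing the normalized Lya profile with a model
    spectrum; `model_grid(theta, wave)` stands in for interpolation over the
    pre-computed shell-model grid."""
    model = model_grid(theta, wave)
    return -0.5 * np.sum(((flux - model) / err) ** 2 + np.log(2.0 * np.pi * err ** 2))

def log_posterior(theta, wave, flux, err, model_grid, log_prior):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, wave, flux, err, model_grid)

# sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_posterior,
#                                 args=(wave, flux, err, model_grid, log_prior))
```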
We highlight the importance of the intrinsic equivalent width (EW_i) in the shell model,
a parameter excluded by studies that fit the continuum-subtracted line
profiles, because the continuum photons are also involved in resonant scattering and
can dominate the normalized profile for low-EW_i cases.
§.§ Profile fitting
Our profile-fitting approach draws attention to ambiguity about the appropriate continuum level for normalization.
When a DLA system is present in the spectrum, the underlying continuum
could be the total stellar continuum (red lines in panel a of Fig. <ref>),
and thus, the normalized spectrum is:
I^EW_λ = (f^Lyα_λ + f^cont_λ) / (f^cont_λ),
or the residual stellar continuum in the bottom of DLA system (red line in panel b) and thus, the normalized spectrum is:
I^EW_λ = (f^Lyα_λ + f^cont_λ× (1-f_C)) / (f^cont_λ× (1-f_C)),
where f^Lyα_λ is the emission line, f^cont_λ
the best-fit total stellar continuum (see Sec. <ref>), f_C the covering fraction of DLA system.
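In code, the two normalizations differ only in the continuum placed in the numerator and denominator. A sketch, where f_lya, f_cont, and f_c follow the notation of the two equations above:

```python
def normalized_profiles(f_lya, f_cont, f_c):
    """The two candidate EW normalizations: the total stellar continuum and
    the residual continuum transmitted through the DLA."""
    i_total = (f_lya + f_cont) / f_cont
    i_residual = (f_lya + f_cont * (1.0 - f_c)) / (f_cont * (1.0 - f_c))
    return i_total, i_residual
```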
The choice of the underlying continuum will change the equivalent width
of the line and, thus, the contribution of continuum photons on the emergent profile.
Here we perform profile fittings, assuming each continuum level in turn, and then discuss the results.
We present the best-fit spectra in Append. <ref>.
We also present the best-fit model parameters of the second profile fitting in Table <ref>.
The fitted parameters are somewhat degenerate with each other.
For example, in the case of outflowing shells, <cit.> demonstrate
that various combinations of shell velocity, column density, temperature,
and redshift can produce very similar line profiles, for example, (v_exp,
log N_HI, log T, 0), and ∼ (2v_exp, log
N_HI-0.5 dex, log T+1 dex, Δ v). Nonetheless, the spectra
generated by these parameters show clear differences at the red peak and
our high-S/N spectra should be able to distinguish between the degeneracies.
§.§.§ First attempt: total stellar continuum
Overall, the quality of the first fitting using the total stellar continuum (Eq. <ref>) is quite good, given the simplicity of the model.
However, in a subset of spectra, the results are unsatisfactory,
especially J0938+5428, J0944+3442, J1044+0353, J1119+5130, J1144+4012, J1416+1223, and J1521+0759, as presented in Fig. <ref>, whose best-fit spectra show a very sharp dip around zero velocity compared to the observed profile.
Looking at their original spectra (see Fig. <ref>), we find that all these poorly fitted profiles correspond to spectra that show significant DLA systems, in contrast to the successfully fitted sample.
This result motivated us to investigate whether the sharp dips might be caused by an inappropriate underlying continuum, which underestimates the EW spectrum I^EW_λ.
Thus, we performed a second profile fitting using the residual stellar continuum as described by Eq. <ref>.
§.§.§ Second attempt: residual stellar continuum
In Fig. <ref>, we present the best-fit models for profiles normalized by the residual stellar continua (1-f_C).
For the unsatisfactory sample in the first attempt, normalizing the spectra by the residual stellar continuum significantly improved the best-fit results. The sharp dips seen in the models of Sec. <ref> no longer exist in the new model spectra.
In Fig. <ref>, we compare the reduced χ^2 for the first and second attempts. Clearly, most results are significantly improved if adopting the normalization of the residual stellar continuum.
Thus, we can conclude that the dip was caused by an inappropriate continuum level.
In further analysis, the first attempt of fitting will not be considered.
For the galaxies without DLA systems, f_C = 0, the residual continuum rises to the level of the total continuum. It is therefore not surprising that every profile is successfully fitted when the residual continuum is used.
We conclude that the residual continuum, 1 - f_C, is the more physical normalization for the emergent emission line.
In other words, the DLA covering fraction f_C gives a good indication of the fraction of the intrinsic emission that is blocked by the high-column density clouds.
§.§.§ Implication: Scattering Outside COS Aperture Reveals Low-N_HI Channels
We have shown that successful radiative transfer modeling of CLASSY
spectra, in the context of the shell model, requires: (1) separating the
emission profile from the DLA system, and (2) normalizing the
emission by the leaked continuum, i.e. the residual flux in the DLA system.
This approach divides the COS aperture into two groups of sightlines, hereafter
channels, distinguished by their column density. In the schematic picture of a
thin shell, these two channels represent clouds and the intercloud medium. More
generally, for the targets with C_f > 0, the photons entering the
high-N_HI channel do not emerge from the galaxy at radii within the
COS aperture. If they did, then the best continuum normalization would be the
total stellar continuum, which is inconsistent with our fitting results.
The photons entering the high-N_HI channel must be scattered
to radii larger than the COS aperture before they escape. The alternative
is that they are absorbed by dust grains which seems less likely for two reasons.
Most CLASSY galaxies whose COS spectra detect DLA systems have low metallicities
and are relatively dust poor. In addition, substantial amounts of dust in the
scattering clouds would boost the transmitted equivalent width <cit.>,
but we do not measure unusually large EW.
When the emission is separated from the DLA system, what do the shell model parameters
fitted to the emission component represent? Perhaps the line photons entering low-N_HI channels
scatter off both the low N_HI clouds and the walls of the DLA channels. In the limit
of no intercloud medium, the kinematics of the dense clouds would determine the shape of the
line profile <cit.>, so we might expect the kinematics of both the
low- and high-N_HI channels to impact the profile. If photons entering
DLA sightlines are scattered outside the spectroscopic aperture, then vignetted apertures may
have one advantage, namely providing a direct view of the properties in low-N_HI
channels.
§.§ Discrepancies between the Shell Model and Observations
We have shown that whether the shell model can fit the observed Lyα profile well is critical for inferring the ISM properties.
However, the three discrepancies reported in <cit.> (see also Sec. <ref>) might suggest a limited physical meaning of the model parameters.
These discrepancies are also observed in the CLASSY sample with high significance, as shown in Fig. <ref> (black circles).
The best-fit redshifts are always larger, by 0 – 200 km s^-1, than the spectroscopic redshift, consistent with <cit.>.
One possible origin of the discrepancies is the degeneracies between the model parameters, suggested in <cit.>.
To test this scenario and gain more insight into the discrepancies, we perform a third profile fitting following <cit.> which constrains the range of redshift parameters to break the degeneracies.
§.§.§ Third attempt: constraining the redshift
The CLASSY redshifts derived from UV nebular lines agree well with those derived from optical lines;
the standard deviation of velocity difference is ∼22 km s^-1 <cit.>. A spatial offset between the scattered
Lyα emission and the optically thin emission lines would introduce an additional redshift error if, and only if, the offset were along
the dispersion axis of the spectrograph. Based on the radius of the unvignetted aperture (0.4 arcsec), imperfect alignment
could shift the wavelength scale by as much as ± 44 km s^-1. For a redshift-constrained fit, we adopted a narrow Tophat
probability distribution of width ± 44 km s^-1 as the prior on redshift.
The best-fit spectra are presented in Fig. <ref>.
In contrast, in the second attempt at profile fitting (Sec. <ref>),
we adopted a Gaussian prior on redshift, and this broad distribution with σ(z_tlac) = 120 km s^-1 serves as the unconstrained fit.
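The two redshift priors can be written explicitly as log-priors on the velocity offset between z_tlac and the systemic redshift. A minimal sketch, with the numerical values taken from the text:

```python
import numpy as np

DV_TOPHAT = 44.0        # km/s: half-width allowed by target-acquisition offsets
SIGMA_GAUSSIAN = 120.0  # km/s: broad prior used in the unconstrained (second) fit

def log_prior_velocity_offset(dv_kms, constrained=True):
    """Prior on the velocity offset between z_tlac and the systemic redshift."""
    if constrained:                                   # third attempt: Tophat prior
        return 0.0 if abs(dv_kms) <= DV_TOPHAT else -np.inf
    return -0.5 * (dv_kms / SIGMA_GAUSSIAN) ** 2      # second attempt: Gaussian prior
```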
§.§.§ Can constrained fitting alleviate the discrepancies?
The redshift differences are apparently improved when adopting the constrained fitting (red circles in Fig. <ref>).
This is because the constrained redshift prior sets a hard limit on the difference of 44 km s^-1.
On the other hand, comparing the best-fit profiles of constrained fitting (see Fig. <ref>) with those of unconstrained fitting (see Fig. <ref>), it is hard to distinguish the difference between them by visual inspection.
We compare the reduced χ^2 of two profile fittings in Fig. <ref>, which shows that the results of constrained profile fitting are slightly worse than those of unconstrained profile fitting, but still acceptable[
We notice two best-fit spectra (J1112+5503, J1323-0132) of constrained profile fitting are improved compared with the unconstrained profile fitting. This might be because the unconstrained profile fitting for these two objects is trapped in a local maximum of the likelihood.].
Thus, our test confirms that adopting a constrained redshift prior for profile fitting can somewhat alleviate the redshift discrepancy
observed in previous works <cit.>.
We present the best-fit parameters of the third profile fitting in Table <ref>.
However, the best-fit redshift remains systematically larger than the spectroscopic redshift as most of the red circles are still below zero velocity. This indicates that the constrained fitting does not fully resolve the observed discrepancies.
We return to this topic in Sec. <ref>, where we compare the shell velocity with spectral measurements of the outflow velocity.
We do not discuss the line width discrepancy in this work because a clumpy model is needed to resolve this discrepancy.
By comparing the profiles generated by the uniform shell model and a clumpy model, <cit.> and <cit.> find that a larger line width is always required for the shell model to produce a similar profile as the clumpy model. The intrinsic difference between the two models is that the clumpy model includes the turbulent velocity dispersion of the clumps while the shell model does not.
Thus, the line width of the shell model needs to be artificially broadened to compensate for the omission of turbulent motion in the shell model.
§ PROPERTIES OF THE NEUTRAL ISM
In this section, we discuss the relationship between the H i column
densities inferred from the absorption and emission components of
the Lyα line profile. We then discuss indirect evidence for LyC leakage. Finally, we
return to the problem of why the shell model systematically mispresents outflow
properties, finding that the problem lies in the spectroscopic aperture.
§.§ Structure of the ISM in CLASSY galaxies
In Sec. <ref>, we found evidence that the neutral ISM consists
of several components with different column densities. The DLA system requires
high-N_HI clouds with N_HI>10^20 cm^-2. In
Sec. <ref>, the fitting revealed that the observed
emission line requires low-N_HI holes with 10^18<N_HI<10^20 cm^-2.
Combining these two results demonstrates the existence of sightlines with different H
i column densities in individual galaxies. We have argued that
scattering of a significant fraction of the photons out of the COS aperture
makes the high-N_HI channels visible via absorption, whereas their
damping profiles would be filled in by scattered emission in spectra obtained through
larger apertures. Apparently, the halos of many CLASSY galaxies are much larger than
the COS aperture, and the scattering of photons out of the COS aperture
provides a unique opportunity to describe the structure of the neutral ISM, as we
show here.
New insight into how LyC radiation escapes from local analogs of EoR galaxies
may be obtained by comparing the structure of the ISM in hydrodynamical simulations
to the column density distribution we derive. Feedback from massive stars is widely
believed to shape the pathways for LyC escape, but the mechanism is debated.
For example, <cit.> argue that positive feedback, essentially propagating
star formation triggered by the mechanical feedback from massive stars, is essential
to shift the production of LyC radiation away from the densest region of a galaxy.
In contrast, in H ii regions too young to have produced supernova explosions,
turbulence driven by ionization fronts may open channels for LyC escape <cit.>.
One difference between these two mechanisms is the size of the channels. Whereas the channels
opened by turbulence are individually small, the low-N_HI bubbles driven by mechanical
feedback have scales reaching hundreds of pc <cit.>. Thus, the size of the channels provides
insight of particular interest for understanding the escape pathways.
In this section, we adopt the column density estimation from the third profile fitting (see Sec. <ref>), because it incorporates more constraints from the observation.
However, adopting the estimation from the second profile fitting does not change the conclusion of this section.
§.§.§ Column Density Distribution of Neutral ISM
Fig. <ref> compares the distribution of low-N_HI
channels returned by the shell model fits and the high-N_HI
column densities measured from the damping wing absorption.
In the top panel, the histograms are normalized by the total number of
galaxies showing a DLA system or emission line, respectively.
Their combined distribution has two peaks: one at N_HI≈ 10^19 cm^-2 which represents the path of the escaping
photons[If adopting the N_HI from the second profile fitting, the peak shifts to lower by 0.4 dex.]
, and a second peak representing the typical DLA system
at N_HI≈ 10^21 cm^-2. We recognize that the
DLA sightlines and the pathways of the scattered emission
select specific channels through a turbulent, multiphase ISM.
Nonetheless, their combined distribution may represent a large fraction
of all sightlines because we found that these components cover complementary
fractions of the UV continuum (see Sections <ref> and <ref>).
We weight the column densities by the covering fraction of each system
in the middle panel of Fig. <ref>. This normalization indicates
how many sightlines are covered by the low-N_HI or high-N_HI
paths. After accounting for the covering fraction, the peak of the distribution
of low-N_HI paths shifts to lower column density;
the lower column densities have higher weights, i.e., larger covering fraction
of low-N_HI paths. In other words, the galaxies with only a Lyα
emission line observed have lower column densities compared to those showing Lyα
emission in the bottom of a DLA system.
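The covering-fraction weighting used in the middle and bottom panels amounts to a weighted histogram; a minimal sketch, with illustrative bin edges and names:

```python
import numpy as np

def covering_weighted_histogram(log_nhi, f_cover, bins=None):
    """Histogram of log N_HI weighted by the covering fraction of each channel,
    so each sightline contributes in proportion to the continuum it covers."""
    if bins is None:
        bins = np.arange(16.0, 22.5, 0.5)   # illustrative bin edges
    hist, edges = np.histogram(np.asarray(log_nhi), bins=bins,
                               weights=np.asarray(f_cover))
    return hist / hist.sum(), edges
```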
In the bottom panel, we present the combined distribution of column densities
in galaxies with only Lyα emission, only a DLA system, or Lyα emission
in the bottom of a DLA system. Similar to the middle panel, the distributions
are weighted by the covering fraction. Clearly, the column densities increase
with the presence of DLA system, consistent with the middle panel. Overall,
however, the distribution remains bimodal, consistent with the argument that
the distribution includes a large fraction of all sightlines. At a qualitative level, the bimodal
distributions in Figure <ref> confirm a structural similarity between
the ISM in CLASSY galaxies and the ISM in hydrodynamical simulations focusing on
the star – gas interplay <cit.>. In detail, however, we
recognize several quantitative differences.
§.§.§ Column Density Distribution in Simulations
In the H ii region simulations of <cit.>, turbulence driven by
ionization fronts creates a bimodal distribution of column densities. In their Figure 6,
the higher column-density peak covers N_HI values similar to our Figure <ref>.
The simulated column density distribution actually reaches a minimum around 10^19 cm^-2,
however, right where Figure <ref> shows a maximum. The lower column-density peak
is offset to 10^17 cm^-2 in the simulated distribution. These simulations
zoom in on an individual H ii region, and it is possible that placing
the H ii region in a more realistic galactic environment would shift the distribution.
Comparing the histogram in Fig. <ref>
to those in Fig. 11 of <cit.>, we find the high-N_HI
gas spread over a similar range in column density. In those simulations,
the fraction of high-N_HI is sensitive to galaxy mass;
for their 10^7 – 10^8 M_⊙ sample, the fraction of sightlines
with high-N_HI to the total H i sightlines is about
one-third as large seen in Fig. <ref>. Since their histograms exclude
the gas within 0.2 R_vir of the starburst, it is possible that the addition
of the starburst region would eliminate, or at least mitigate, the discrepancy.
Another difference is the column density of the lower-density peak.
This peak is seen at log(N_HI/cm^-2) ≈ 18 – 20 in CLASSY, whereas <cit.> find
the low-N_HI channels spread, primarily, over the log(N_HI/cm^-2) ≈ 16 – 18 range.
This result may indicate that the feedback in <cit.> is too efficient and removes
too much neutral hydrogen.
Integral-field spectroscopy is clearly needed to address two observational biases.
The histograms in Figure <ref> combine measurements made on different physical
scales because the physical size of the aperture changes with galaxy distance. It is not
fully understood how the aperture affects the column density derived by shell-model fitting.
In addition, we emphasize that the lowest column density sightlines may be missing from
Figure <ref>. The shell model returns a column density that represents the
total column of clouds plus an intercloud medium <cit.>; it follows that the
lowest (and highest) column density sightlines may not be represented in Figure <ref>.
The low-N_HI channels may therefore include lower column-density
pathways, and we aim to understand whether CLASSY galaxies have sightlines optically thin
to LyC radiation.
§.§ Pathways for LyC Leakage
In this section, we will investigate the LyC-thin sightlines[Column densities lower than 10^18 cm^-2 correspond to LyC escape fractions
≳1% <cit.>.] with N_HI<10^18 cm^-2 in the CLASSY sample by analyzing the positive
residual trough fluxes and small peak separations.
§.§.§ Peak Separation, Trough Flux, and Red Asymmetry
Peak separation is a good, empirical tracer of LyC escape <cit.>, and
the shell model provides a theoretical basis for this relation <cit.>.
In galaxies where there are few holes through which LyC can escape (low LyC leakage), the scattered photons traverse
optically thick channels, leading to a broad peak separation. Whereas in galaxies with
high LyC leakage, the density-bounded channels result in a small peak separation.
However, the peak separation does not distinguish how the photons escape <cit.>, as many small holes in
a turbulent medium can produce a narrow peak separation just like a large, wind-blown cavity.
The asymmetry parameter A_f helps to quantify the multiphase nature of the turbulent H ii regions.
It was originally introduced by <cit.> to measure the attenuation imprinted by the intergalactic medium at high redshift. Here, we apply it in a different context recently introduced by <cit.>.
The two dominant types of escape (single flight or excursion) tend to produce a symmetric line.
Thus, when the medium is dominated either by ionization- or density-bounded channels as in the blue or gray region in Fig. <ref>, the asymmetry of the emergent line is low.
However, when the two channels coexist as in the red region in Fig. <ref>, the asymmetry is high.
In Fig. <ref>, we plot peak separation against red peak asymmetry.
We divide the diagram into three distinct regions: (gray) low LyC leakage,
(red) significant leakage through low-N_HI channels (ionization-bounded,
f^LyC_esc>10%), and (blue) significant leakage through large
holes (density-bounded, f^LyC_esc>10%). The boundaries
come from Figure 13 of <cit.>, which shows these regions in the
Δ v_Lyα – f_esc^LyC and A_f – f_esc^LyC planes.
We find that the strongest LyC leakers in CLASSY are the three galaxies – J0942+3547,
J1323-0312, J1545+0858 – in the blue region.
The Lyα profiles of these three galaxies also show residual fluxes at the
trough: F_trough/F_cont= 1.36±0.07, 19.62±0.42, 0.17±0.12, respectively.
Their net trough flux supports the conclusion that these galaxies have LyC-thin sightlines
<cit.>. Based solely on their profile properties then, these galaxies
are likely strong LyC leakers. When we compare their location in Fig. <ref> to directly
confirmed LyC leakers, we find that their peak separation is as small as the smallest values
measured among directly confirmed LyC leakers <cit.>.
Many CLASSY galaxies are located in the gray-shaded region of Fig. <ref>,
suggesting they have lower LyC escape fractions than the three galaxies in the
blue zone. Three known leakers from <cit.> also
lie in the gray zone of Fig. <ref>, just 100 km s^-1 above the blue – gray boundary.
Based on this comparison to the properties of the known leakers, we suggest that
the CLASSY sample contains more LyC leakers than the (blue) shaded region indicates.
The <cit.> simulations zoom in on individual H ii
regions, so perhaps the boundary might shift by 100 km s^-1 in more realistic
environments, i.e. those composed of multiple H ii regions.
To gain insight into the empirical boundary, we inspect the positions
of the other three CLASSY galaxies with net trough flux. Non-zero
trough flux in the emission-line profile requires a low-N_HI
column at the systemic velocity. We find three more galaxies with net trough
flux, and each has Δ v_Lyα < 400 km s^-1. The galaxies are
J0944-0038, J1253-0312, and J1418+2102; their trough fluxes are
F_trough/F_cont=0.47±0.29, 0.22±0.07, 1.46±0.17, respectively.
We acknowledge that trough fluxes are sensitive to the spectral resolution,
which is not precisely known for the emission. We therefore
compared the trough width to the width of the red peak which represents an
upper limit on the unresolved linewidth. Four galaxies (J0942+3547,
J1323-0312, J1545+0858, J1418+2102) show broader trough widths than
the peak widths, so these troughs are clearly resolved. For the other two
objects, J0944-0038 and J1253-0312, their trough widths are similar to
peak widths, so higher-resolution spectroscopy might reveal that we
overestimate their residual trough flux.
Consequently, we identified at least four CLASSY galaxies containing density-bounded channels.
We conclude that the empirical boundary between the blue and gray zones
lies closer to a peak separation of 400 km s^-1, roughly 100 km s^-1 larger
than the blue-gray boundary suggested by the simulations.
Based solely on the properties of line profiles, we conclude that
four to six of the CLASSY galaxies (highlighted by red squares in Fig. <ref>)
are strong LyC leakers. Their red peaks have a low asymmetry, A_f < 3, which
indicates they are best described as density-bounded galaxies. In contrast,
even though they span the same range of peak separations, half of the directly
confirmed leakers have A_f > 3, suggesting their leakage is through ionization-bounded channels
in a multiphase medium.
§.§.§ Combining Perspectives from Lyα and O32
In the previous section, we have shown that the trough flux,
peak separation, and red peak asymmetry converge at the same selection
of galaxies with density-bounded holes in their neutral ISM.
Here we examine the ionization structure of these galaxies, as measured by optical nebular emission lines, to reveal the underlying relation between LyC leaking channels and ionization.
We adopt [O III] λ5007/[O II] λ3727 (O32) ratio, one of the most important ionization diagnostics <cit.>, where a high O32 ratio can indicate a density-bounded
galaxy[Two of our three best candidates for density-bounded galaxies, J1323-0312 and
J1545+0858, have the largest O32 ratios among the CLASSY sample (37.8 and 8.6, respectively).
On the other hand, J0942+3547 has a lower O32 ratio of 2.6.] <cit.>.
f^Lyα_esc and O32
Intuitively, we expect a high escape fraction of Lyα photons from density-bounded galaxies.
Yet, in the top panel of Fig. <ref>, the O32 ratio shows no correlation
with f^Lyα_esc (Spearman coefficient ∼ 0.04), contradicting the correlation observed among high-redshift galaxies <cit.> and among local dwarf galaxies <cit.>.
We argue here that the lack of correlation in our sample might result from the scattering of photons outside the COS aperture,
an effect that we argued produces DLA systems in many CLASSY spectra (see Sec. <ref>).
The slits used to observe high-redshift galaxies in <cit.> typically subtend 5 to 10 kpc, much larger than the physical scale subtended by the COS aperture for the lowest redshift targets.
Although the Lyman alpha Spectral Database <cit.> includes some low-redshift galaxies, the CLASSY sample has a lower median redshift than LASD, so scattering outside the COS aperture plausibly introduces a more serious bias.
To test this explanation, we restrict the analysis to the subsample with UV radius < 0.4 arcsec, the radius of the unvignetted COS aperture, and find a positive correlation; among the yellow points in Fig. <ref>, the Spearman coefficient is 0.22.
However, the galaxy distance might not be the only factor influencing scattering outside the spectroscopic aperture.
The escape fraction of higher redshift galaxies may also be significantly affected.
In the top panel of Fig. <ref>, we overplot measurements for Green Pea galaxies at redshift 0.1 to 0.4 <cit.>. We
add LyC leakers from <cit.> with extreme O32 ratios (ranging from 22 – 39).
Although the joint sample has a similar redshift range as <cit.>, it also shows no correlation between f^Lyα_esc and the O32 ratio.
A subset of the joint targets with a large O32 ratio has modest f^Lyα_esc of ∼1%.
Thus, analyses using f^Lyα_esc to probe density-bounded channels should be aware of these exceptions, not only of the aperture loss.
Δ v_Lyα and O32
Consistent with previous studies <cit.>, the peak separation among CLASSY galaxies
is anti-correlated with the O32 ratio, as shown in the bottom panel of Fig. <ref>.
Excluding the galaxies with large UV radii (>0.4 arcsec) does not change the correlation strength, and thus
we conclude that the peak velocity measurements are only weakly affected by aperture loss.
The Lyα profiles of LyC leakers with extreme O32 ratios of 22 – 39 from
<cit.> show peak separations of ≈ 250 km s^-1, similar to J1545+0858 and J0942+3547 in the CLASSY sample, consistent
with a minimum around 250 km s^-1. The only data point in Fig. <ref>
with a lower peak separation is our new measurement for J1323-0312.
The joint sample shows that high-O32 galaxies always have narrow peak separations, while low-O32 galaxies span a large range of peak separations. We argue
that a high O32 ratio traces a large global covering fraction of LyC-thin sightlines,
whereas the narrow peak separations appear when the covering fraction of LyC-thin sightlines
in our direction is high. Variations in the direction of LyC-thin sightlines relative
to our viewing angle, therefore, produce the scatter observed in the peak separation vs. O32 ratio
diagram.
The correlation between peak separation and O32, and the lack of correlation between the Lyα escape fraction and O32, might hint that the different Lyα features probe photons escaping through different channels.
This speculation is in line with the simulations of <cit.>.
When the pathways for LyC escape have a low covering fraction, the majority of Lyα photons
still need to escape through low-N_HI (≈ 10^18 – 10^20 cm^-2)
channels, so the emission line emerges with a broad width and a large peak separation.
Meanwhile, a smaller fraction of photons will pass through the remaining columns which are optically thin to
the LyC (<10^18 cm^-2), and these sightlines contribute emission
with narrow lines and small peak separations <cit.>.
It follows that the transition from ionization-bounded leakers to
density-bounded leakers is accompanied by a change in the shape of the
profiles (namely, the relative strength of the narrow and broad lines).
As the covering fraction of LyC thin holes increases, more
of the emergent flux is contributed by the component with narrow peaks.
When the intensities of the two narrow peaks are larger than those of the broad peaks, these LyC-thin channels determine the measured peak separation, while the covering fraction of channels with N_HI > 10^18 cm^-2 remains significant and continues to produce broad peaks with a wider separation.
Thus, in the case of a significant covering fraction of LyC-thin holes,
the peak separation probes the H i column in the LyC-thin holes, and we expect O32 to increase as the peak separation decreases.
However, in the case of no or only small LyC leakage, the
Lyα photons that pass through columns with N_HI > 10^18 cm^-2 dominate the profile (peaks and wings).
§.§ Outflow Velocity of Neutral ISM
<cit.> described an adiabatic galactic wind that could reach speeds of roughly 1000 km s^-1. Theoretical models that explain the relation of this hot phase to the widely observed cool outflows have been a subject of study for an extended period of time <cit.>. Photoionization modeling of the LIS absorption lines in CLASSY spectra indicates the outflowing component traces gas in which hydrogen is mostly ionized <cit.>.
Yet, the combined neutral and molecular phases transport as much (or more) mass than does the warm-ionized phase in the outflow from M82 <cit.>.
Since Lyα probes the portion of the outflow where hydrogen is neutral, outflow detections using Lyα complement studies of the highly ionized outflow.
When Lyα photons scatter in outflowing gas, the resonance center will move blueward with respect to the rest-frame line center.
Consequently, the outflow velocity is imprinted on the Lyα profile.
Here, we suggest that the trough velocity indicates the average outflow velocity v of the neutral clouds, where -v corresponds to the largest optical depth <cit.>.
In this section, we first compare the trough velocity against the Doppler shift of the LIS lines.
Then we compare the outflow speeds of the neutral ISM in low-N_HI channels, traced by the trough velocity,
to tracers of high-N_HI clouds.
Finally, adopting the trough velocity as a direct measurement of the mean Doppler shift of the neutral gas, we revisit why radiative-transfer modeling is typically driven towards a shell velocity faster than the trough velocity.
§.§.§ Trough Velocity and LIS Velocity
Resonance UV absorption lines, e.g., Si ii and C ii, have been extensively used to measure outflow speeds <cit.>.
Here we focus on Si ii λ1260, which is well measured by the CLASSY collaboration.
The Doppler shifts of Si ii in the CLASSY sample have been measured using two different methods.
<cit.> use a double-Gaussian profile to deblend the outflow component from the static ISM component of Si ii and find that the outflow component is mostly ionized.
On the other hand, Parker et al. (in prep) fit a single-Voigt profile to determine the average velocity of all LIS absorbers.
As the LIS lines can also arise from the neutral ISM, the Parker measurements should include the contribution of neutral ISM.
Conceptually, if the LIS absorber is dominated by the static ISM, the Parker measurement, which is close to 0 km s^-1, should be distinct from the Xu measurement.
But if the LIS absorber is dominated by the outflow component, the Parker measurement should be similar to the Xu measurement.
Fig. <ref> presents the comparisons of to both LIS outflow measurements derived by the two methods[J0808+3948 is excluded because its polycyclic aromatic hydrocarbon feature suggests it might be an AGN <cit.>.].
Directly comparing the two LIS outflow measurements in the top and bottom panels, we notice the positions of two objects (J1416+1223, J0938+5428) shift significantly.
Parker et al. (in prep) derive a velocity close to 0 km s^-1, but the outflow velocities derived in <cit.> can reach several hundred km s^-1, suggesting that the LIS absorbers of these two galaxies are mainly static.
Galaxies that shift between the two panels have substantial absorption at v=0, which we attribute to the static ISM.
Secondly, we see that the Si ii velocity measured by Parker et al. (in prep) shows better agreement with the trough velocity, particularly for the galaxies with both Lyα emission and DLAs (blue squares) in the top panel of Fig. <ref>.
This suggests that in those galaxies, the Si ii absorbers (in both outflow and static) contain a significant fraction of neutral hydrogen, though <cit.> suggest that the Si ii in the outflow traces mostly ionized gas.
However, for the galaxies with no DLAs (red circles in Fig. <ref>), the Si ii velocities disagree with the trough velocity in both panels.
The three most deviant circles (J0021+0052, J0926+4427, J1429+0643) have trough velocities close to 0 km s^-1 but Si ii velocities of ≤ -200 km s^-1.
We find that their Si ii line profiles are dominated by the outflow component: they
have very little absorption at the systemic velocity, so the velocity is not sensitive to the measurement method.
This suggests that the Si ii absorption comes mostly from the ionized gas in these galaxies, similar to <cit.>.
§.§.§ Trough Velocity and DLA Velocity
In the top panel of Fig. <ref>, we compare the trough velocity and the velocity of high-N_HI clouds (i.e., DLA system velocity probed by O i absorption line).
It is intriguing to see such a good agreement between these two independent measurements, suggesting that the low-N_HI channels have the same velocity as the high-N_HI channels.
In the bottom panel of Fig. <ref>, we further find that DLA velocity agrees with Si ii velocity.
Here, we include the galaxies that do not have Lyα emission lines as circles.
They are consistent with the galaxies that have both Lyα emission lines and DLA systems.
This hints that, for the galaxies with DLA systems, the underlying reason for the correlation between the Si ii velocity and the trough velocity is that Si ii mainly traces the high-N_HI clouds, and the high-N_HI clouds have similar velocities to the low-N_HI clouds.
§.§.§ Revisiting Outflow Velocity Discrepancy
In this section we discuss the outflow velocity discrepancy using the same profile fittings as Sec. <ref> and propose a new explanation of the discrepancies.
Here we adopt the trough velocity as the intrinsic outflow velocity since it traces the neutral ISM that scatters the Lyα photons.
First, we directly compare the trough velocity measured relative to the spectroscopic redshift
with the outflow velocities v_tlac estimated by the shell model.
In the top panel of Fig. <ref>, we present the comparison for both
the second profile fitting (redshift-unconstrained) and the third profile
fitting (redshift-constrained). Though a clear correlation between
the trough velocity and v_tlac can be seen, v_tlac is larger by 0 – 200 km s^-1
and 0 – 140 km s^-1 than the trough velocity for the second and third fittings,
respectively.
We speculate that the reason for this discrepancy is a `redshift error' required by the model fitting. To test this idea, we shift our measurements to the fictitious reference frame chosen by the fitted redshift.
The bottom panel of Fig. <ref> shows the measurements in the reference frames defined by the second and third fittings.
We have shifted the measurements by
v^Lyα_trough,z_tlac = v^Lyα_trough - (z_tlac - z_spec)× c,
where c is the speed of light.
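As a concrete illustration, the frame shift of the equation above can be applied as follows (a sketch with illustrative numbers; the variable names are ours):

C_KMS = 299_792.458  # speed of light in km/s

def trough_velocity_in_fit_frame(v_trough_kms, z_tlac, z_spec):
    # Shift a trough velocity measured relative to the spectroscopic
    # redshift into the reference frame defined by the shell-model
    # redshift z_tlac, following the equation above.
    return v_trough_kms - (z_tlac - z_spec) * C_KMS

# Example with illustrative values: a fitted redshift exceeding the
# spectroscopic one by 5e-4 shifts the trough blueward by ~150 km/s.
print(trough_velocity_in_fit_frame(-30.0, z_tlac=0.10050, z_spec=0.10000))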
The new correlations are significantly improved and close to the 1:1 relationship.
Especially for the redshift-unconstrained fitting (second attempt), the trough velocity and v_tlac agree well with each other.
These results confirm that we should compare the trough velocity and v_tlac in a common redshift frame. This also confirms that the outflow velocity and the redshift are coupled in the shell model:
λ^Lyα_trough / [(1+z) λ_Lyα] - 1 = -v^outflow/c,
where λ^Lyα_trough is the observed wavelength of the trough and λ_Lyα the rest-frame wavelength of Lyα.
Once the redshift of the shell model is fixed, the model outflow velocity is also determined by the Doppler offset of the observed trough with respect to the model redshift.
Thus, the redshift and outflow velocity discrepancies are the "two sides of the same coin".
The larger outflow velocity preferred by the shell model may hint that the observed B/R ratio is lower than the intrinsic B/R ratio.
Moreover, as we discussed in Sec. <ref>, the observed B/R ratio can be biased by the aperture loss.
Thus, this inspires us to connect the discrepancies to the aperture loss.
We, therefore, propose an explanation for the discrepancies.
Since the aperture loss modifies the B/R ratio to a lower value and the
B/R ratio is tightly anti-correlated with outflow velocity, to achieve
the smaller observed B/R ratio, the shell model will suggest a larger outflow velocity.
Meanwhile, a higher systemic redshift is required to match the trough velocity to the outflow velocity (Eq. <ref>).
Thus, the best-fit redshift and outflow velocity from the shell model are larger than that observed from the spectra.
Aperture loss has a non-negligible impact on the Lyα profile and should always be considered when interpreting Lyα profiles.
§.§ A Schema of the Neutral ISM
In this section, we summarize our interpretation of ISM structure from the previous sections.
We have demonstrated that the ISM in CLASSY galaxies is inhomogeneous, consisting of high-N_HI, low-N_HI, and even LyC-thin regions, based on the clear separation between the DLA and the Lyα emission (see Sec. <ref>), the non-zero residual flux at the trough, and the small peak separations (see Sec. <ref>).
In the left panel of Fig. <ref>, we plot a schema of the neutral ISM for illustration. For simplicity, we adopt a continuous shell model.
The low-N_HI and high-N_HI paths are shown as light blue and dark blue, respectively.
We also use two gray shades to indicate the halos missed due to the aperture effect.
In the right panel, we zoom in to show the radiative transfer in a small slab.
The green lines indicate the Lyα photons and the gray lines indicate the continuum photons.
Although the radiative process is highly non-linear and non-additive, the radiative-transfer fitting results suggest that we can treat the Lyα emission and the DLA system separately.
The DLA system can be well fitted by a partial-covering Voigt profile with a high N_HI, and the Lyα emission, normalized by the uncovered continuum, can be well fitted by the shell model with a low N_HI.
This clear separation between the Lyα emission and the DLA system indicates that the exchange between the low-N_HI and high-N_HI paths should be negligible, as we discussed in Sec. <ref>.
Only very few photons injected into one region can travel
to the other, and thus the radiative processes in the two regions are independent.
This is plausible for two reasons: (1) the probability of a photon traveling from a low-N_HI path to a high-N_HI path is very small, as most such photons are simply “reflected” at the surface between the two channels <cit.>; (2) the Lyα photons, including the underlying continuum photons, that are injected into high-N_HI paths are mostly scattered to much larger impact parameters <cit.>, so most of them are missed due to the aperture effect and leave a DLA system.
Thus, only photons escaping through low-N_HI regions can be observed; the emergent profile is a combination of the spectra from the two regions, as illustrated by the green and gray lines in Fig. <ref>, and takes the form of Lyα emission in the bottom of a DLA system.
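As a toy illustration of this decomposition (not the actual fitting code, which uses modified Voigt profiles and full shell-model radiative transfer), the emergent spectrum can be written as a covering-fraction-weighted sum of the two channels:

def composite_profile(continuum, dla_transmission, lya_emission, f_cov):
    # Toy composite Lya profile on a common wavelength grid:
    # a fraction f_cov of the continuum is covered by the high-N_HI
    # cloud (imprinting the DLA trough), while the remaining sightlines
    # transmit the shell-processed Lya emission plus continuum.
    covered = f_cov * continuum * dla_transmission
    uncovered = (1.0 - f_cov) * (continuum + lya_emission)
    return covered + uncovered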
In the left panel of Fig. <ref>, we plot several low-N_HI channels in different directions.
Although all of these low-N_HI channels allow the escape of Lyα photons, only the channels exposed to the COS aperture (i.e., the horizontal one in Fig. <ref>) can contribute to the observed Lyα emission line, because photons initially injected into low-N_HI channels in other directions still need to penetrate high-N_HI paths before reaching us.
We have proposed a scenario in which aperture loss is responsible for the unexpected profiles of Lyα emission in the bottom of DLA systems in the CLASSY sample.
In this work, we also find that the DLA absorber (neutral gas in high-N_HI paths) has a similar systemic velocity to the neutral gas in the low-N_HI paths.
However, the ionized gas, traced by the outflowing component of Si ii absorption line, has a generally larger velocity compared with the neutral gas in the low-N_HI paths.
Using three LyC leakage diagnostics, we find that at least three galaxies in the CLASSY sample are LyC leaker candidates.
Thus, in the right panel of Fig. <ref>, we use yellow to indicate the possible LyC-thin channels in the ISM, through which Lyα photons can escape easily without much resonant scattering.
By comparing the peak separation with the O32 ratio, we conclude that the O32 ratio traces the covering fraction of LyC-thin channels, consistent with the known LyC leakers <cit.>. The covering fraction increases as the O32 ratio increases, and thus the probability of observing a small peak separation increases.
§ SUMMARY & CONCLUSIONS
In this paper, we extracted high-resolution Lyα line profiles from CLASSY spectra of 45 EoR analogs.
These HST COS/G130M spectra show a wide variety of Lyα profiles, including damped absorption,
emission within damped absorption (DLA) profiles, P-Cygni profiles, and pure emission.
We attribute the damped absorption to photons being scattered out of the spectroscopic aperture, and
we argue that the especially large diversity among CLASSY profiles can be largely attributed to
the large range of physical scales subtended by the COS aperture, from a little over 100 pc up to nearly 8 kpc.
We separated the DLA and emission components of the profiles. Specifically, we adopted the precisely measured
Doppler shifts of the O i absorption components as priors for the Doppler shift of each broad DLA
profile, and we fitted the damped absorption with modified Voigt profiles. After subtracting the
stellar continuum and the DLA profile, we modeled the emission profile and the appropriate
underlying continuum using the shell model. For the first time, we measure the properties of the neutral
shell traversed by the
emergent Lyα emission, and the conditions in the high-column-density clouds, in the same sample of galaxies.
For double-peaked emission line profiles, we defined the Doppler shift of the minimum between the two
emission lines as the trough velocity, which we compared to the Doppler shifts of LIS absorption lines
and the DLA. Our results are summarized below:
* The emission in the bottom of the DLA profile reveals the inhomogeneity of the ISM and the
outflows. The DLA profile and emission line can be surprisingly well fitted by simply splitting
a geometric covering factor between the high-column density sightlines and the lower-N_HI channels through which photons escape. This suggests little exchange between high- and
low-N_HI paths. Combining the sightlines probed by emission lines with those
producing damped absorption, the net distribution of column densities is bimodal and therefore
qualitatively similar to the distributions predicted by numerical simulations of H i regions
<cit.>. It is important to note, however, that this observed distribution
is offset to higher N_HI compared with the simulations. This discrepancy could arise
from gas on larger spatial scales than the simulations include, or from structural differences
in the star-forming complexes; but, whatever its origin, an understanding of the offset will
better inform our understanding of the channels through which not only Lyα but also LyC
photons escape from galaxies.
* We find that the Doppler shift of the trough velocity matches that of the Si ii velocity in most galaxies with DLAs, suggesting that the Si ii absorbers in those galaxies are mainly in the neutral phase.
However, for galaxies without DLA systems, the trough velocity is always smaller than the
Si ii velocity, suggesting that Si ii traces a more ionized phase of the outflow, consistent with <cit.>.
Thus, the Si ii absorbers are multi-phase, including neutral hydrogen in addition to the mostly-ionized phase.
Combining the trough velocity and the Si ii velocity, we are able to identify the ionization state of the Si ii absorbers.
Our comparison also suggests that the trough velocity directly measures the average velocity of neutral gas in the static ISM and outflows.
* In spectra with a DLA, the trough velocity agrees well with the DLA velocity (O i velocity), suggesting that
the high-N_HI clouds have similar kinematics to the low-N_HI clouds.
Further, the Si ii velocity also agrees well with the DLA velocity, even for galaxies without Lyα emission.
Thus, we conclude that Si ii mainly traces the neutral gas in high-N_HI columns if the galaxies show DLAs.
* Motivated by the numerical simulations of <cit.>,
we combine the measurements of peak separation and red peak asymmetry in a diagnostic
diagram that differentiates the type of channels for LyC leakage. Comparing the diagram
with the known LyC leakers, we suggest that the boundary for distinguishing substantial leakage
from small leakage is a peak separation less than ∼400 km s^-1. In the case of leakage, or
equivalently a small peak separation, the red peak asymmetry parameter distinguishes holes,
where A_f > 3, from the more symmetric profiles generated by full breaks.
Six CLASSY galaxies are identified as density-bounded LyC leakers by this technique, in agreement with
the selection based on net trough flux.
The inferred properties of the LyC-thin sightlines depend on galaxy orientation, whereas
the [O iii]/[O ii] ratio offers a sightline-independent perspective.
We confirm the presence of an inverse relation between peak separation and the
[O iii]/[O ii] ratio, as has been noted previously <cit.>.
* Similar to <cit.>, we find that the fitted redshift is always larger than the
spectroscopic redshift and the fitted outflow velocity is larger by 10 – 200 km s^-1 than the trough velocity. The connection between the trough velocity and the outflow velocity offers
new insight into the origin of those discrepancies, which we suggest are not adequately explained
by parameter degeneracies <cit.>. We argue instead that aperture vignetting is the primary
source of the discrepancies. The COS aperture vignettes the blue-shifted peak more than the
red-shifted peak, resulting in a lower blue-to-red peak ratio. To match the lower blue-to-red peak ratio,
the radiative transfer model requires a higher outflow velocity and thus, a larger redshift to match the
outflow velocity to trough velocity.
Our results underline the sensitivity of Lyα profiles to aperture vignetting.
The COS aperture not only excludes a large fraction of Lyα photons, it also modifies the Lyα profile.
Like many CLASSY targets, the composite spectra of star-forming galaxies at z ∼ 1.8 – 3.5 show DLA systems as well
<cit.>. An important difference, however, is that the typical slit width used in
ground-based spectroscopy, 1.2 arcsec, corresponds to ∼10 kpc. The COS aperture subtends a
comparable physical scale only for the most distant Lyman Break Analogs in CLASSY, and their
COS spectra do not show DLAs. Nonetheless, our analysis suggests the DLAs appear in the z∼ 2
spectra because Lyα escapes on spatial scales larger than the slit width.
An important implication of this paper is that aperture vignetting could strongly affect
recent JWST observations of EoR galaxies using the Near Infrared Spectrograph (NIRSpec) slit mode, whose
slit width is just 0.2 arcsec, corresponding to only ∼ 1 kpc.
In this paper, we leveraged these aperture effects, recognizing an opportunity to characterize
the properties of the low-N_HI channels and high-N_HI clouds in the same
set of galaxy sightlines. To fully understand the connection between the observed Lyα profile and
LyC leakage, radiative transfer simulations will need to predict the spatial variations in
the Lyα profile shape. The extracted Lyα profiles used in this work, including
the DLA profiles and the best-fit shell model spectra, can be downloaded from the CLASSY High Level Science
Products database, which is developed and maintained at STScI, Baltimore, USA.[
Data will appear at <https://archive.stsci.edu/hlsp/classy> after acceptance by the ApJ. The data product
can be found here (<https://drive.google.com/drive/folders/1NCUyr1vQ10z4BZuGBqsBuIjL0dWJnmZ1?usp=sharing>) during the review
period.]
§ ACKNOWLEDGEMENTS
The CLASSY team is grateful for the support that was provided by NASA through grant HST-GO-15840,
from the Space Telescope Science Institute, which is
operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA
contract NAS5-26555. CLM thanks the NSF for support through AST-1817125.
BLJ thanks support from the European Space Agency (ESA).
The CLASSY collaboration extends special gratitude to the Lorentz Center for
useful discussions during the "Characterizing Galaxies with Spectroscopy with a view for JWST" 2017 workshop that
led to the formation of the CLASSY collaboration and survey.
HST (COS)
astropy (The Astropy Collaboration 2013, 2018), CalCOS (STScI), python
§ BEST-FIT SPECTRA
Fig. <ref> and <ref> present the best-fit spectra obtained using approaches described in Sec. <ref> and <ref>, respectively.
Measurements
object f_Lyα log L_Lyα EW_Lyα f^Lyα_esc A_f Δ v_Lyα v^blue_Lyα v^red_Lyα v^trough_Lyα f^blue_Lyα f^red_Lyα
10^-15 erg s^-1 cm^-2 erg s^-1 Å % km s^-1 km s^-1 km s^-1 km s^-1 10^-15 erg s^-1 cm^-2 10^-15 erg s^-1 cm^-2
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12)
Double Peaks
J0021+0052 144.56±1.43 42.5 29.04±0.29 25±0.45 2.91±0.04 571±54 -419±53 152±12 -27±20 8.4±0.5 136.4±1.4
J0808+3948 64.31±0.52 42.1 15.02±0.12 27±0.22 1.29±0.12 507±28 -470±26 37±9 -312±6 3.2±0.2 61.2±0.5
J0926+4427 64.64±0.56 42.8 40.65±0.35 35±0.67 1.42±0.13 427±52 -203±45 224±25 -47±17 7.5±0.2 57.1±0.5
J0938+5428 21.14±0.52 41.7 4.06±0.10 3.2±0.083 1.93±0.19 669±52 -296±41 373±31 116±32 7.4±0.3 13.8±0.4
J0942+3547 97.61±0.31 40.7 17.95±0.06 18±0.093 1.53±0.15 267±16 -113±14 154±7 -16±7 14.6±0.3 82.6±0.4
J0944-0038 20.47±0.43 39.2 9.95±0.21 4.1±0.085 0.86±0.35 416±65 -150±61 267±23 34±82 3.4±0.7 17.1±0.8
J0944+3442 0.43±0.09 38.6 0.56±0.11 0.82±0.17 1.71±0.59 531±99 -273±67 257±76 -20±150 0.1±0.1 0.4±0.1
J1016+3754 146.06±1.74 39.8 15.42±0.18 12±0.16 1.51±0.21 404±45 -230±43 175±13 -34±22 12.2±0.9 133.9±1.4
J1024+0524 54.13±0.50 41.1 8.72±0.08 5±0.097 2.87±0.08 464±36 -338±35 126±8 -64±18 2.1±0.3 52.1±0.5
J1025+3622 53.28±0.62 42.3 21.95±0.25 17±0.25 1.70±0.15 469±43 -263±39 206±17 -100±25 4.5±0.2 48.8±0.6
J1044+0353 1.55±0.08 38.8 0.90±0.05 0.096±0.005 1.84±0.36 425±119 -293±111 132±45 -123±98 0.2±0.1 1.3±0.1
J1105+4444 2.52±0.18 39.4 0.53±0.04 0.072±0.0051 ... 999±109 -517±76 482±76 -204±244 0.4±0.2 2.2±0.2
J1119+5130 2.30±0.16 38.1 0.71±0.05 0.78±0.053 4.20±0.27 649±76 -337±67 312±34 -67±108 1.2±0.2 1.1±0.1
J1148+2546 12.34±0.23 40.7 5.62±0.10 7.4±0.14 1.92±0.46 717±97 -450±73 268±63 -132±58 0.4±0.1 12.0±0.2
J1200+1343 80.54±0.44 41.9 56.87±0.31 9.3±0.11 2.68±0.05 530±21 -394±20 136±5 -48±20 9.0±0.2 71.6±0.4
J1253-0312 338.94±1.21 41.6 32.14±0.12 4.4±0.032 1.48±0.11 433±14 -214±11 219±8 -46±9 37.7±0.5 301.1±1.1
J1323-0132 190.14±0.56 41.3 81.19±0.24 21±0.1 1.78±0.06 168±12 -95±12 74±1 -35±7 45.0±1.5 142.7±1.5
J1416+1223 11.70±0.47 41.7 3.26±0.13 3±0.13 1.75±0.28 613±91 -315±80 298±43 35±51 6.0±0.4 5.7±0.3
J1418+2102 26.76±0.22 39.7 19.14±0.16 2.4±0.021 1.10±0.18 403±26 -122±25 281±9 25±9 7.6±0.2 19.0±0.2
J1428+1653 38.94±0.82 42.6 12.32±0.26 27±1 3.34±0.34 416±84 -328±76 89±38 -109±66 2.6±0.3 36.4±0.8
J1429+0643 67.53±0.90 42.8 33.22±0.44 11±0.18 3.56±0.15 545±72 -292±67 253±25 54±33 15.2±0.5 52.4±0.7
J1448-0110 0.43±0.13 38.8 0.09±0.03 0.027±0.0083 ... 395±153 -354±124 40±91 -283±134 0.1±0.1 0.3±0.1
J1521+0759 27.21±0.64 41.8 4.85±0.11 12±0.52 3.27±0.09 404±48 -247±45 157±16 -68±29 -1.0±0.4 28.4±0.7
J1545+0858 160.58±1.02 41.7 29.26±0.19 7.5±0.048 2.75±0.08 284±14 -122±10 163±10 -27±15 6.4±0.3 154.2±1.0
Single Peak / P-Cygni
J0036-3333 175.93±1.20 41.1 7.13±0.05 9.2±0.063 ... ... ... 100±5 ... ... 175.9±1.2
J0940+2935 0.58±0.06 36.8 0.34±0.03 0.46±0.044 ... ... ... 273±57 ... ... 0.6±0.1
J1112+5503 12.43±0.45 41.8 5.45±0.20 3.8±0.15 ... ... ... 143±47 ... ... 12.4±0.4
J1144+4012 2.50±0.21 41.0 1.75±0.15 1.6±0.14 ... ... ... 318±70 ... ... 2.5±0.2
J1157+3220 389.02±2.53 41.1 21.63±0.14 23±0.2 ... ... ... 50±13 ... ... 389.0±2.5
J1225+6109 0.50±0.21 36.8 0.04±0.02 0.016±0.0065 ... ... ... 52±63 ... ... 0.5±0.2
J1314+3452 0.85±0.06 37.2 0.21±0.01 0.027±0.0018 ... ... ... 210±47 ... ... 0.8±0.1
J1359+5726 72.71±0.68 41.2 8.71±0.08 6.7±0.081 ... ... ... 148±16 ... ... 72.7±0.7
J1525+0757 67.41±1.21 42.0 14.92±0.27 16±0.43 ... ... ... 82±7 ... ... 67.4±1.2
J1612+0817 36.97±0.83 42.3 12.71±0.29 7±0.18 ... ... ... 114±14 ... ... 37.0±0.8
(1) object name; (2) flux; (3) luminosity; (4) equivalent width; (5) escape fraction; (6) red peak asymmetry; (7) peak separation; (8) blue peak velocity offset; (9) red peak velocity offset; (10) trough velocity offset; (11) blue peak flux; (12) red peak flux. We note that the luminosity distances of some galaxies used in this work are different from those in <cit.> because of the correction for cosmic flow. The properties (e.g., stellar mass, star formation rate) of those galaxies which rely on the luminosities are scaled accordingly.
Ancillary data
object f_1500 M_1500 Z_neb log M_⋆ E(B-V) O32 v^outflow_Si II r_50
10^-15 erg s^-1 cm^-2 M_⊙ km s^-1
(1) (2) (3) (4) (5) (6) (7) (8) (9)
J0021+0052 3.94 -20.55 8.17±0.07 9.09^+0.18_-0.38 0.13±0.006 2.0±0.1 231^+77_-77 0.25
J0036-3333 16.60 -18.34 8.21±0.17 9.09^+0.26_-0.23 0.30±0.012 1.1±0.1 157^+22_-22 0.28
J0127-0619 4.04 -13.58 7.68±0.02 8.63^+0.18_-0.15 0.48±0.006 1.1±0.1 ... 0.15
J0144+0453 1.87 -12.63 7.76±0.02 7.52^+0.24_-0.29 0.04±0.030 2.1±0.1 48^+16_-16 3.54
J0337-0502 7.99 -16.60 7.46±0.04 7.01^+0.24_-0.21 0.05±0.006 6.2±0.2 ... 1.62
J0405-3648 0.96 -10.90 7.04±0.05 6.60^+0.28_-0.28 0.11±0.005 0.6±0.1 ... 6.43
J0808+3948 3.42 -20.23 8.77±0.12 9.12^+0.30_-0.17 0.24±0.070 0.8±0.1 646^+65_-65 0.08
J0823+2806 3.85 -18.86 8.28±0.01 9.38^+0.33_-0.19 0.21±0.004 2.0±0.1 136^+45_-45 0.28
J0926+4427 1.14 -20.64 8.08±0.02 8.76^+0.30_-0.26 0.10±0.008 3.1±0.1 353^+52_-52 0.23
J0934+5514 15.10 -14.05 6.98±0.01 6.25^+0.15_-0.20 0.07±0.007 8.7±0.1 112^+37_-37 1.53
J0938+5428 3.56 -20.53 8.25±0.02 9.15^+0.18_-0.29 0.13±0.006 1.9±0.1 215^+72_-72 0.28
J0940+2935 1.45 -11.14 7.66±0.07 6.80^+0.23_-0.40 0.06±0.010 0.7±0.1 102^+34_-34 3.06
J0942+3547 3.80 -16.30 8.13±0.03 7.56^+0.21_-0.29 0.06±0.011 2.6±0.1 97^+26_-26 0.33
J0944-0038 1.40 -13.07 7.83±0.01 6.89^+0.44_-0.25 0.16±0.010 2.9±0.1 64^+21_-21 2.34
J0944+3442 0.69 -15.06 7.62±0.11 8.19^+0.40_-0.23 0.16±0.013 1.4±0.1 ... 3.74
J1016+3754 7.07 -14.43 7.56±0.01 6.77^+0.27_-0.22 0.07±0.012 4.6±0.2 116^+31_-31 1.52
J1024+0524 4.50 -18.20 7.84±0.03 7.88^+0.37_-0.24 0.10±0.016 2.1±0.1 94^+12_-12 0.40
J1025+3622 1.81 -20.30 8.13±0.01 8.87^+0.25_-0.27 0.09±0.006 2.4±0.1 155^+24_-24 0.35
J1044+0353 1.70 -15.25 7.45±0.03 6.84^+0.41_-0.26 0.08±0.007 6.8±0.1 52^+12_-12 0.38
J1105+4444 4.68 -17.28 8.23±0.01 8.98^+0.29_-0.24 0.17±0.005 2.0±0.1 115^+23_-23 4.11
J1112+5503 1.91 -20.45 8.45±0.06 9.59^+0.33_-0.19 0.23±0.016 0.9±0.1 349^+107_-107 0.20
J1119+5130 2.63 -13.54 7.57±0.04 6.81^+0.15_-0.28 0.10±0.008 2.0±0.1 65^+22_-22 2.18
J1129+2034 1.87 -13.62 8.28±0.04 8.20^+0.37_-0.27 0.23±0.011 1.8±0.1 51^+17_-17 0.38
J1132+5722 2.57 -13.69 7.58±0.08 7.32^+0.23_-0.26 0.10±0.008 0.8±0.1 ... 0.84
J1132+1411 1.75 -15.75 8.25±0.01 8.67^+0.28_-0.19 0.13±0.008 2.7±0.1 60^+10_-10 8.86
J1144+4012 1.20 -19.86 8.43±0.20 9.89^+0.18_-0.29 0.22±0.010 0.6±0.1 246^+33_-33 0.40
J1148+2546 2.07 -18.03 7.94±0.01 8.13^+0.34_-0.24 0.10±0.021 3.7±0.1 95^+19_-19 1.31
J1150+1501 12.60 -13.71 8.14±0.01 6.83^+0.28_-0.30 0.04±0.004 2.3±0.1 67^+22_-22 1.29
J1157+3220 14.40 -17.27 8.43±0.02 9.08^+0.32_-0.18 0.08±0.006 1.2±0.1 238^+49_-49 2.89
J1200+1343 1.38 -18.53 8.26±0.02 8.12^+0.47_-0.42 0.15±0.006 5.1±0.1 192^+13_-13 0.18
J1225+6109 9.50 -13.28 7.97±0.01 7.09^+0.34_-0.24 0.11±0.005 4.7±0.1 51^+17_-17 2.91
J1253-0312 9.11 -18.19 8.06±0.01 7.66^+0.51_-0.23 0.16±0.008 8.0±0.2 113^+38_-38 0.85
J1314+3452 3.72 -12.65 8.26±0.01 7.53^+0.30_-0.21 0.14±0.006 2.3±0.1 62^+21_-21 0.30
J1323-0132 1.33 -15.94 7.71±0.04 6.29^+0.26_-0.10 0.13±0.042 37.8±3.0 ... 0.23
J1359+5726 6.34 -18.53 7.98±0.01 8.39^+0.31_-0.26 0.09±0.006 2.6±0.1 161^+23_-23 1.10
J1416+1223 2.62 -20.63 8.53±0.11 9.59^+0.32_-0.26 0.25±0.008 0.8±0.1 398^+68_-68 0.13
J1418+2102 1.17 -13.99 7.75±0.02 6.26^+0.49_-0.35 0.08±0.006 4.7±0.1 51^+7_-7 0.40
J1428+1653 1.25 -20.75 8.33±0.05 9.56^+0.15_-0.23 0.14±0.008 1.2±0.1 140^+25_-25 0.35
J1429+0643 1.62 -20.92 8.10±0.03 8.80^+0.35_-0.21 0.12±0.012 4.2±0.2 230^+51_-51 0.15
J1444+4237 2.08 -11.33 7.64±0.02 6.39^+0.17_-0.17 0.08±0.053 4.1±0.1 54^+18_-18 8.20
J1448-0110 4.08 -17.55 8.13±0.01 7.58^+0.41_-0.24 0.15±0.005 8.0±0.1 145^+43_-43 0.23
J1521+0759 3.52 -20.33 8.31±0.14 9.00^+0.29_-0.30 0.15±0.008 1.5±0.1 161^+54_-54 0.28
J1525+0757 3.52 -19.83 8.33±0.04 10.06^+0.28_-0.42 0.25±0.008 0.5±0.1 408^+28_-28 0.25
J1545+0858 4.37 -18.40 7.75±0.03 7.50^+0.43_-0.26 0.11±0.036 8.6±0.3 113^+33_-33 0.33
J1612+0817 2.70 -21.12 8.18±0.19 9.78^+0.28_-0.26 0.29±0.008 0.7±0.1 459^+63_-63 0.20
(1) object name; (2) UV flux at 1500 Å from <cit.>; (3) UV absolute magnitude at 1500 Å; (4) metallicity from <cit.>; (5) stellar mass; (6) dust extinction from <cit.>; (7) O32 ratio; (8) velocity of Si ii absorption line; (9) half light radius from <cit.>.
Best-fit parameters of the second attempt
object z_tlac v_exp log N_HI log T log τ σ_i EW_i
km s^-1 K km s ^-1 Å
(1) (2) (3) (4) (5) (6) (7) (8)
J0021+0052 0.098902 214^+1_-2 18.79^+0.11_-0.08 3.8^+0.2_-0.1 -2.05^+1.01_-0.11 117^+1_-1 16.8^+0.8_-0.7
J0036-3333 0.020939 207^+2_-1 18.68^+0.09_-0.06 3.5^+0.2_-0.1 -2.10^+1.13_-0.06 93^+1_-1 6.6^+0.5_-0.1
J0808+3948 0.091384 365^+1_-1 16.76^+0.08_-0.04 3.4^+0.1_-0.1 -1.57^+0.22_-0.11 103^+1_-1 8.7^+0.1_-0.1
J0926+4427 0.180817 131^+2_-3 19.23^+0.05_-0.08 3.7^+0.1_-0.1 -1.74^+0.11_-0.21 248^+2_-3 31.8^+1.0_-1.0
J0938+5428 0.102513 21^+4_-2 19.79^+0.08_-0.08 4.2^+0.2_-0.1 -0.68^+0.74_-0.87 308^+5_-4 74.5^+5.3_-4.4
J0942+3547 0.015121 86^+1_-1 18.20^+0.06_-0.06 3.1^+0.1_-0.1 -3.31^+0.38_-0.09 167^+1_-1 18.3^+0.1_-0.1
J0944-0038 0.005187 119^+4_-4 18.60^+0.09_-0.07 3.4^+0.2_-0.2 -1.35^+0.20_-0.12 218^+3_-4 2130.2^+1851.2_-1159.2
J0944+3442 0.020226 72^+18_-17 19.44^+0.21_-0.23 4.1^+0.7_-0.8 0.44^+0.46_-0.43 182^+12_-11 39.6^+21.4_-14.0
J1016+3754 0.004131 96^+2_-1 18.85^+0.07_-0.11 4.3^+0.1_-0.2 0.10^+0.80_-0.75 142^+1_-2 26.2^+1.9_-1.9
J1024+0524 0.033425 176^+2_-1 19.01^+0.08_-0.09 5.1^+0.1_-0.2 -1.89^+0.92_-0.16 82^+1_-1 16.2^+0.6_-0.4
J1025+3622 0.126786 167^+3_-2 18.99^+0.09_-0.07 4.2^+0.2_-0.1 -0.88^+0.40_-0.55 245^+3_-3 41.3^+2.1_-1.9
J1044+0353 0.013070 175^+10_-15 18.18^+0.20_-0.19 3.5^+0.3_-0.4 -1.14^+0.25_-0.10 248^+14_-9 9.9^+0.9_-0.7
J1112+5503 0.131707 210^+3_-3 19.55^+0.10_-0.05 4.6^+0.3_-0.1 0.55^+1.10_-1.17 270^+3_-4 20.9^+1.5_-1.6
J1119+5130 0.004536 0^+2_-1 20.42^+0.24_-0.65 4.0^+0.4_-0.3 -1.80^+1.21_-0.03 316^+10_-8 6.5^+6.6_-2.4
J1144+4012 0.126832 108^+11_-8 20.20^+0.07_-0.08 4.9^+0.3_-0.7 0.40^+0.46_-0.52 259^+9_-11 281.6^+50.7_-41.7
J1148+2546 0.045710 279^+5_-4 19.19^+0.09_-0.07 3.8^+0.2_-0.2 0.53^+0.67_-0.56 281^+4_-4 46.3^+3.7_-3.6
J1157+3220 0.010230 163^+1_-2 20.06^+0.04_-0.09 5.0^+0.1_-0.2 0.69^+1.98_-1.60 191^+1_-2 200.1^+2.7_-3.5
J1200+1343 0.066942 173^+1_-2 16.56^+0.09_-0.05 5.4^+0.1_-0.1 -2.40^+0.65_-0.12 266^+1_-1 53.6^+1.7_-1.8
J1253-0312 0.023087 184^+0_-1 18.67^+0.03_-0.06 4.3^+0.1_-0.1 -2.70^+0.48_-0.00 256^+1_-1 31.2^+0.5_-0.4
J1323-0132 0.022534 37^+1_-1 17.93^+0.05_-0.02 3.1^+0.0_-0.1 -2.40^+0.24_-0.12 121^+0_-0 75.7^+1.1_-1.1
J1359+5726 0.034107 205^+2_-1 19.03^+0.13_-0.09 4.7^+0.1_-0.2 -1.11^+0.10_-0.47 128^+2_-2 51.8^+2.5_-2.2
J1416+1223 0.123181 0^+2_-2 19.59^+0.09_-0.08 4.6^+0.2_-0.3 -0.22^+0.70_-0.73 289^+6_-6 49.5^+4.1_-3.9
J1418+2102 0.009016 98^+2_-2 18.18^+0.09_-0.06 3.0^+0.1_-0.1 0.68^+1.65_-1.61 230^+1_-1 101.8^+4.8_-4.9
J1428+1653 0.181780 121^+3_-4 18.80^+0.10_-0.07 3.5^+0.1_-0.2 -1.12^+0.08_-0.40 52^+1_-1 17.8^+1.9_-1.0
J1429+0643 0.173984 112^+2_-3 19.21^+0.08_-0.09 3.1^+0.2_-0.2 -0.38^+0.67_-0.82 323^+5_-5 46.8^+3.6_-3.1
J1521+0759 0.094771 215^+2_-2 19.08^+0.12_-0.11 3.0^+0.2_-0.1 -1.04^+0.16_-0.28 93^+2_-2 9.2^+1.8_-0.6
J1525+0757 0.075913 136^+2_-1 18.58^+0.09_-0.06 4.5^+0.2_-0.1 -1.39^+0.00_-0.44 52^+1_-1 19.2^+1.2_-1.0
J1545+0858 0.038336 194^+1_-2 18.41^+0.07_-0.10 3.0^+0.2_-0.1 0.69^+1.91_-1.58 156^+1_-1 55.3^+1.5_-1.8
J1612+0817 0.149267 246^+3_-2 19.61^+0.08_-0.10 5.3^+0.2_-0.2 0.09^+1.03_-1.11 112^+2_-2 35.6^+2.3_-2.0
(1) object name; (2) redshift estimated by the shell model; (3) outflow velocity of expanding shell; (4) H i column density; (5) temperature; (6) dust extinction; (7) intrinsic line width; (8) equivalent width.
Best-fit parameters of the third attempt
object z_tlac v_exp log N_HI log T log τ σ_i EW_i
km s^-1 K km s ^-1 Å
(1) (2) (3) (4) (5) (6) (7) (8)
J0021+0052 0.098535 163^+1_-2 19.17^+0.16_-0.05 4.5^+0.2_-0.1 -0.67^+0.19_-0.07 225^+1_-1 26.7^+4.0_-1.1
J0036-3333 0.020553 120^+2_-2 19.17^+0.08_-0.05 3.0^+0.1_-0.1 -0.63^+0.04_-0.04 142^+2_-2 9.0^+0.3_-0.3
J0808+3948 0.091375 348^+2_-2 16.17^+0.08_-0.05 3.7^+0.1_-0.1 -1.67^+0.46_-0.73 102^+1_-1 8.7^+0.1_-0.1
J0926+4427 0.180816 133^+1_-1 19.19^+0.07_-0.06 3.8^+0.1_-0.2 -1.73^+0.24_-0.38 244^+3_-3 29.9^+1.3_-1.2
J0938+5428 0.102247 11^+1_-2 20.63^+0.05_-0.11 4.3^+0.1_-0.2 -2.44^+0.45_-0.56 291^+5_-4 31.8^+3.9_-1.7
J0942+3547 0.015010 69^+0_-0 18.58^+0.03_-0.03 3.3^+0.0_-0.0 -3.67^+0.58_-0.73 170^+1_-1 18.4^+0.2_-0.2
J0944-0038 0.004912 70^+3_-3 19.42^+0.06_-0.08 3.9^+0.3_-0.2 -2.21^+0.54_-0.71 217^+3_-4 302.0^+23.0_-24.3
J0944+3442 0.020138 59^+9_-10 19.61^+0.11_-0.10 3.3^+0.9_-0.4 0.34^+0.23_-0.35 179^+11_-11 55.8^+18.3_-18.1
J1016+3754 0.004024 81^+2_-2 19.04^+0.13_-0.11 3.8^+0.3_-0.1 -0.58^+0.09_-0.08 148^+2_-1 21.6^+2.4_-1.3
J1024+0524 0.033304 139^+3_-3 19.05^+0.12_-0.11 4.6^+0.1_-0.1 -0.69^+0.14_-0.08 192^+2_-2 20.0^+2.5_-0.8
J1025+3622 0.126552 131^+2_-4 19.43^+0.07_-0.08 3.9^+0.3_-0.2 -1.31^+0.34_-0.35 243^+3_-3 38.2^+3.6_-2.3
J1044+0353 0.012998 158^+9_-9 18.40^+0.11_-0.13 3.4^+0.3_-0.3 -1.16^+0.44_-0.63 253^+12_-8 9.9^+0.9_-0.8
J1112+5503 0.131497 153^+3_-4 19.83^+0.06_-0.09 3.8^+0.2_-0.2 0.25^+0.08_-0.07 257^+4_-4 26.8^+2.7_-2.4
J1119+5130 0.004532 1^+1_-2 20.18^+0.10_-0.12 4.6^+0.1_-0.2 -0.91^+0.31_-0.26 225^+9_-9 13.2^+6.2_-4.6
J1144+4012 0.126862 111^+12_-7 20.19^+0.08_-0.07 4.8^+0.3_-1.0 0.48^+0.13_-0.19 252^+8_-10 324.8^+52.3_-63.3
J1148+2546 0.045233 187^+6_-5 19.66^+0.15_-0.11 5.0^+0.1_-0.2 -1.50^+0.58_-0.60 265^+7_-8 28.3^+4.7_-2.5
J1157+3220 0.011074 392^+2_-2 19.46^+0.12_-0.11 5.0^+0.1_-0.2 0.56^+0.03_-0.02 118^+1_-1 33.6^+4.8_-0.7
J1200+1343 0.066828 131^+3_-3 19.23^+0.17_-0.08 4.5^+0.2_-0.1 -0.04^+0.04_-0.04 219^+1_-1 86.3^+4.6_-3.1
J1253-0312 0.022846 131^+1_-1 19.28^+0.02_-0.04 4.2^+0.1_-0.1 -3.19^+0.70_-0.69 246^+1_-1 30.2^+0.4_-0.4
J1323-0132 0.022511 26^+1_-1 18.22^+0.06_-0.08 3.2^+0.0_-0.1 -0.95^+0.04_-0.05 112^+1_-1 87.5^+1.3_-1.2
J1359+5726 0.033938 175^+3_-2 19.30^+0.13_-0.08 4.6^+0.1_-0.2 -0.82^+0.23_-0.18 125^+2_-2 55.7^+5.5_-2.9
J1416+1223 0.123174 -5^+4_-4 19.20^+0.08_-0.08 3.6^+0.6_-0.3 0.38^+0.10_-0.10 306^+7_-8 41.7^+2.9_-2.6
J1418+2102 0.008699 21^+1_-1 19.73^+0.05_-0.02 4.1^+0.1_-0.1 -3.38^+0.58_-0.76 246^+1_-1 67.4^+2.9_-2.9
J1428+1653 0.181544 122^+2_-3 19.23^+0.05_-0.09 3.0^+0.2_-0.1 0.21^+0.07_-0.08 168^+2_-3 46.5^+3.3_-3.4
J1429+0643 0.173649 21^+2_-2 19.95^+0.09_-0.04 3.4^+0.2_-0.1 -1.30^+0.06_-0.05 321^+5_-6 41.6^+2.1_-2.1
J1521+0759 0.094335 141^+3_-4 19.40^+0.08_-0.08 5.0^+0.1_-0.1 -1.21^+0.22_-0.23 97^+4_-4 9.5^+1.1_-0.7
J1525+0757 0.075916 133^+1_-2 18.58^+0.09_-0.07 4.7^+0.1_-0.2 -1.09^+0.12_-0.11 44^+2_-2 20.1^+1.0_-1.0
J1545+0858 0.037834 92^+2_-3 19.43^+0.06_-0.06 4.2^+0.1_-0.2 -1.87^+0.34_-0.40 72^+3_-2 37.0^+1.8_-1.1
J1612+0817 0.149198 187^+4_-2 19.79^+0.08_-0.07 5.0^+0.2_-0.2 0.42^+0.03_-0.02 84^+2_-2 94.9^+5.1_-4.8
(1) object name; (2) redshift estimated by the shell model; (3) outflow velocity of expanding shell; (4) H i column density; (5) temperature; (6) dust extinction; (7) intrinsic line width; (8) equivalent width.
|
http://arxiv.org/abs/2307.05454v1 | 20230711173303 | Empowering Cross-lingual Behavioral Testing of NLP Models with Typological Features | [
"Ester Hlavnova",
"Sebastian Ruder"
] | cs.CL | [
"cs.CL",
"cs.LG"
] |
Empowering Cross-lingual Behavioral Testing of NLP Models with Typological Features
Ester Hlavnova, Sebastian Ruder
August 12, 2023
===================================================================================
A challenge towards developing NLP systems for the world's languages is understanding how they generalize to typological differences relevant for real-world applications. To this end, we propose M2C, a morphologically-aware framework for behavioral testing of NLP models. We use M2C to generate tests that probe models' behavior in light of specific linguistic features in 12 typologically diverse languages. We evaluate state-of-the-art language models on the generated tests. While models excel at most tests in English, we highlight generalization failures to specific typological characteristics such as temporal expressions in Swahili and compounding possessives in Finnish. Our findings motivate the development of models that address these blind spots.[We make all code publicly available at <https://github.com/google-research/multi-morph-checklist>.]
§ INTRODUCTION
In natural language processing (NLP), there is a need to build systems that serve more of the world's approximately 6,900 languages. As one measure of linguistic diversity, the World Atlas of Language Structures <cit.> records 192 linguistic features along which languages differ. These range from the order of subject, object, and verb <cit.> to the number of basic color categories <cit.>. Languages present in existing NLP datasets mostly lie in low-density regions of the space of possible typological features <cit.>. In other words, many linguistic features that are common across the world's languages are not observed in languages that are the focus of NLP research.[For instance, while tone is present in around 80% of African languages <cit.>, few Indo-European languages can be considered tonal.]
It is thus important to investigate to which linguistic features models can generalize and where they face challenges. However, existing datasets do not allow for a fine-grained cross-lingual evaluation and mainly permit comparisons on a language level <cit.>. Prior studies
focused on syntax and grammar through the lens of acceptability judgements <cit.>. While these enable the evaluation of what a model deems `natural' in a given language, it is often unclear how such biases relate to real-world applications of NLP technology.
We propose Multilingual Morphological Checklist (M2C) to enable the investigation of a broader set of cross-lingual differences in practical scenarios. Specifically, we create a morphologically-aware behavioral testing framework <cit.>
that allows for the specification of tests in a diverse set of languages. Using this framework, we design tests that probe model's behavior in light of specific capabilities and typological features in 12 typologically diverse languages. We focus on a question answering setting as it represents one of the most general and widely useful NLP applications <cit.> and enables zero-shot evaluation of models. We create tests that cover a diverse set of reasoning capabilities involving general linguistic features that are expressed differently across languages—negation, numerals, spatial and temporal expressions, and comparatives—as well as features unique to certain languages such as time in Swahili, measure words in Chinese, compounding possessives in Finnish, and motion verbs in Russian. We evaluate state-of-the-art language models on the generated tests in zero-shot and one-shot settings. Our findings shed light on generalization failures to specific typological features. For instance, all models struggle with time expressions in Swahili and measure words in Chinese. We show the workflow of using M2C, from template creation to model evaluation, in Figure <ref>.
Our contributions are:
(1) We create a new morphologically-aware multilingual behavioral testing framework.
(2) We highlight linguistic features that are challenging in different languages.
(3) We design tests that probe model capabilities in light of practically relevant typological differences.
(4) We evaluate state-of-the-art language models on the generated tests.
(5) We shed light on the challenges posed by typological differences in multilingual scenarios.
§ RELATED WORK
Perplexity Perplexity is a standard measure of evaluating language model performance, which has also been used in multilingual settings <cit.>. Besides being difficult to compare across segmentations, perplexity does not provide more fine-grained insights regarding model behavior <cit.>. Acceptability evaluations compare perplexity between minimal pairs of grammatical and ungrammatical sentences
<cit.>. Such evaluations have been extended to other languages <cit.>, which requires writing extensive language-specific grammars while the relevance of syntax biases in real-world applications remains unclear.
Evaluation of large models Most benchmarks designed for evaluating large models focus on assessing their performance on a collection of complex tasks <cit.>. However, such benchmarks are unable to highlight more fine-grained model limitations <cit.> and are outpaced by the development of new models.
Behavioral testing Behavioral testing sheds light on model capabilities via the design of simple targeted tasks. Early work such as bAbI <cit.> focused on toy tasks requiring simple reasoning capabilities while oLMpics <cit.> consisted of 8 short classification tasks for masked language models. Recently, LMentry <cit.> provides simple tests assessing fundamental generation capabilities. A common test bed is natural language inference <cit.> where analyses of reasoning types have been extended to other languages <cit.> but require existing data.
The CheckList framework <cit.> enables the generation of behavioral tests for NLP models but its templates are English-centric. English Checklist tests have been extended to other languages via translation <cit.>. Such approaches, however, struggle with comprehensively covering linguistic features specific to a language and are not able to easily represent morphological variation. Relatedly, <cit.> create templates that integrate morphology for simple knowledge retrieval queries while <cit.> automatically translate knowledge retrieval queries into other languages. Compared to their approach, our framework allows for integrating morphology into a broader range of tests and is more scalable and flexible.
§ CHECKLIST
CheckList <cit.> relies on templates to generate a large amount of samples in order to evaluate models' behavior regarding different tasks and capabilities in a controlled manner. A template consists of a string with placeholders such as first_name delimited by curly brackets, first_name is adj. The user provides a set of values for each placeholder, for instance, first_name = {Michael, John, ... } and adj = {busy, friendly, ... }, which are used to populate the templates with their Cartesian product. The generated samples can then be applied to systematically test a model's performance in a specific setting.
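The expansion described above is essentially a Cartesian product over placeholder values; a minimal sketch of the idea (not the CheckList API itself):

from itertools import product

template = "{first_name} is {adj}"
values = {
    "first_name": ["Michael", "John"],
    "adj": ["busy", "friendly"],
}

keys = list(values)
samples = [
    template.format(**dict(zip(keys, combo)))
    for combo in product(*(values[k] for k in keys))
]
# -> ['Michael is busy', 'Michael is friendly', 'John is busy', 'John is friendly']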
Multilingual tests CheckList has been designed for English and provides mainly English-specific functionality. For example, it matches indefinite articles with nouns based on their starting letter: the placeholder a:job generates a lawyer and an engineer. As a consequence, CheckList is not capable of effectively generating tests in languages with richer morphology, which require maintaining agreement between multiple parts of the template—a feature that is beyond the scope of CheckList.
While multilingual tests can be generated by translating English tests <cit.>, optionally including template extraction and human verification, such generated templates struggle with handling rich morphology. In addition, in order to systematically probe linguistic features specific to a language, it is crucial to be able to efficiently generate in-language tests from scratch.
§ M2C FRAMEWORK
We propose the M2C (Multilingual Morphological Checklist) framework in order to enable the generation of tests in a broad set of languages, including languages with rich morphology.
A user provides a template as a string, a list of values for each placeholder, and an optional configuration dictionary in case of duplicate placeholders. The placeholder values can either be passed without inflections (for example, names in English) as a list of strings, or as a list of dictionaries with their corresponding inflected values. Each key of the dictionary is a feature combination (MASC.PL) and the value is the corresponding string (e.g. apples). As such, each entity can have multiple inflections, for instance, in English apple and apples. We show the general M2C workflow in Figure <ref>.
Morphological categories Our library follows the UniMorph Schema representation <cit.>, which decomposes morphology into 23 dimensions and over 212 features. For example, Gender is one dimension, which contains features such as Feminine (fem), Masculine (masc), and Neuter (neut).
The ability to indicate these dimensions using a clear codification allows us to describe both the value attributes given to placeholders and their dependence on one another. As an example, in order to differentiate between Juliette est grande and Julien est grand in French, it is necessary to ensure gender agreement between noun and adjective by including the Gender attribute in the template. To cover such functionality, we introduce a syntax describing the morphological dependence between placeholders: X.<Y.D> signifies that X should have the same feature for dimension D as Y. In the above example, this is realized by first_name est adj.<first_name.GENDER>.
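To make the dependence syntax concrete, a template and its inflected placeholder values might be specified roughly as follows (illustrative data structures, not the exact M2C API):

# French template with gender agreement between noun and adjective,
# using the X.<Y.D> dependence syntax described above.
template = "{first_name} est {adj.<first_name.GENDER>}"

first_name = [
    {"FEM": "Juliette"},
    {"MASC": "Julien"},
]
adj = [
    {"FEM": "grande", "MASC": "grand"},
]

# Expansion selects the adjective inflection whose GENDER feature matches
# that of the chosen first_name, yielding "Juliette est grande" and
# "Julien est grand".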
Language-specific dimensions While initially relying on the UniMorph schema, we found cases where the existing dimensions are not sufficient to describe morphology of placeholders within the templates, which is especially necessary for dealing with exceptions. For instance, the trifold article distinction in Italian masculine gender—il treno, l'hotel, lo studente—depends on whether the noun starts with a consonant, vowel or h, or a specific consonant combination[gn, pn, ps, x, y, z, s followed by another consonant or i followed by a vowel.] respectively. In order to lexically encode such exceptions, we provide the ability to add dimensions, in this case startswith, which includes features vow, cons, and cons2. While the goal of M2C is not to be exhaustive, it should enable encoding a sufficient number of dimensions to allow the user to write templates for diverse use cases.[UniMorph defines a generic dimension `Language Specific features' with attributes lgspec1, .., lgspecn, which does not provide the clarity and flexibility of our setup.]
Advanced templating system
To cover the variety of morphological phenomena, we designed a templating system with a rich syntax. When describing dependence rules, features can be added sequentially and are commutative, <first_name.GENDER.NUMBER> is equivalent to <first_name.NUMBER.GENDER> where NUMBER = {singular, plural}. Often, only two or three output values are necessary, which directly depend on a placeholder’s feature. We allow a simple expression to be passed directly in the template to make this rule explicit:
is:first_name.SG|are:first_name.PL, which produces is for a singular first_name and are for a plural one.
Finally, we allow multiple placeholders with the same type, first_name1 and first_name2, to be populated by values of a common type, first_name. In the case of multiple placeholders,
we can provide a configuration for each placeholder type that specifies boolean repetition and order fields to, for instance, avoid having examples like John and John (repetition) or John and Mary and Mary and John (order).
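One way the repetition and order flags could be realized is sketched below (our reading of the semantics, not the M2C implementation):

from itertools import combinations, permutations

first_names = ["John", "Mary", "Ali"]

# config = {"first_name": {"repetition": False, "order": False}}
# repetition=False: never pair a value with itself (no "John and John").
# order=False: keep only one ordering of each pair (no "Mary and John"
# if "John and Mary" is already generated).
pairs_unordered = list(combinations(first_names, 2))
# [('John', 'Mary'), ('John', 'Ali'), ('Mary', 'Ali')]

# With order allowed (still without repetition), keep both orderings.
pairs_ordered = list(permutations(first_names, 2))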
Manual enumeration of features and their corresponding values is a barrier to scaling. To circumvent this, we integrate UnimorphInflect <cit.>, which uses models trained on Unimorph data using the Unimorph Schema to generate inflections in 55 languages. As Unimorph models are imperfect—test accuracies range from 90%+ in many languages to 23% in Arabic—we envision a workflow where inflections are generated at scale using UnimorphInflect and then manually inspected by annotators for correctness. We expect the increase in productivity, and thus reduction in cost, to be significant by leveraging semi-automated as opposed to manual generation for languages with good performance.[In order to ensure high-quality tests for the experiments in §<ref>, we manually enumerate all relevant inflections.]
Answer validation Most prior benchmarks for behavioral testing of language models have focused on classification tasks <cit.>. As M2C aims to support the evaluation of generative models using arbitrary templates, we implement functionality to match a range of outputs for each template, based on morphology, string matching and regex.[For each of the templates in §<ref>, we curate possible outputs and implement regex and functions capturing them.]
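A minimal sketch of such an answer validator (our own illustration; the matching functions shipped with M2C are richer):

import re

def is_valid_answer(prediction, gold_variants, pattern=None):
    # Accept a prediction if it matches any gold string after light
    # normalization, or an optional regex pattern.
    norm = prediction.strip().strip(".").lower()
    if any(norm == gold.lower() for gold in gold_variants):
        return True
    return bool(pattern) and re.fullmatch(pattern, norm) is not None

# Accept both the numeral and the spelled-out form of an answer.
print(is_valid_answer("Seventeen.", ["17", "seventeen"]))  # True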
Summary Overall, the M2C framework enables the systematic and controlled generation of high-quality tests at scale in a broad set of languages. As such, it occupies a middle ground between libraries such as SimpleNLG <cit.> that generate high-quality data but require encoding each language-specific rule, and template expansion via generative language models <cit.>, which are highly scalable but less reliable and underperform on languages with limited data <cit.>. M2C enables modular design by allowing the addition of user-specified dimensions and features for specific templates and languages without requiring to encode all possible rules of a language. Furthermore, an advanced templating syntax and the semi-automatic generation of inflections may improve user productivity.
§ CAPABILITIES AND TYPOLOGICAL FEATURES
Languages We generate tests targeting capabilities and typological features in 12 typologically diverse languages: English (), Spanish (), Italian (), French (), German (), Swedish (), Finnish (), Slovak (), Russian (), Swahili (), Mandarin Chinese (), and Arabic ().
Recent models have excelled at a wide range of tasks in English requiring a diverse set of reasoning and understanding capabilities <cit.>. As most languages are morphologically richer than English, they encode the linguistic features representing such capabilities in more complex ways. The features we investigate are relevant in a variety of real-world applications including sentiment analysis <cit.>, question answering <cit.>, grounding <cit.>, reasoning with temporal change <cit.> and quantitative attributes <cit.>.
We investigate capabilities and linguistic features present in all our investigated languages as well as linguistic features unique to certain languages. For each feature, we highlight differences in its cross-lingual instantiation and challenges for natural language understanding and generation. We create templates using the M2C framework to test a model's understanding of each capability and feature. We show a subset in Table <ref>.
§.§ Language-agnostic features
Negation
In Indo-European languages, negation is often expressed via a separate particle such as not (English), inte (Swedish), etc.
In contrast, in Swahili, for instance, negation morphemes are fused with the verb root and thus harder to identify. For other negation terms such as kein (German), models need to produce the correct agreement when generating text. In addition to gender and number agreement with the subject, Arabic negation takes up to five forms in the singular, three forms in the dual, and five forms in the plural, for example ليس and ليست.
Numerals
Models must be able to recognize and reason with numbers in their spelled-out and numerical forms across different writing and numeral systems, seventeen (English), 17 (Western Arabic numerals), and سبعة عشر and ١٧ (Eastern Arabic numerals).
For generation in Russian and Slovak, models must inflect the noun depending on the quantity of the object. Slovak, for instance, has separate inflections for quantities of one, two/three/four, and five and more, which also vary based on the object's animacy.
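For illustration, selecting the Slovak noun form required by a given quantity could look as follows (a simplified sketch that ignores case and animacy; the example forms are for jablko, 'apple'):

def slovak_count_form(n, forms):
    # One inflection for 1, one for 2-4, and one for 5 and more.
    if n == 1:
        return forms["one"]
    if 2 <= n <= 4:
        return forms["few"]
    return forms["many"]

jablko = {"one": "jablko", "few": "jablká", "many": "jabĺk"}
for n in (1, 3, 7):
    print(n, slovak_count_form(n, jablko))  # 1 jablko, 3 jablká, 7 jabĺk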
Spatial expressions
In Russian, prepositions are associated with different cases, for example the instrumental case for за (behind)
and the prepositional case for на (on). Such case agreement needs to be taken into account when generating text in Russian. Finnish, in addition to prepositions, follows a system of postpositions, which relate the location of one thing to another and require objects to be inflected in either the partitive or genitive case.
Temporal expressions
Some languages with rich morphology such as Finnish and Swahili encode temporal expressions in less complex ways than their inflection-sparser counterparts. In Swahili, verbal structure follows a simple compounding schema of subject marker + tense marker + verb, e.g. a-na-soma (he reads) or u-ta-soma (you will read).
Comparatives
Commonly, comparatives are expressed by a suffix or using a quantifier, more/less. Spanish and French follow the latter approach by placing más/menos and plus/moins before the adjective with only a few standard exceptions.
On the other hand, in Finnish, for example, the formation of comparatives follows a complex system of rules for compounding that includes categories depending on the endings of adjectives and a suffix mpi.
§.§ Language-specific features
Time in Swahili In many languages, the day is divided into two periods: a.m. and p.m., with the daily cycle starting at midnight (0:00) and running through noon (12:00). In Swahili, time is based on sunset and sunrise, defined to be 6 pm and 6 am respectively in standard time. For example, 11.30 am in standard time is 5.30 in the morning in Swahili time. Understanding different time systems is key not only for in-language reasoning but also for cross-lingual applications.
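The mapping between standard clock time and Swahili time is a fixed six-hour offset; a minimal sketch (hour arithmetic only, day-period names omitted):

def swahili_hour(standard_hour_24):
    # Swahili time counts hours from sunrise/sunset, defined as
    # 6 am / 6 pm, so the hour number is offset by six.
    h = (standard_hour_24 - 6) % 12
    return 12 if h == 0 else h

print(swahili_hour(11))  # 5  -> 11 am is "5 in the morning" in Swahili time
print(swahili_hour(19))  # 1  -> 7 pm is "1 in the evening"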
Possessives in Finnish
Compounding in Finnish along with its system of 15 cases is one of the most challenging aspects of the language. One relevant feature are the possessive suffixes, which attach to the stem of nouns, koulu (school) becomes kouluni (my school) and koulumme (our school).
Possession is expressed via a suffix -lla, which compounds with other suffixes, siskollani (my sister has), which must be correctly inflected by models in order to achieve the intended meaning.
Measure words in Mandarin Chinese
Another language specific-feature are measure words in Mandarin Chinese, which include over 150 cases and are used for different types of objects depending on their characteristics, UTF8gbsn “本” for books, “双” for pairs, or “辆” for vehicles.
Motion verbs in Russian
In most Slavic languages, motion verbs are a challenging concept as they behave differently than other verb categories. While most verbs have two forms (imperfective and perfective), motion verbs have three forms: one perfective form and two imperfective forms. Of the imperfective forms, the definite form indicates unidirectional or current one-time motion while the indefinite form represents multi-directional or habitual motion.
§ EXPERIMENTS
Experimental setting We evaluate models on the generated tests in a question answering setting as can be seen in Figure <ref>. Each test consists of a context, a question, and an answer that needs to be predicted by the model. For each template, we generate 2,000 test examples on which the model is evaluated. A model's performance on a template is its accuracy in predicting a valid answer, averaged across all tests generated from the template.
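A minimal sketch of this evaluation loop is given below (in Python); the template and model interfaces shown here are hypothetical simplifications, and the validity check is reduced to simple substring matching:

import random

def generate_test(template, arguments):
    """Instantiate one test (context, question, answer) from a template.

    `template` maps the keys "context", "question", and "answer" to format
    strings; `arguments` maps slot names to lists of possible fillers.
    """
    slots = {name: random.choice(values) for name, values in arguments.items()}
    return {key: text.format(**slots) for key, text in template.items()}

def accuracy_on_template(model, template, arguments, n_tests=2000):
    """Accuracy of `model` averaged over tests generated from one template."""
    correct = 0
    for _ in range(n_tests):
        test = generate_test(template, arguments)
        prediction = model(test["context"] + "\n" + test["question"])
        correct += int(test["answer"].strip().lower() in prediction.strip().lower())
    return correct / n_tests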
We evaluate models in both zero-shot and one-shot settings for each capability and language. In the one-shot setting, a test randomly generated using the same template is used as the exemplar. This simplifies the task in two ways: i) it provides the model with a clear format for generating the answer and ii) it may enable the model to infer the answer's relationship to the rest of the template. While we conduct one-shot experiments to show the impact of additional instructions, zero-shot evaluation is the only setting that fully tests the model's understanding and generative capabilities independent of confounders such as the exemplar choice <cit.>, in line with prior work on behavioral testing <cit.>. We provide an example of both settings in Table <ref>.
Models We evaluate five state-of-the-art pre-trained language models of different sizes: an LM-adapted version <cit.> of mT5-XXL <cit.>; PaLM-S (8B parameters), PaLM-M (62B parameters), and PaLM-L <cit.>; and PaLM 2 <cit.>. All models have been trained on large amounts of web text but have not been otherwise fine-tuned for instruction-following or few-shot learning.
Generation Predictions are generated using greedy decoding with a temperature of 0 and a maximum of 20 decoding steps.
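Schematically, this corresponds to the following loop (the `next_token_logits` callable is a hypothetical stand-in for the evaluated models):

def greedy_decode(next_token_logits, prompt_ids, eos_id, max_steps=20):
    """Greedy decoding: temperature 0, at most `max_steps` generated tokens."""
    ids = list(prompt_ids)
    for _ in range(max_steps):
        logits = next_token_logits(ids)                              # scores for the next token
        next_id = max(range(len(logits)), key=logits.__getitem__)    # argmax
        if next_id == eos_id:
            break
        ids.append(next_id)
    return ids[len(prompt_ids):]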
§ RESULTS
§.§ Performance across Languages
We show the average results across tests covering language-agnostic features across languages and models in Table <ref>. We present the detailed results across test types for mT5-XXL and PaLM 2 in Table <ref> and for PaLM-S, PaLM-M, and PaLM-L in Appendix <ref>. We show results on language-specific features for all models in Table <ref>.
M2C tests are challenging, particularly for smaller models and for certain languages. mT5-XXL and PaLM-S achieve comparatively poor performance on average across languages. While performance is highest for English, across the other languages both models only pass at most 50% of tests—and less than a third for Slovak (), Swahili (), and Arabic () for PaLM-S. These results highlight that the tests generated with M2C are challenging for the majority of state-of-the-art models and demonstrate that a clear gap between performance on English and performance in other languages remains for most models.
Competence with language-agnostic features emerges at scale. We observe a 20 point improvement in average performance from PaLM-S to PaLM-M to PaLM-L, highlighting that model robustness to linguistic features improves with scale. The strongest model, PaLM 2, reaches almost perfect performance on English and on the Indo-European languages. Compared to PaLM-L, PaLM 2 achieves the largest improvements on Slovak, Russian, Swahili, and Arabic. On Finnish, Slovak, Chinese, and Swahili, however, the average performance of PaLM 2 is still below 90%, indicating that there is headroom left in terms of competence with regard to language-agnostic features even for the strongest current models.
§.§ Performance across Linguistic Features
Language-agnostic features The most challenging test types for mT5-XXL and PaLM 2 in Table <ref> are numerals and comparatives. mT5 performs poorly on addition and only slightly better on subtraction while PaLM 2 achieves around 90% performance on most languages. On comparatives, both models have more difficulty in the conditional case. While PaLM 2 passes negation tests with almost perfect accuracy across different languages, mT5 displays reduced performance, particularly when the question is negated and for non-Indo-European languages. This highlights that robust reasoning with negation only emerges at scale. On spatial and temporal tests, mT5 achieves reasonable performance in most languages, while PaLM 2 achieves perfect performance in most cases and only underperforms in Swahili.
Language-specific features We show the results on the language-specific feature tests in Table <ref>. All models have acquired a reasonable ability to distinguish between different forms of motion verbs in Russian. Small and medium-sized models generally fail to reason with compounding possessives in Finnish and time expressions in Swahili while all models are unable to perfectly employ the correct measure words in Chinese, despite it being a high-resource language. Similarly, even PaLM 2 is unable to correctly reason with time expressions in Swahili. We show examples of errors in model predictions for each test type together with English glosses in Appendix <ref>.
§.§ Evaluating Morphological Correctness
The generated tests focus on evaluating a model's understanding capability with regard to specific capabilities and linguistic features. As the linguistic features are often expressed via morphology, we additionally calculate the fraction of errors due to morphology in the models' output for the tests with morphological variation in the answer. This enables us to assess a model's ability to generate morphologically correct forms. For instance, in Slovak, a model must generate the correct accents and suffixes; it is an error if the model predicts Trináste (13th) instead of Trinásť (13). We automatically identify and manually curate these errors for PaLM-L and report the proportion of morphology-related errors for a subset of tests and languages in Table <ref>. We show examples of errors in model predictions that are due to morphology in Appendix <ref>.
For certain tests with morphological variation in the answer, a non-negligible fraction of errors are due to producing incorrect morphological forms. For negation in Slovak, around half of PaLM-L's errors are due to morphology such as an incorrect use of diacritics or suffixes, highlighting a weakness of subword-based models. For numerical reasoning, models frequently produce incorrectly inflected numerals. Similarly, models generate outputs with an incorrect case or number for tests related to spatial and temporal expressions and comparatives.
§.§ One-shot Evaluation
We show one-shot results for all models in Appendix <ref>. The one-shot setting generally improves results as it allows the model to infer the format of the answer and potentially its relationship to the rest of the template. Improvements are larger for smaller models, which benefit more from information about the template. Nevertheless, even in this setting models are unable to achieve perfect accuracy across all languages. Reasoning with numerals and comparatives are still challenging for most models while improvements on numerals are also relatively smaller than on other test types. Models struggle particularly in Swahili across different test types. Overall, these results demonstrate that even in one-shot settings, large language models are not able to systematically generalize to certain typological features in multilingual settings.
§ CONCLUSION
In this paper, we have introduced M2C, a multilingual morphological framework for targeted behavioral evaluation of language-specific capabilities. As world languages present different challenges, M2C aims to provide flexibility in defining a suitable templating system with its individual dimensions and features. We have conducted experiments on state-of-the-art large language models, highlighted typological features that models struggle with, and quantified errors occurring due to morphology. We hope M2C inspires further research focused on tackling typological and morphological challenges with large language models.
§ ACKNOWLEDGEMENTS
We thank Jialu Liu, Jiaming Shen, and Jonas Pfeiffer for helpful feedback on a draft of this paper.
§ BROADER IMPACT STATEMENT
Accessibility Our new behavioral testing framework enables the generation of tests that incorporate morphology, which makes the systematic and fine-grained evaluation of NLP models more accessible across a diverse set of languages. For many such languages, it was previously not feasible to gain a fine-grained understanding of a model's capabilities.
Risks Risks are limited and mainly relate to obtaining a biased view of a capability due to the use of limited templates.
Limitations The creation of templates still requires native speaker expertise and an understanding of a language's grammar. Morphological inflection models are imperfect so morphological forms may need to be enumerated to ensure high-quality tests. We leave model-in-the-loop template creation and improving morphological inflection models for future work. While we design representative templates with thousands of permutations for each capability, a larger set of templates and arguments may be necessary to ensure a comprehensive coverage.
§ ZERO-SHOT RESULTS
We show zero-shot results for PaLM-S, PaLM-M, and PaLM-L across different tests and languages in Table <ref>.
§ EXAMPLES OF ERRORS ON LANGUAGE-SPECIFIC FEATURE TESTS
We show examples of errors on language-specific feature tests with PaLM-L together with English glosses in Table <ref>.
§ EXAMPLES OF MORPHOLOGICAL ERRORS
We show example errors in predictions of PaLM-L that are due to morphology in Table <ref>.
§ ONE-SHOT RESULTS
We show one-shot results for all models in Table <ref>. We show summary statistics of the average relative change in performance of the one-shot setting compared to the zero-shot setting for each language and model in Table <ref>.
|
http://arxiv.org/abs/2307.04627v1 | 20230710151340 | Irreducibility of eventually positive semigroups | [
"Sahiba Arora",
"Jochen Glück"
] | math.FA | [
"math.FA",
"math.SP",
"47B65, 47D06, 46B42, 47A10"
] |
Irreducibility of eventually positive semigroups
Sahiba Arora, Technische Universität Dresden, Institut für Analysis, Fakultät für Mathematik, 01062 Dresden, Germany
[email protected]
Jochen Glück, Bergische Universität Wuppertal, Fakultät für Mathematik und Naturwissenschaften, Gaußstr. 20, 42119 Wuppertal, Germany
[email protected]
2010 Mathematics Subject Classification: 47B65, 47D06, 46B42, 47A10
Positive C_0-semigroups that occur in concrete applications are, more often than not, irreducible.
Therefore a deep and extensive theory of irreducibility has been developed that includes characterizations, perturbation analysis, and spectral results.
Many arguments from this theory, however, break down if the semigroup is only eventually positive – a property that has recently been shown to occur in numerous concrete evolution equations.
In this article, we develop new tools that also work for the eventually positive case.
The lack of positivity for small times makes it necessary to consider ideals that might only be invariant for large times.
In contrast to their classical counterparts – the invariant ideals – such eventually invariant ideals require more involved methods from the theory of operator ranges.
Using those methods we are able to characterize (an appropriate adaptation of) irreducibility by means of linear functionals, derive a perturbation result, prove a number of spectral theorems, and analyze the interaction of irreducibility with analyticity, all in the eventually positive case.
By a number of examples, we illustrate what kind of behaviour can and cannot be expected in this setting.
Sahiba Arora and Jochen Glück
August 12, 2023
===================
§ INTRODUCTION
§.§ Eventually positive C_0-semigroups
A C_0-semigroup (e^tA)_t≥ 0 on a function space or, more generally, on a Banach lattice E is called individually eventually positive if, for each 0≤ f∈ E, there exists a time t_0≥ 0 such that e^tAf≥ 0 for all t≥ t_0. If the time t_0 can be chosen independently of f, then the semigroup is said to be uniformly eventually positive.
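To make the definition concrete, the following small numerical illustration in Python (using numpy and scipy) exhibits the phenomenon for a 3×3 matrix; the matrix is our own toy construction and is not taken from the literature cited here:

import numpy as np
from scipy.linalg import expm

# A = (1/3)*J + 0.8*w w^T, where J is the all-ones matrix and w = (1,-1,0)/sqrt(2).
# The leading eigenvalue 1 of A has the strictly positive eigenvector (1,1,1)/sqrt(3),
# but A has a negative off-diagonal entry, so e^(tA) is not positive for small t > 0.
w = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
A = np.ones((3, 3)) / 3.0 + 0.8 * np.outer(w, w)

for t in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"t = {t:3.1f}: smallest entry of e^(tA) is {expm(t * A).min():+.4f}")
# The smallest entry is negative for t = 0.1, 0.5, 1.0 and positive for t = 2.0, 5.0,
# so this (non-positive) matrix semigroup is eventually positive.

Since the positivity threshold in this example can be chosen independently of the initial vector, it is uniformly (and not merely individually) eventually positive.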
Since Daners showed in <cit.> that the semigroup (e^tA)_t≥ 0 generated by the Dirichlet-to-Neumann operator on the unit circle can, for suitable parameter choices, be eventually positive rather than positive, the theory of eventually positive semigroups garnered growing attention from the semigroup community. While the study of the finite-dimensional case has a somewhat longer history and can already be traced back to <cit.>, the development of the theory on general Banach lattices began with the papers <cit.> and resulted in a variety of articles:
for instance, a characterization of uniform eventual positivity is given in <cit.>, perturbation results can be found in <cit.> (we also refer to <cit.> for related finite-dimensional perturbation results), the long-term behaviour is studied in <cit.>, and the more subtle property of local eventual positivity is analyzed in <cit.>.
An overview of the state of the art as of the beginning of 2022 can be found in <cit.>.
In general, the theory incorporates two types of results in its current level of development:
* Characterizations of eventual positivity, though under rather strong technical assumptions.
* Consequences of eventual positivity.
The characterization results mentioned in (i) have already been used to show eventual positivity (and versions thereof) for a large variety of concrete PDEs, for example, the bi-Laplacian with Wentzell boundary conditions <cit.>, delay differential equations <cit.>, bi-Laplacians on graphs <cit.>,
Laplacians with point interactions <cit.>, and polyharmonic operators on graphs <cit.>.
Despite those successes, as long as we do not know how to considerably weaken the technical assumptions in those characterization theorems, there is a gap to the results of type (ii): the latter can (currently) hardly be used to show properties of those eventually positive semigroups that occur in concrete applications – rather they give us necessary conditions for eventual positivity and indicate, at the same time, the limits of the theory.
This paper is mainly about the results of the second type.
§.§ Irreducibility for positive semigroups
A C_0-semigroup on a Banach lattice E is said to be positive if each semigroup operator leaves the positive cone E_+ invariant and a positive C_0-semigroup is called irreducible if it does not leave any closed ideals invariant except for {0} and E itself.
This notion of irreducibility is motivated by the same concept in finite dimensions where it occurs, for instance, in the classical Perron–Frobenius theorem, in graph theory, and in the theory of Markov processes on finite state spaces.
In infinite dimensions, many positive semigroups that describe the solutions to concrete evolution equations are irreducible.
Detailed theoretical information about such semigroups can, for instance, be found in <cit.> and <cit.>.
A treatment of irreducibility for general semigroups of positive operators – i.e., not only one-parameter semigroups – can be found in <cit.>.
A property that makes irreducibility efficient to handle is that one can characterize it in terms of duality: a positive C_0-semigroup (e^tA)_t ≥ 0 on a Banach lattice E is irreducible if and only if for each non-zero f ∈ E_+ and each non-zero positive functional φ in the dual space E' there exists a time t ≥ 0 such that ⟨φ, e^tA f ⟩ > 0.
This observation is classical, see for instance <cit.>, but we include a rather detailed analysis of the situation in Proposition <ref>.
§.§ Contributions and organization of the article
Motivated by the recent theory and applications of eventually positive semigroups, we explore the notion of irreducibility outside the framework of positive semigroups and thereby focus on the eventually positive case.
This gives rise to two problems that cannot be solved by falling back to classical arguments from the positive case and thus require the use of different methods:
* Classically, the analysis of invariant ideals for semigroups strongly relies on the positivity of the semigroup operators.
However, as the semigroups we are interested in are only eventually positive, it becomes natural to focus on (the non-existence of) ideals that are only eventually invariant.
For individually eventually positive semigroups though, this poses additional challenges since we cannot expect any of the operators in the semigroup to be positive.
* For positive semigroups, the above-mentioned fact that irreducibility can be characterized by means of duality is very powerful.
However, the arguments used to show this characterization in the positive case cannot be directly adapted to treat the eventually positive situation.
In the light of those challenges, we make the following contributions:
Regarding (i), we distinguish between irreducible semigroups in the classical sense and what we call persistently irreducible semigroups, that rely on eventual invariance of closed ideals (see Definition <ref>).
We show this distinction is indeed necessary in the eventually positive case (Example <ref>), but not in the positive case (Proposition <ref>).
For eventually positive semigroups that are analytic, irreducibility and persistent irreducibility are also equivalent (Proposition <ref>) and, in some cases, even imply a stronger version of eventual positivity (Proposition <ref>); this is similar to the same phenomenon in the positive case <cit.>.
Regarding (ii), in order to characterize persistent irreducibility by means of duality (Theorem <ref>) a new tool is required: we first develop conditions under which non-closed vector subspaces that are individually eventually invariant are automatically uniformly eventually invariant.
This is possible in the setting of operator ranges that we discuss in Section <ref>, see in particular Corollary <ref>.
To explain the relevance of this to our framework let us point out that, while we define persistent irreducibility by means of closed ideals, non-closed ideals still play an essential role in the theory.
This is because important arguments rely on the construction of invariant principal ideals – which are typically not closed (see the proof of Lemma <ref>).
As mentioned before, positive irreducible semigroups exhibit many interesting spectral properties. Motivated by this, we study the spectrum of eventually positive persistently irreducible semigroups (Section <ref>) and show that some of these properties are carried over from the positive case. In particular, we show that the spectrum of the generator of an eventually positive persistently irreducible semigroup is guaranteed to be non-empty in some situations.
The perturbation theory of eventually positive C_0-semigroups is currently quite limited.
It was demonstrated in <cit.> that eventually positive semigroups do not behave well under (positive) perturbations.
In Section <ref> we show that if the perturbation interacts well with the unperturbed semigroup though, then the eventual positivity is indeed preserved. This enables us to construct eventually positive semigroups that do not satisfy the technical assumptions of the available characterization results in, for instance, <cit.> and <cit.>.
Consequently, we are able to give an example of an eventually positive (but non-positive) semigroup which is persistently irreducible but not eventually strongly positive (Example <ref>). Analogously to the positive case, such a situation cannot occur for analytic semigroups (Proposition <ref>).
§.§ Notation and terminology
Throughout, we freely use the theory of Banach lattices for which we refer to the standard monographs <cit.>.
All Banach spaces and Banach lattices in the article are allowed to be real or complex unless otherwise specified.
Let E be a Banach lattice with positive cone E_+. For f∈ E_+, we alternatively use the notation f≥ 0 and write f⪈ 0 if f≥ 0 but f ≠ 0.
For each u∈ E_+, the principal ideal of E given by
E_u := {f ∈ E: there exists c>0 such that |f| ≤ cu }
is also a Banach lattice when equipped with the gauge norm
‖f‖_u := inf{c>0: |f| ≤ cu} (f∈ E_u).
The principal ideal E_u embeds continuously in E. In addition, if E_u is dense in E, then u is called a quasi-interior point of E_+.
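For orientation, in the finite-dimensional lattice ℝ^n with the entrywise order the gauge norm can be computed explicitly: if all entries of u are strictly positive (so that u is a quasi-interior point and E_u = ℝ^n), then ‖f‖_u = max_k |f_k|/u_k. A short Python sketch (our own illustration):

import numpy as np

def gauge_norm(f, u):
    """Gauge norm ||f||_u = inf{c > 0 : |f| <= c*u} in R^n with entrywise order.

    Assumes all entries of u are strictly positive; then u is an order unit
    and the principal ideal E_u is all of R^n.
    """
    f, u = np.asarray(f, dtype=float), np.asarray(u, dtype=float)
    if np.any(u <= 0):
        raise ValueError("u must have strictly positive entries")
    return float(np.max(np.abs(f) / u))

print(gauge_norm([1.0, -2.0, 0.5], [1.0, 1.0, 1.0]))  # 2.0, the sup norm
print(gauge_norm([1.0, -2.0, 0.5], [2.0, 4.0, 1.0]))  # 0.5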
If E is a complex Banach lattice, we denote its real part by E_ℝ. An operator A: E ⊇ dom(A) → E is said to be real if
dom(A) = dom(A) ∩ E_ℝ + i (dom(A) ∩ E_ℝ) and A(dom(A) ∩ E_ℝ) ⊆ E_ℝ.
In particular, a bounded operator T between Banach lattices E and F is real if and only if T(E_ℝ) ⊆ F_ℝ. We say that T is positive if T(E_+) ⊆ F_+. In particular, every positive operator is real. Moreover, we say that T is strongly positive if Tf is a quasi-interior point of F_+ for each 0⪇ f∈ E. We denote the space of bounded linear operators between E and F by ℒ(E,F), with the shorthand ℒ(E) := ℒ(E,E). The range of T ∈ ℒ(E,F) is denoted by ran T.
Lastly, we recall that the spectral bound s(A) of the generator A is defined as
s(A) := sup{Re λ : λ ∈ σ(A)} ∈ [-∞,∞).
§ EVENTUAL INVARIANCE OF VECTOR SUBSPACES
We introduce the notion of a persistently irreducible C_0-semigroup in Section <ref> using the concept of eventually invariant ideals. To aid us in the sequel, we collect here a few preliminary results about eventual invariance.
Let E be a Banach space and let 𝒯 = (T_i)_i ∈ I be a net of bounded linear operators on E.
A set S ⊆ E is called…
* …invariant under 𝒯 if T_i S ⊆ S for each i ∈ I.
* …uniformly eventually invariant under 𝒯 if there exists i_0 ∈ I such that T_i S ⊆ S for each i ≥ i_0.
* …individually eventually invariant under 𝒯 if for each f ∈ S there exists i_0 ∈ I such that T_i f ∈ S for all i ≥ i_0.
In the subsequent sections, we will use these concepts specifically for C_0-semigroups to study irreducibility and versions thereof.
To this end it will be important to understand under which conditions an individually eventually invariant vector subspace is even uniformly eventually invariant.
For the case of closed vector subspaces, this is quite easy as long as the index set of the net of operators is not too large. Recall that a subset B of a directed set (A,≤) is said to be majorizing if for each a ∈ A, there exists b∈ B such that a≤ b.
We first consider nets of operators which eventually map into a fixed closed vector subspace;
as a corollary, we then obtain a result for eventual invariance of closed vector subspaces.
Let E,F be Banach spaces and let 𝒯 = (T_i)_i ∈ I be a net of bounded linear operators from E to F
such that the directed set I contains a countable majorizing subset.
Let W ⊆ F be a closed vector subspace.
Assume that for every x ∈ E there exists i_0 ∈ I such that T_i x ∈ W for all i ≥ i_0.
Then i_0 can be chosen to be independent of x.
Let J ⊆ I be a countable majorizing subset.
For each j ∈ J the set
V_j := {x ∈ E : T_ix ∈ W for all i ≥ j}
is a closed subspace of E and it follows from the assumption that ⋃_j ∈ J V_j = E.
Hence, by Baire's theorem there exists j_0 ∈ J such that V_j_0 = E.
Let E be a Banach space and let 𝒯 = (T_i)_i ∈ I be a net of bounded linear operators on E
such that the directed set I contains a countable majorizing subset.
Let V ⊆ E be a closed vector subspace.
If V is individually eventually invariant under 𝒯, then it is even uniformly eventually invariant under 𝒯.
Consider the restricted operators T_i|_V: V → E and apply Proposition <ref>.
We note that if a set S ⊆ E is uniformly eventually invariant, then so is its closure.
However, individual eventual invariance of S does not imply individual eventual invariance of its closure, in general.
This is a serious obstacle in the subsequent section, especially in the proof of Theorem <ref>.
To overcome it, we need a generalisation of Corollary <ref> to the so-called operator ranges.
A vector subspace V of a Banach space E is called an operator range in E if there exists a complete norm _V on V which makes the embedding of V into E continuous.
If existent, such a norm on V is uniquely determined up to equivalence due to the closed graph theorem.
It follows easily from a quotient space argument that V is an operator range if and only if there exists a Banach space D and a bounded linear operator T: D → E with range V (this explains the terminology operator range). The concept of operator ranges was studied in depth in <cit.> and was recently used to prove an abstract characterization of the individual maximum and anti-maximum principle in <cit.> (see also <cit.>).
The intersection of finitely many operator ranges is again an operator range (the proof is easy; details can be found in <cit.> or <cit.>).
The next example shows that the intersection of infinitely many operator ranges is not an operator range, in general.
The subsequent Proposition <ref>, however, serves as a reasonable substitute.
Let Q denote the set of all quasi-interior points in the cone of the Banach lattice c_0, i.e., the set of all 0 ≤ f ∈ c_0 such that f_k > 0 for each k ∈ ℕ.
For each f ∈ Q the principal ideal (c_0)_f is an operator range, being complete with respect to the gauge norm _f.
However, the intersection
⋂_f ∈ Q (c_0)_f
can easily be checked to coincide with the space c_00 of sequences with only finitely many non-zero entries, and this space cannot be an operator range since it has a countable Hamel basis (so in particular, it is not complete under any norm).
Let E be a Banach space, let I be a non-empty set, and for each i ∈ I, let V_i ⊆ E be an operator range with a complete norm _i which makes the embedding of V_i into E continuous.
Moreover, for each i ∈ I, let c_i > 0 be a real number.
Then the vector subspace
V := { v ∈ ⋂_i ∈ I V_i : ‖v‖_V := sup_i ∈ I c_i ‖v‖_i < ∞}
is complete when equipped with the norm _V and the corresponding embedding into E is continuous. In particular, V is an operator range.
It is important to note that the space (V, ‖·‖_V) does not only depend on the numbers c_i,
but also on the choice of the norms ‖·‖_i
– while for fixed i all norms on V_i that make the embedding V_i → E continuous are equivalent, we may re-scale each norm with an i-dependent factor.
For this reason, we can assume in the proof of the proposition that all factors c_i are equal to 1;
this can be achieved by simply replacing each norm ‖·‖_i with the norm c_i ‖·‖_i.
As mentioned before the proof we assume that c_i = 1 for each i.
Clearly, the normed space (V, ‖·‖_V) embeds continuously into (V_i, ‖·‖_i) for each i, so it also embeds continuously into E.
It suffices now to show that (V, ‖·‖_V) is complete.
To this end, let (x^n) be a Cauchy sequence in (V, ‖·‖_V).
Then (x^n) is bounded in (V, ‖·‖_V), say by a number M ≥ 0.
For each i, as (V, ‖·‖_V) embeds continuously into the Banach space (V_i, ‖·‖_i), there exists x_i ∈ V_i such that x^n → x_i in (V_i, ‖·‖_i). Now by the continuous embedding of V_i into E, we obtain that x^n → x_i in E for each i.
It follows that x := x_i = x_j for all i,j ∈ I.
In particular, x ∈ ⋂_i ∈ I V_i.
To show that x ∈ V and that (x^n) converges to x with respect to ‖·‖_V, let ε > 0.
Since (x^n) is a Cauchy sequence in (V, ‖·‖_V),
there exists n_0 such that ‖x^m - x^n‖_i ≤ ε for all i ∈ I and all n,m ≥ n_0.
For each i ∈ I, the convergence of x^n to x with respect to ‖·‖_i thus yields ‖x^m - x‖_i ≤ ε for all m ≥ n_0.
Hence, ‖x‖_i ≤ ‖x^n_0‖_i + ε ≤ M + ε for each i ∈ I, so x ∈ V.
Moreover, the inequality ‖x^m - x‖_i ≤ ε for each i ∈ I and all m ≥ n_0 shows that ‖x^m - x‖_V ≤ ε for all m ≥ n_0, so indeed x^n → x in (V, ‖·‖_V).
We are now in a position to show the following generalization of Proposition <ref>.
As a consequence, we will obtain a generalization of the eventual invariance result in Corollary <ref> to operator ranges (Corollary <ref>).
Let E,F be Banach spaces and let 𝒯 = (T_i)_i ∈ I be a net of bounded linear operators from E to F
such that the directed set I contains a countable majorizing subset.
Let W ⊆ F be an operator range, endowed with a complete norm ‖·‖_W that makes the inclusion map W → F continuous.
The following assertions are equivalent:
*
There exists i_0 ∈ I such that T_i E ⊆ W for each i ≥ i_0.
*
There are numbers c_i ≥ 0 for all i ∈ I that satisfy the following property:
For each x ∈ E there exists i_0 ∈ I such that T_i x ∈ W and ‖T_i x‖_W ≤ c_i ‖x‖_E for all i ≥ i_0.
A result which is loosely reminiscent of this theorem can be found in <cit.>.
For the proof of Theorem <ref> we need the following observation <cit.>:
if W is an operator range in a Banach space F, and T: E → F is a bounded linear operator from another Banach space E into F, then the pre-image V := T^-1(W) is an operator range in E, as the norm ‖·‖_V given by
‖x‖_V := ‖x‖_E + ‖Tx‖_W
for all x ∈ V is complete and makes the embedding of V into E continuous.
Let W ⊆ F be an operator range, endowed with a complete norm ‖·‖_W that makes the embedding into F continuous.
“(i) ⇒ (ii)”: This implication immediately follows by choosing c_i := ‖T_i‖_E→W – which is finite by the closed graph theorem – for i ≥ i_0 and c_i := 0 for all other i.
“(ii) ⇒ (i)”: Without loss of generality, we assume that c_i ≥ 1 for each i ∈ I. Let J ⊆ I be a countable majorizing set and for each j ∈ J, let
V_j := ⋂_i ≥ j T_i^-1(W).
Since W is an operator range in F, T_i^-1(W) is an operator range in E, as it is complete and embeds continuously into E when endowed with the norm ‖x‖_i := ‖x‖_E + ‖T_i x‖_W on T_i^-1(W).
Thus, by Proposition <ref>, the subspace
Ṽ_j := {x ∈ V_j : ‖x‖_Ṽ_j := sup_i ≥ j c_i^-1 ‖x‖_i < ∞}
of E is also an operator range.
Next, we observe that ⋃_j ∈ J Ṽ_j = E.
Indeed, let x ∈ E. As J is majorizing we can, due to assumption (ii), choose j ∈ J such that for each i ≥ j, we have T_i x ∈ W and ‖T_i x‖_W ≤ c_i ‖x‖_E.
Hence, for all i ≥ j, one has ‖x‖_i ≤ (1 + c_i) ‖x‖_E and thus
c_i^-1 ‖x‖_i ≤ 2 ‖x‖_E,
as we chose each c_i to be at least 1.
It follows that x ∈ Ṽ_j, as desired.
Since ⋃_j ∈ J Ṽ_j = E, we can apply Baire's theorem for operator ranges <cit.> to conclude that there exists j ∈ J such that Ṽ_j = E.
In turn, V_j = E and thus T_i E ⊆ W for all i ≥ j.
Let E be a Banach space and let 𝒯 = (T_i)_i ∈ I be a net of bounded linear operators on E
such that the directed set I contains a countable majorizing set.
For each operator range V in E the following assertions are equivalent:
*
The space V is uniformly eventually invariant under 𝒯.
*
There are numbers c_i ≥ 0 for all i ∈ I such that the space V satisfies the following quantified individual eventual invariance property:
For each v ∈ V there exists i_0 ∈ I such that T_i v ∈ V and ‖T_i v‖_V ≤ c_i ‖v‖_V for all i ≥ i_0.
Apply Theorem <ref> to the restricted operators T_i|_V : V → E.
In the context of Theorem <ref> and Corollary <ref> it is worthwhile to mention the general question under which conditions individual properties of operator-valued functions are automatically uniform.
An abstract framework to study this question was developed by Peruzzetto in <cit.>.
§ IRREDUCIBILITY OF C_0-SEMIGROUPS
In this section, we study irreduciblity of C_0-semigroups, a notion which stems from the theory of positive semigroups.
We drop the positivity assumption and first discuss a variety of sufficient conditions for the irreducibility of general C_0-semigroups (Proposition <ref>); those conditions are necessary in the positive case (Proposition <ref>).
In the eventually positive case, things turn out to be more involved (Example <ref>) and the stronger notion of persistent irreducibility turns out to be fruitful for the analysis, see in particular Theorem <ref>.
Note that, as [0,∞) contains the majorizing set ℕ, individual and uniform eventual invariance of a closed subspace under a semigroup are equivalent notions (Corollary <ref>).
A semigroup (e^tA)_t ≥ 0 on a Banach lattice E is called…
* …irreducible if it has no closed invariant ideals, except for {0} and E.
* …persistently irreducible if it has no closed ideal that is uniformly (equivalently: individually) eventually invariant, except for {0} and E.
Observe that both irreducibility and persistent irreducibility do not change if we replace A with A + λ for any scalar λ.
The name persistently irreducible is motivated by the observation that this property means that all the semigroup tails (e^tA)_t ≥ t_0 for t_0 ≥ 0 act irreducibly on E, i.e., the semigroup is not only irreducible but it also remains irreducible when its action is only considered for large times.
Obviously, persistent irreducibility implies irreducibility
(which also explains why we avoid the terminology eventually irreducible, a notion that one might, at first glance, be tempted to use instead of persistently irreducible).
From the definition, it is seen at once that nilpotent semigroups are not persistently irreducible unless dim E ≤ 1. In Example <ref>, we give an example of a semigroup which is nilpotent and irreducible – thus showing that the notions of irreducibility and persistent irreducibility are not equivalent.
Throughout we will study a number of sufficient or necessary conditions for irreducibility; they are motivated by <cit.>, where positive semigroups are studied.
Let (e^tA)_t≥ 0 be a C_0-semigroup on a Banach lattice E. We study the relationships between the following assertions.
*
The semigroup (e^tA)_t ≥ 0 is irreducible.
*
Weak condition at arbitrary times:
For each 0 ⪇ f ∈ E and each 0 ⪇φ∈ E', there exists t ∈ [0,∞) such that ⟨φ, e^tAf ⟩≠ 0.
*
The semigroup (e^tA)_t ≥ 0 is persistently irreducible.
*
Weak condition at large times or 0:
For each 0 ⪇ f ∈ E, each 0 ⪇φ∈ E' and each t_0 ∈ [0,∞), there exists t ∈{0}∪ [t_0,∞) such that ⟨φ, e^tAf ⟩ ≠ 0.
*
Weak condition at large times:
For each 0 ⪇ f ∈ E, each 0 ⪇φ∈ E' and each t_0 ∈ [0,∞), there exists t ∈ [t_0,∞) such that ⟨φ, e^tAf ⟩ ≠ 0.
Let us first point out that several implications between these conditions are true for general C_0-semigroups:
For a C_0-semigroup (e^tA)_t≥ 0 on a Banach lattice E, the following implications between the Conditions <ref> are true:
weak condition at large times <ref> ⇒ weak condition at large times or 0 <ref> ⇒ persistent irreducibility <ref> ⇒ irreducibility <ref>;
weak condition at large times <ref> ⇒ weak condition at arbitrary times <ref> ⇒ irreducibility <ref>.
The implications
“<ref> ⇒ <ref>”
and
“<ref> ⇐ <ref> ⇒ <ref>”
are obvious.
“<ref> ⇒ <ref>”:
If <ref> fails, we can find a closed ideal I which is neither equal to {0} nor to E, but which is invariant under the semigroup.
As I is proper and non-zero, there exists a vector 0 ⪇ f ∈ I and a functional 0 ⪇φ∈ E' that vanishes on I.
One has ⟨φ, e^tA f ⟩ = 0 for all t ≥ 0, so <ref> fails.
“<ref> ⇒ <ref>”:
If <ref> fails, we can find a closed ideal I which is neither equal to {0} nor to E and a time t_0 ≥ 0 such that e^tAI ⊆ I for all t ∈ [t_0,∞).
Again, as I is proper and non-zero, there is 0 ⪇ f ∈ I and a functional 0 ⪇φ∈ E' which vanishes on I.
Hence, we have ⟨φ, e^tA f ⟩ = 0 for all t ∈{0}∪ [t_0,∞), which shows that <ref> fails.
Proposition <ref> gives us a plethora of examples of (non-positive) semigroups that are (persistently) irreducible. Indeed, we see that persistent irreducibility is implied by individual eventual strong positivity, i.e., by the property
∀ f⪈ 0 ∃ t_0≥ 0 ∀ t≥ t_0: e^tAf is a quasi-interior point of E_+;
this follows from the fact that a vector g ∈ E_+ is a quasi-interior point of E_+ if and only if ⟨φ, g ⟩ > 0 for all 0 ⪇φ∈ E', see <cit.>. Therefore, all examples of eventually strongly positive semigroups in <cit.> and <cit.> are persistently irreducible. In Example <ref>, we give an example of a non-positive persistently irreducible semigroup that does not satisfy (<ref>).
For positive semigroups, the notions of irreduciblity and persistent irreduciblity are, in fact, equivalent:
For a positive C_0-semigroup (e^tA)_t≥ 0 on a Banach lattice E, all five of the Conditions <ref> are equivalent.
By Proposition <ref> it suffices to prove the following two implications:
“<ref> ⇒ <ref>”:
This implication is well-known for positive semigroups.
The argument can be found just after <cit.> (see also <cit.>).
“<ref> ⇒ <ref>”:
Since <ref> holds, we know from Proposition <ref> that the semigroup is also irreducible.
The irreducibility together with the positivity implies that each operator e^tA is strictly positive, meaning that e^tAf ⪈ 0 whenever f ⪈ 0; see <cit.>.
So if f, φ, and t_0 are given as in <ref>, then e^t_0 Af ⪈ 0.
Applying <ref> again to the vectors e^t_0 Af and φ, we thus find a time t ≥ 0 such that ⟨φ, e^(t + t_0)Af ⟩ ≠ 0.
The situation gets more subtle for eventually positive semigroups.
For them, the following theorem indicates that persistent irreducibility is the appropriate notion to further build the theory on, as this property can be conveniently characterized by testing against functionals.
For an individually eventually positive real C_0-semigroup (e^tA)_t≥ 0 on a Banach lattice E, the following implications between the Conditions <ref> are true:
weak condition at large times <ref> ⇔ weak condition at large times or 0 <ref> ⇔ persistent irreducibility <ref> ⇒ irreducibility <ref>;
weak condition at large times <ref> ⇒ weak condition at arbitrary times <ref> ⇒ irreducibility <ref>.
The difference to the situation without any eventual positivity assumption (Proposition <ref>) is that one now has equivalences throughout the first row of the diagram.
In light of the situation for positive semigroups (Proposition <ref>) one may ask whether Theorem <ref> can be improved to get an equivalence between all five conditions, say at least for uniformly eventually positive semigroups. A negative answer is given in Example <ref>. However, the situation improves significantly if the semigroup is, in addition, analytic (Proposition <ref>).
For the proof of Theorem <ref>, we need the following sufficient condition for a principal ideal in a Banach lattice to be uniformly eventually invariant under a semigroup.
Getting uniform (rather than only individual) eventual invariance in the lemma is a bit subtle, as we merely assume the semigroup to be individually eventually positive.
This is where our preparations on eventually invariant operator ranges from Section <ref> enter the game.
Let E be a Banach lattice and let (e^tA)_t ≥ 0 be a real individually eventually positive C_0-semigroup on E.
Let 0 ≤ h ∈ E and assume that there exists a time t_0 ≥ 0 such that e^tA h ≤ h for all t ≥ t_0.
Then the principal ideal E_h (and, in turn, its closure) is uniformly eventually invariant under (e^tA)_t ≥ 0.
We first consider a real vector f in the order interval [-h, h].
Due to the individual eventual positivity of the semigroup there exists a time t_1 ≥ t_0
such that e^tA(h-f) ≥ 0 and e^tA(h+f) ≥ 0 for all t ≥ t_1.
Thus, for all t ≥ t_1 ≥ t_0, the vector e^tAf is real and satisfies
± e^tAf ≤ e^tAh ≤ h,
so e^tAf ∈ [-h,h].
This proves that the order interval [-h,h] is individually eventually invariant under the semigroup.
As [-h,h] spans E_h (over the underlying scalar field),
it follows that E_h is individually eventually invariant under the semigroup.
To show that E_h is even uniformly eventually invariant, we will now employ Corollary <ref>.
If the underlying scalar field is ℝ, the preceding argument shows that, for each f ∈ E_h,
we have ‖e^tA f‖_h ≤ ‖f‖_h for all sufficiently large times t.
If the underlying scalar field is ℂ and f ∈ E_h, say with gauge norm ‖f‖_h ≤ 1,
we can write f as f = f_1 + i f_2 for real vectors f_1,f_2 ∈ [-h,h] and hence,
e^tAf = e^tAf_1 + i e^tAf_2 has modulus at most 2h for all sufficiently large t.
This shows that, for each f ∈ E_h, one has ‖e^tAf‖_h ≤ 2‖f‖_h for all sufficiently large times t.
In both cases, Corollary <ref> can be applied and gives the uniform eventual invariance of E_h.
Due to Proposition <ref>, only one implication is left to prove, namely
“<ref> ⇒ <ref>”.
Without loss of generality, assume that the growth bound ω_0(A) of the semigroup satisfies ω_0(A) < 0.
Assume that <ref> fails, i.e., we can find 0 ⪇ f ∈ E, 0 ⪇φ∈ E' and t_0 ∈ [0,∞) such that ⟨φ, e^tAf ⟩ = 0 for all t ∈ [t_0,∞).
Due to the individual eventual positivity of the semigroup, we can find a time t_1 ≥ t_0 such that e^tAf ≥ 0 for each t ≥ t_1.
We distinguish two cases:
Case 1: e^t_1Af = 0.
It then follows from the individual eventual positivity that the orbit of every g ∈ E_f under the semigroup eventually vanishes.
By applying Proposition <ref> to the family of operators (e^tA|_E_f)_t≥ 0 – viewed as operators from (E_f, ‖·‖_f) to E – and to the closed subspace W = {0}, we conclude that
e^tA eventually vanishes on E_f, and hence also on the closure of E_f.
In particular, the closure of E_f is a closed ideal which is even uniformly eventually invariant. Thus, if this closure is not equal to E, then <ref> fails (as f is non-zero).
On the other hand, if the closure of E_f equals E, then the semigroup is nilpotent.
But since <ref> is not true, one has dim E > 1. Combining this with the nilpotency, the semigroup cannot be persistently irreducible, i.e., <ref> fails.
Case 2: e^t_1Af ≠ 0.
By the continuity of the semigroup orbit of f and the inequality e^tAf ≥ 0 for t ≥ t_1, it follows (by testing against positive functionals) that
h := ∫_t_1^∞ e^tA f dt ⪈ 0;
the convergence of the integral is guaranteed as we assumed ω_0(A) < 0.
By using again that e^tAf ≥ 0 for all t ≥ t_1, one readily checks that e^tAh ≤ h for all t ≥ 0.
So, according to Lemma <ref>, the closure of the ideal E_h is uniformly eventually invariant under the semigroup.
This closed ideal is non-zero since it contains the non-zero vector h.
Moreover, we have ⟨φ, h ⟩ = 0, so φ vanishes on E_h and hence on its closure, which shows that this closure is not equal to E.
Whence, the semigroup is not persistently irreducible, i.e., <ref> fails.
As a consequence of Theorem <ref>, we obtain the following eventual strict positivity result for eventually positive persistently irreducible semigroups:
Let E be a Banach lattice and assume that (e^tA)_t ≥ 0 is a real C_0-semigroup on E which is individually eventually positive and persistently irreducible.
For all 0 ⪇ f ∈ E and 0 ⪇φ∈ E' and all times t ≥ 0 one has e^tAf ≠ 0 and (e^tA)' φ≠ 0.
We infer from Theorem <ref> that the semigroup satisfies Condition <ref><ref>.
Let 0 ⪇ f ∈ E and 0 ⪇φ∈ E'. Suppose there exists t_0>0 such that e^t_0 Af=0 or (e^t_0A)' φ = 0. Then e^tAf=0 or (e^tA)' φ = 0 for all t≥ t_0. In either case, ⟨φ, e^tAf⟩ = 0 for all t≥ t_0, which contradicts Condition <ref><ref>.
Using Corollary <ref>, we are able to obtain the following analogue of <cit.>:
If (e^tA)_t ≥ 0 is a C_0-semigroup on E which is uniformly eventually positive and persistently irreducible, then there exists a time t_0≥ 0 such that e^tA is a strictly positive operator for all t≥ t_0, meaning that e^tAf⪈ 0 for all 0⪇ f∈ E and all t≥ t_0.
There exist uniformly eventually positive C_0-semigroups
which are irreducible but not persistently irreducible;
here is a concrete example.
A uniformly eventually positive semigroup on ℓ^2 which is irreducible but not persistently irreducible.
Let (r_n) be the orthonormal basis of L^2(0,1) that consists of the Rademacher functions and let U:L^2(0,1)→ℓ^2 be the unitary operator that maps each function to its coefficients with respect to the basis (r_n).
Let (e^tB)_t≥ 0 denote the left shift semigroup on L^2(0,1) (which is nilpotent).
We show that the semigroup on ℓ^2 given by e^tA=Ue^tBU^-1 for each t ≥ 0 is irreducible.
However, the semigroup is clearly not persistently irreducible as it is nilpotent.
In order to show that (e^tA)_t≥ 0 is irreducible, let I be a non-zero closed ideal of ℓ^2 that is invariant under the semigroup (e^tA)_t≥ 0. Then there exists k ∈ ℕ such that e_k ∈ I; here (e_n) denotes the standard orthonormal basis of ℓ^2.
For every index j ≠ k we have
⟨ e^tA e_k, e_j ⟩ = ⟨ e^tB r_k, r_j ⟩;
and the term on the right is non-zero for some t∈ [0,1] (for instance, for all t<1 that are sufficiently close to 1).
So for this time t the modulus of the vector e^tA e_k dominates a non-zero multiple of e_j.
As e^tA e_k lies in I, so does e_j.
Thus, I=ℓ^2.
While the above counterexample shows that Condition <ref><ref> does not imply <ref> even in the case of uniformly eventually positive semigroups, we do not know whether the semigroup in the example satisfies <ref>.
So it remains open whether any of the implications “<ref> ⇒ <ref>” or “<ref> ⇒ <ref>” is true for (individually or uniformly) eventually positive semigroups
(but note that they cannot both be true, as Example <ref> shows).
Finally, let us briefly consider the case of analytic semigroups.
This case is simpler since a phenomenon as in Example <ref> cannot occur:
Let E be complex Banach lattice and assume that (e^tA)_t ≥ 0 is an individually eventually positive C_0-semigroup on E.
If the semigroup is analytic, then all five Conditions <ref> are equivalent.
The remaining implication “<ref> ⇒ <ref>” in Theorem <ref> follows from the identity theorem for analytic functions.
For positive semigroups, irreducibility together with analyticity implies a stronger version of positivity, namely that the semigroup operators map all vectors f ⪈ 0 to quasi-interior points <cit.>.
In the following proposition, we slightly modify the argument to show that the same remains true if the semigroup is only uniformly eventually positive rather than positive.
We do not know whether individual eventual positivity suffices for the same conclusion.
Let E be a complex Banach lattice and assume that (e^tA)_t ≥ 0 is a uniformly eventually positive analytic C_0-semigroup on E and choose t_0 ∈ [0,∞) such that e^tA≥ 0 for all t ≥ t_0.
If (e^tA)_t ≥ 0 is (persistently) irreducible, then for every 0⪇ f∈ E and all t > 2 t_0 the vector e^tAf is a quasi-interior point of E_+.
In the light of the terminology of earlier papers on eventual positivity such as <cit.> it is natural to refer to the property in the conclusion of the proposition as uniform eventual strong positivity of (e^tA)_t ≥ 0, cf. (<ref>).
In <cit.> this property was called eventual irreducibility.
We first make the following preliminary observation:
(*)
If the orbit of a vector g ∈ E_+ is positive (meaning that e^tAg ≥ 0 for all t ≥ 0) and (t_n) ⊆ (0,∞) converges to 0 sufficiently fast, then we can find an increasing sequence (g_n) ∈ E_+ converging to g, that satisfies 0 ≤ g_n ≤ e^t_n A g for each index n.
Indeed, let (t_n) converge to 0 so fast that ∑_n=1^∞ ‖e^t_n A g - g‖ < ∞.
Then we define the (not yet positive) vectors
g_n := g - ∑_k=n^∞ (g - e^t_k A g)^+
for each n ∈ ℕ, where the series converges absolutely in E.
Clearly, (g_n) is an increasing sequence of real vectors in E that converges to g.
For each index n one has g_n ≤ g - (g - e^t_n A g)^+ = g ∧ e^t_n A g ≤ e^t_n A g.
Since all the vectors e^t_nAg are positive we can now replace each g_n with g_n^+ to obtain a sequence (g_n) with the desired properties.
This proves (*).
Assume now that the conclusion of the proposition does not hold, i.e., that there exists a vector 0 ⪇ f ∈ E and a time τ > 2t_0 such that e^τ A f is not a quasi-interior point of E_+.
By the characterization of quasi-interior points in <cit.>, there is a functional 0 ⪇φ∈ E' such that ⟨φ, e^τ A f ⟩ = 0.
The orbit of the vector g := e^t_0 A f ∈ E_+ is positive, so we can apply the preliminary observation (*) to g.
Let (t_n) and (g_n) be as given by (*).
By dropping finitely many elements of these sequences we can achieve that τ - t_n ≥ 2t_0 for all n and hence, all the operators e^(τ-t_0-t_n)A are positive.
For all integers n ≥ m ≥ 1 we thus have
0 ≤ e^(τ-t_0-t_n)A g_m ≤ e^(τ-t_0-t_n)A g_n ≤ e^(τ-t_0)A g = e^τ A f
and thus, ⟨φ, e^(τ-t_0-t_n)A g_m ⟩ = 0.
As the sequence (τ-t_0-t_n)_n ≥ m accumulates at the point τ-t_0 ∈ (0,∞), analyticity of the semigroup implies that ⟨φ, e^tA g_m ⟩ = 0 for all t ∈ [0,∞) and all m ∈ ℕ. As g_m → g, we even have
0 = ⟨φ, e^tA g ⟩ = ⟨φ, e^(t_0+t)Af ⟩ for all t ∈ [0,∞).
According to Theorem <ref> this contradicts the persistent irreducibility.
In Example <ref>, we show that the assumption of analyticity cannot be dropped in Proposition <ref>.
§ SPECTRAL PROPERTIES OF PERSISTENTLY IRREDUCIBLE SEMIGROUPS
In this section, we study spectral properties of eventually positive semigroups that are persistently irreducible.
For the case of positive irreducible semigroups, most of these properties are proved in <cit.>.
It is instructive to observe that the conclusions of several results in this section resemble similar properties that were shown under stronger conditions in <cit.> (the conclusions are formulated somewhat differently in <cit.>, but can be rephrased in terms of leading eigenvectors, see <cit.>).
Recall that a linear positive functional φ on a Banach lattice is said to be strictly positive if its kernel contains no positive non-zero element.
Let E be a complex Banach lattice and let (e^tA)_t ≥ 0 be an individually eventually positive and persistently irreducible C_0-semigroup on E.
* If 0 ⪇ u ∈ E is an eigenvector of A for an eigenvalue λ ∈ ℝ, then u is a quasi-interior point of E_+.
* If 0 ⪇ψ∈ E' is an eigenvector of A' for an eigenvalue λ ∈ ℝ, then ψ is strictly positive.
(a)
Let 0 ⪇φ∈ E'_+.
According to Theorem <ref>, Condition <ref><ref> is satisfied, so we can find t ∈ [0,∞) such that
0 < ⟨φ, e^tA u ⟩ = ⟨φ, e^t λ u ⟩ = e^t λ⟨φ, u ⟩,
so ⟨φ, u ⟩ > 0.
This shows that u is a quasi-interior point <cit.>.
(b)
This follows from a similar argument as (a).
For the following theorem, recall that an eigenvalue λ of a linear operator A: E ⊇ dom(A) → E on a Banach space E is called geometrically simple if the eigenspace ker(λ-A) is one-dimensional.
It is called algebraically simple if the generalized eigenspace ⋃_n ∈ ℕ ker( (λ - A)^n ) is one-dimensional.
Let (e^tA)_t ≥ 0 be a real, individually eventually positive, and persistently irreducible C_0-semigroup on a complex Banach lattice E.
If λ ∈ ℝ is an eigenvalue of both A and A' and ker(λ - A') contains a positive non-zero functional, then λ is algebraically simple as an eigenvalue of A, the eigenspace ker(λ - A) is spanned by a quasi-interior point of E_+, and ker(λ - A') contains a strictly positive functional.
For a proof of Theorem <ref>, we need the following general result:
Let A: E ⊇ dom(A) → E be a closed and densely defined linear operator on a complex Banach space E and let λ ∈ ℂ be an eigenvalue of both A and A'.
If λ is geometrically simple as an eigenvalue of A and there exist eigenvectors u ∈ ker(λ - A) and φ ∈ ker(λ - A') such that ⟨φ, u ⟩≠ 0, then λ is also algebraically simple as an eigenvalue of A.
It seems likely that Lemma <ref> is known to experts in spectral theory, although we could not find an explicit reference for it.
For matrices, the lemma is implicitly shown in the proof of <cit.>, whereas the infinite-dimensional version is implicitly shown in the proof of (iii) implies (iv) of <cit.>.
Without loss of generality, we may assume that λ = 0.
Let v ∈ ker(A^2); it suffices to prove that v ∈ ker(A).
Since Av ∈ ker(A) and ker(A) is spanned by u, there exists a scalar α such that Av = α u.
By testing this equality against φ and using that A'φ = 0, we obtain
0 = ⟨φ, Av ⟩ = α⟨φ, u ⟩.
Since ⟨φ, u ⟩≠ 0, this implies that α = 0, so Av = 0.
There is no loss of generality in assuming that λ = 0.
Let 0 ⪇φ∈ ker(A').
According to Proposition <ref>, φ is strictly positive and all positive non-zero elements of ker(A) are quasi-interior points of E_+.
Now we show that the real part E_ℝ ∩ ker(A) of ker(A) is a sublattice of E_ℝ.
Let f ∈ E_ℝ ∩ ker(A); it suffices to show that |f| ∈ ker(A).
Due to the individual eventual positivity of the semigroup, there exists t_0 ≥ 0 such that
|f| = |e^tAf| ≤ e^tA|f|
for all t ≥ t_0; this is a general property of individually eventually positive operator nets, see <cit.>.
For t ≥ t_0 the vector e^tA|f| - |f| is therefore positive;
but it is also in the kernel of φ, and hence equal to 0 due to the strict positivity of φ.
Thus, e^tA|f| = |f| for all t ≥ t_0.
For general t ≥ 0 this implies
e^tA|f| = e^tA e^t_0 A|f| = e^(t+t_0)A|f| = |f|,
so |f| ∈ ker(A), as claimed.
Now we can show that ker(A) ∩ E_ℝ is one-dimensional and spanned by a quasi-interior point of E_+.
We have just seen that ker(A) ∩ E_ℝ is a closed sublattice of the real Banach lattice E_ℝ.
Since every non-zero positive vector in ker(A) ∩ E_ℝ is a quasi-interior point within the Banach lattice E_ℝ, it is also a quasi-interior point within the Banach lattice ker(A) ∩ E_ℝ, see <cit.>.
Hence, ker(A) ∩ E_ℝ is at most one-dimensional <cit.>.
Since A is real and ker(A) is non-zero, we conclude that ker(A) ∩ E_ℝ is one-dimensional and thus spanned by a vector u ≠ 0.
The modulus |u| is also a non-zero vector in ker(A) ∩ E_ℝ and thus spans this space, too.
By what we have noted at the beginning of the proof, |u| is a quasi-interior point of E_+.
The fact that A is real implies that ker(A) is also spanned by |u| (over ℂ).
Finally, we note that the eigenvalue λ = 0 of A is algebraically simple.
This follows from the geometric simplicity and ⟨φ, |u| ⟩ > 0 by means of Lemma <ref>.
In the following corollary, we list a few simple consequences of Theorem <ref>.
If E is a Banach space, then for each u∈ E and φ∈ E', the (at most) rank-one operator u⊗φ is defined as
u ⊗ φ: E → E, f ↦ ⟨φ, f⟩ u.
Let E be a complex Banach lattice and let (e^tA)_t ≥ 0 be a real, individually eventually positive, and persistently irreducible semigroup on E.
* Assume that the spectral bound s(A) is not -∞ and is an eigenvalue of A.
If the operator family ( (λ - s(A)) R(λ,A) )_λ > s(A) is bounded in some right neighbourhood of s(A),
then s(A) is an algebraically simple eigenvalue of A, the eigenspace ker(s(A) - A) is spanned by a quasi-interior point of E_+, and the dual eigenspace ker(s(A) - A') contains a strictly positive functional.
* If the semigroup is mean ergodic and the mean ergodic projection P is non-zero, then P = u ⊗φ for a quasi-interior point u of E_+ and a strictly positive functional φ∈ E'.
* If s(A) is not -∞ and is a pole of the resolvent, then the pole is simple and the corresponding spectral projection P is given by P = u ⊗ φ for a quasi-interior point u of E_+ and a strictly positive functional φ∈ E'.
(a)
We may assume that s(A) = 0.
According to Theorem <ref> it suffices to show that ker(A') contains a non-zero positive element.
Due to the boundedness assumption on the resolvent, ker(A') contains a non-zero element φ <cit.>.
The proof in this reference also shows how φ can be obtained:
let u ∈ ker(A) be non-zero and choose an arbitrary functional ψ∈ E' such that ⟨ψ, u ⟩≠ 0.
Then the net ( λ R(λ,A)'ψ)_λ∈ (0,∞) – where (0,∞) is ordered conversely to the order inherited from ℝ –
has a weak^*-convergent subnet by the Banach–Alaoglu theorem and the limit φ of this subnet is non-zero and an element of ker(A').
In this argument, we may choose the initial functional ψ to be positive.
The individual eventual positivity of the semigroup implies that, for each 0≤ f∈ E, the distance of λ R(λ,A)f to E_+ converges to 0 as λ ↓ 0 <cit.>.
It follows from the positivity of ψ that φ is positive, as well.
(b)
If (e^tA)_t ≥ 0 is mean ergodic, the mean ergodic projection P is the projection onto ker(A) along the closure of ran(A), and ker(A) separates ker(A'); see <cit.>.
The same reference also shows that (0,∞) is in the resolvent set of A and that (λ R(λ, A))_λ∈ (0,∞) is bounded in a right neighbourhood of 0.
Since ran P = ker(A) and P was assumed to be non-zero, ker(A) contains a non-zero element.
So s(A) = 0 and we can apply (a) to conclude that ker(A) is spanned by a quasi-interior point u of E_+ and that ker(A') contains a strictly positive functional φ.
Being a projection of rank one, P satisfies rank P' = rank P = 1 <cit.>.
Moreover, as P' is the weak^*-limit of the dual operators of the Cesàro means of (e^tA)_t ≥ 0, its range ran P' contains ker(A') and in particular φ, so ran P' is actually spanned by φ.
Re-scaling φ and u such that ⟨φ, u⟩ = 1, we deduce that P = u ⊗φ.
(c)
Since s(A) is a pole of the resolvent R(·,A), it is an eigenvalue of both A and A' and the corresponding eigenspaces each contain a positive, non-zero vector; see <cit.>. Therefore, by Theorem <ref>, the spectral bound s(A) is an algebraically simple eigenvalue of A, the eigenspace ker(s(A)-A) is spanned by a quasi-interior point u of E_+, and ker(s(A)-A') contains a strictly positive functional φ.
It follows from the algebraic simplicity that the pole s(A) of the resolvent of A is simple, see for instance <cit.>. In particular, ran P = ker(s(A)-A) and ran P' = ker(s(A)-A').
Since ran P and ran P' have the same dimension <cit.>, the image of P' is spanned by φ.
By re-scaling φ and u such that ⟨φ, u⟩ = 1, we get P = u ⊗φ.
After the preceding results on eigenvectors we discuss in Theorem <ref> below that, on spaces of continuous functions, persistent irreducibility gives that the spectrum of the semigroup generator is non-empty.
For positive semigroups, this is known <cit.>, but the proof given in this reference does not directly extend to the eventually positive case since the resolvent of an eventually positive semigroup need not be positive at any point.
Nevertheless, as in the positive case, an essential ingredient for the proof is the observation in the following proposition.
We say that a bounded linear operator T on a Banach lattice E has individually eventually positive powers if for each f ∈ E_+, there exists an integer n_0 ≥ 0 such that T^nf ≥ 0 for each n ≥ n_0.
Let T be a bounded linear operator on a Banach lattice E and assume that T has individually eventually positive powers.
Assume that there is a vector h ∈ E_+ and a number δ > 0 such that Th ≥δ h and T^n h ≠ 0 for each n ∈ℕ (so in particular, h is non-zero).
Then the spectral radius satisfies r(T) ≥δ.
Before the proof it is worthwhile to observe that the conclusion is very easy to see if T is positive: for each n ∈ℕ one then has T^n h ≥δ^n h and thus ‖T^n‖≥δ^n as h ≠ 0.
In the eventually positive case one can argue as follows:
We may assume that ‖h‖ = 1.
Since Th - δ h is positive, there exists n_0 ≥ 0 such that T^n+1 h ≥δ T^n h ≥ 0 for each n ≥ n_0.
For every k ≥ 0 this yields
T^n_0+k h
≥δ T^n_0+k-1h
≥δ^2 T^n_0+k-2 h
≥…≥δ^k T^n_0 h.
Hence,
‖T^n_0+k‖ ≥ ‖T^n_0+k h‖ ≥ δ^k α,
where α := ‖T^n_0 h‖ is non-zero by assumption.
So
r(T) = lim_k →∞ ‖T^n_0+k‖^1/(n_0+k) ≥ lim_k →∞ δ^k/(n_0+k) α^1/(n_0+k) = δ,
which proves the proposition.
Before we use this proposition to study persistently irreducible semigroups on spaces C_0(L) for locally compact L, we mention the following simple consequence on C(K)-spaces for compact K (i.e., abstractly speaking, on AM-spaces with order unit), which holds without any irreducibility assumption.
This generalizes <cit.>,
where the semigroup was assumed to be uniformly eventually strongly positive rather than just individually eventually positive.
Let (e^tA)_t ≥ 0 be a real and individually eventually positive C_0-semigroup on an AM-space E with order unit.
If the semigroup is not nilpotent, then σ(A) is non-empty.
Let 𝟙 be a (strong) order unit of E.
Due to the strong continuity at time 0 and since the semigroup is real, there exists s > 0 such that e^sA𝟙 ≥ 𝟙/2.
Moreover, one has e^tA𝟙 ≠ 0 for any t ∈ [0,∞).
Indeed, assume the contrary.
Then e^tA𝟙 = 0 for all sufficiently large t, say t ≥ t_0.
For every real vector f ∈ E between 0 and 𝟙 one has thus 0 ≤ e^tAf ≤ e^tA𝟙 for all sufficiently large t due to the individual eventual positivity.
Hence, each orbit of the semigroup vanishes eventually.
By Proposition <ref> this implies that the semigroup is nilpotent, which contradicts our assumption.
As the powers of the operator e^sA are individually eventually positive, we can apply Proposition <ref> to conclude that the spectral radius of e^sA is non-zero.
Thus, the growth bound of the semigroup is not -∞.
But for individually eventually positive semigroups on AM-spaces with unit it was shown in <cit.> that the growth bound coincides with the spectral bound.
So the spectrum is indeed non-empty.
The assumption that the semigroup not be nilpotent is not redundant in the corollary:
there exist real C_0-semigroups on C([0,1]) that are nilpotent (see for instance <cit.>). Obviously, such a semigroup is (even uniformly) eventually positive and yet has empty spectrum.
On the other hand, this cannot happen for a positive C_0-semigroup on C(K): for those, it is a classical result that the spectrum of the generator is always non-empty <cit.>.
On the space C_0(L) of continuous function on a locally compact Hausdorff space L that vanish at infinity, it can happen that a positive semigroup is not nilpotent and still its generator has empty spectrum; see <cit.> for a concrete example where this occurs.
However, if the semigroup is also irreducible, then it follows again that the generator has non-empty spectrum <cit.>.
The following theorem generalizes this result to individually eventually positive semigroups.
Since the relation between individual eventual positivity of the semigroup and the resolvent operators is much subtler than in the positive case, we cannot use the same argument as in <cit.>.
We will thus argue with a sum of finitely many semigroup operators rather than with the resolvent.
Let L be locally compact Hausdorff and let (e^tA)_t ≥ 0 be a real, individually eventually positive, and persistently irreducible C_0-semigroup on C_0(L).
Then σ(A) is non-empty.
Consider a positive non-zero vector h ∈ C_0(L) which has compact support S.
According to Corollary <ref>, we have e^tAh ≠ 0 for each t ∈ [0,∞). Using the individual eventual positivity, we can
choose a time t_0 > 0 such that e^tAh ⪈ 0 for each t ≥ t_0.
Due to the persistent irreducibility, Theorem <ref> shows that the semigroup satisfies Condition <ref><ref>.
Hence, for every x ∈ L there exists a time t_x ∈ [t_0, ∞) such that (e^t_xAh)(x) > 0.
For every x ∈ L, we denote the open support of the continuous positive function e^t_x Ah by U_x.
Since x ∈ U_x for each x ∈ L, we have ⋃_x ∈ S U_x ⊇ S.
Hence, due to the compactness of S we can find finitely many points x_1, …, x_m ∈ S such that U_x_1, …U_x_m cover S.
Now consider the bounded linear operator T := e^t_x_1A + … + e^t_x_mA on C_0(L).
Then Th ≥ 0 and for each x ∈ S we have (Th)(x) > 0.
Again by the compactness of S there exists ε > 0 such that (Th)(x) ≥ε for each x ∈ S. Since h vanishes outside of S, it follows that Th ≥δ h for some δ > 0.
We also observe that the operator T has individually eventually positive powers.
To see this, let t_min denote the smallest of the numbers t_x_1, …, t_x_m;
then t_min > 0 since we chose t_0 to be non-zero.
Let f ∈ E_+ and consider t̂∈ [0,∞) such that e^tAf ≥ 0 for each t ≥t̂.
Choose an integer n_0 ≥ 1 such that t_min n_0 ≥t̂.
A brief computation then shows that T^n f ≥ 0 for every n ≥ n_0.
On the other hand, as e^tAh ⪈ 0 for all t ∈ [t_0,∞), the same computation shows that T^nh ⪈ 0 for each n ∈ℕ.
So the assumptions of Proposition <ref> are satisfied and thus T has non-zero spectral radius.
Since the spectral radius is subadditive on commuting operators, we conclude that at least one of the operators e^t_x_1A, …, e^t_x_mA has non-zero spectral radius (and hence even each of the operators in the semigroup has non-zero spectral radius).
Therefore, the growth bound of the semigroup is not -∞.
As the growth bound coincides with the spectral bound for individually eventually positive semigroups on general AM-spaces <cit.>, it follows that σ(A) is non-empty, as claimed.
§ A METHOD TO CONSTRUCT EVENTUALLY POSITIVE SEMIGROUPS
It is natural to ask for examples of eventually positive semigroups that are persistently irreducible but do not have the eventual strong positivity property (<ref>).
Actually, one can easily find positive semigroups that satisfy those conditions – for instance the shift semigroup on L^p over the complex unit circle for any p ∈ [1,∞).
However, it is much less clear how to find non-positive examples with precisely those properties.
The major obstacle is that, for non-positive semigroups, eventual positivity is usually checked by the characterization theorems in <cit.> and <cit.> – but those theorems already yield eventual strong positivity.
Thus, those theorems do not lend themselves to identifying eventually positive semigroups that are not eventually strongly positive.
In this section, we will provide a condition that can be used to construct examples.
Our approach relies on perturbation theory.
It has been demonstrated in <cit.> that perturbation theory for eventual positivity has a number of serious limitations.
However, we will see in this section that more can be said if the perturbation is known to interact well with the unperturbed semigroup.
This allows us to obtain examples of eventual positivity in situations where the leading eigenvalue of the generator is not known and where the eventual positivity is not necessarily strong.
As a result, we can build a semigroup in Example <ref> that has all the desired properties listed above:
it is eventually positive but not positive, and it is persistently irreducible but not eventually strongly positive.
§.§ Positive perturbations
We start with the following perturbation result that follows quite easily from the Dyson–Phillips series expansion of perturbed semigroups.
Let (e^tA)_t ≥ 0 be a C_0-semigroup on a Banach lattice E and let B ∈(E).
If e^tA B e^sA≥ 0 for all s,t ≥ 0, then
e^t(A+B) - e^tA≥ 0
for all t ≥ 0.
Note that under the assumptions of the proposition, B is positive.
From the perspective of eventual positivity, the point of Proposition <ref> is that any kind of eventual positivity of (e^tA)_t ≥ 0 is inherited by the perturbed semigroup (e^t(A+B))_t ≥ 0.
Let us also point out that if A is real, then so are all semigroups in the proposition, and hence one can rewrite the conclusion as
e^t(A+B)≥ e^tA
for every t ≥ 0.
We use the Dyson-Phillips series representation of the perturbed semigroup <cit.>:
define V_0(t) := e^tA for each t ≥ 0 and, inductively,
V_n+1(t)
:=
∫_0^t e^(t-s)A B V_n(s) ds
for each t ≥ 0 and each integer n ≥ 0, where the integral is to be understood in the strong sense.
Then
e^t(A+B) = ∑_n=0^∞ V_n(t)
for each t ≥ 0, where the series converges absolutely with respect to the operator norm.
It follows from the assumption of the proposition that V_1(t) ≥ 0 for all t ≥ 0.
Since, also due to the assumption, e^tAB ≥ 0 for all t ≥ 0, one thus obtains inductively that V_n(t) ≥ 0 for each t ≥ 0 and each integer n ≥ 1.
Thus, e^t(A+B) - e^tA = ∑_n=1^∞ V_n(t) ≥ 0 for every t, as claimed.
Proposition <ref> gives us the option to construct examples of eventually positive semigroups by easy perturbations, without having precise spectral information about the perturbed operator A+B available.
Let us first demonstrate this in a very simple finite-dimensional example.
Consider the self-adjoint 3 × 3-matrix
A :=
[ 7 -1 3; -1 7 3; 3 3 3 ]
=
U
[ 0 0 0; 0 8 0; 0 0 9 ]
U^*
,
where U = (u_1 u_2 u_3) is the unitary matrix with the columns
u_1 :=
1/√(6)[ -1; -1; 2 ]
,
u_2 :=
1/√(2)[ 1; -1; 0 ]
, and
u_3 :=
1/√(3)[ 1; 1; 1 ]
.
As the rescaled semigroup (e^-9t e^tA)_t ≥ 0 converges to the rank-1 projection u_3 u_3^* as t →∞, we see that (e^tA)_t ≥ 0 is (uniformly) eventually strongly positive.
(Alternatively, one could also derive this from <cit.>.)
However, one can say more in this concrete example:
From the diagonalization of A one obtains by a short computation that
A^n =
[ 8^n/2 + 9^n/3,  -8^n/2 + 9^n/3,  9^n/3;   -8^n/2 + 9^n/3,  8^n/2 + 9^n/3,  9^n/3;   9^n/3,  9^n/3,  9^n/3 ]
for every integer n ≥ 1 (but not for n=0, as we dropped the term 0^n from the formula).
So for every t ≥ 0, the third row and the third column of e^tA are positive.
Now consider the perturbation
B :=
[ 0 ; 0 ; b ]
for any number b ≥ 0.
Then the matrix e^tA B e^sA is positive for all s,t ≥ 0, so it follows from Proposition <ref> that the matrix semigroup generated by
A + B
=
[ 7 -1 3; -1 7 3; 3 3 3 + b ]
is (uniformly) eventually strongly positive for any fixed number b ≥ 0.
In the above example, note that B does not commute with A if b ≠ 0, so A+B does not have the same eigenvectors as A.
Hence, we were able to derive information about the eventual positivity of (e^t(A+B))_t ≥ 0 without knowing the eigenvectors of the generator.
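The eventual positivity claims in this example are also easy to check numerically. The following sketch (assuming NumPy and SciPy are available; the coupling strength b, the time grid, and the tolerance are arbitrary illustrative choices) spot-checks the hypothesis e^tA B e^sA ≥ 0, the conclusion e^t(A+B) ≥ e^tA of the perturbation proposition above, and the fact that e^tA itself fails to be positive for small t > 0:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[7.0, -1.0, 3.0],
                  [-1.0, 7.0, 3.0],
                  [3.0, 3.0, 3.0]])
    b = 2.0                                  # any fixed coupling strength b >= 0
    B = np.diag([0.0, 0.0, b])

    ts = np.arange(0.01, 3.0, 0.01)
    tol = -1e-9                              # tolerance for floating-point round-off

    # Hypothesis of the perturbation proposition: e^{tA} B e^{sA} >= 0 (spot-checked on a coarse grid).
    ok_cross = all(np.all(expm(t * A) @ B @ expm(s * A) >= tol)
                   for t in ts[::30] for s in ts[::30])

    # Conclusion of the proposition: e^{t(A+B)} - e^{tA} >= 0 for all t >= 0.
    ok_diff = all(np.all(expm(t * (A + B)) - expm(t * A) >= tol) for t in ts)

    # e^{tA} has a negative entry for small t > 0 but becomes entrywise positive eventually.
    neg_times = [t for t in ts if expm(t * A).min() < tol]
    print(ok_cross, ok_diff, max(neg_times))

The last printed value is the (grid) time after which all entries of e^tA are non-negative, which illustrates that the semigroup is eventually positive without being positive.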
In Example <ref>, we will use the above matrix semigroup to construct an eventually positive semigroup (in infinite-dimensions) that is persistently irreducible but not eventually strongly positive.
For this reason, it is desirable, in the situation of Proposition <ref>, to have a way to check whether the perturbed semigroup is persistently irreducible.
The following result is useful for this purpose.
Let (e^tA)_t ≥ 0 be a C_0-semigroup on a Banach lattice E and let B ∈(E).
Assume that e^tA B e^sA≥ 0 for all s,t ≥ 0
and that the semigroup (e^tA)_t ≥ 0 is individually eventually positive
(hence, so is (e^t(A+B))_t ≥ 0 according to Proposition <ref>).
Let I ⊆ E be a closed ideal that is uniformly eventually invariant under the perturbed semigroup (e^t(A+B))_t ≥ 0.
Then I is uniformly eventually invariant under both the unperturbed semigroup (e^tA)_t ≥ 0 and the operator family (e^tABe^sA)_s,t ≥ 0.
By saying that I is uniformly eventually invariant under (e^tABe^sA)_s,t ≥ 0 we mean that e^tABe^sAI ⊆ I for all sufficiently large s and t – i.e., the index set [0,∞) × [0,∞) is endowed with the product order given by (s_1, t_1) ≤ (s_2, t_2) if and only if s_1 ≤ s_2 and t_1 ≤ t_2.
We first show the eventual invariance under (e^tA)_t ≥ 0: let 0 ≤ f ∈ I.
Then it follows from the individual eventual positivity of the unperturbed semigroup and Proposition <ref> that 0 ≤ e^tA f ≤ e^t(A+B)f ∈ I for all sufficiently large t (where the time from which on the first inequality holds, depends on f).
Hence, I is individually eventually invariant under (e^tA)_t ≥ 0.
As I is closed, Corollary <ref> even gives the uniform eventual invariance.
To show the (individual and whence also uniform) eventual invariance under the family (e^tABe^sA)_s,t ≥ 0, choose times t_0,s_0 ≥ 0 such that 0 ≤ e^t(A+B)I ⊆ I for all t ≥ t_0 and e^sAI ⊆ I for all s ≥ s_0.
Let t ≥ t_0 and s ≥ s_0.
First consider a vector 0 ≤ f ∈ I that is contained in A = A+B.
Then
e^t(A+B) e^sA f ∈ I.
Since f ∈ dom A = dom(A+B) we can take the derivative either with respect to s or t.
As I is closed this yields
e^t(A+B) A e^sA f ∈ I
and
e^t(A+B) (A+B) e^sA f ∈ I.
Thus, e^t(A+B) B e^sAf ∈ I.
By assumption Be^sAf and e^tA B e^sAf are positive, so it follows by means of Proposition <ref>, that
0 ≤ e^tA B e^sAf ≤ e^t(A+B) B e^sAf and thus,
e^tA B e^sAf ∈ I.
Finally, consider a general vector 0 ≤ f ∈ I.
Choose an (f-dependent) time r_0 ≥ s_0 such that e^rAf ≥ 0 for all r ≥ r_0 and define
f_n := n∫_0^1/n e^rA e^r_0A f dr for each integer n ≥ 1.
Then one has 0 ≤ f_n ∈ I ∩ dom A for each n, and thus e^tA B e^sA f_n ∈ I by what we have already shown.
Since f_n → e^r_0A f, it follows that e^tA B e^(s+r_0)A f ∈ I for all t ≥ t_0 and all s ≥ s_0.
This shows that I is individually eventually invariant with respect to the operator family (e^tABe^sA)_s,t ≥ 0,
and the uniform eventual invariance thus follows from Corollary <ref>.
§.§ Coupling eventually positive semigroups
A trivial way to construct a new eventually positive semigroup from two given ones is to take their direct sum;
this construction is arguably not particularly interesting, though, as the new semigroup does not show any kind of interaction between the two original ones.
In particular, the new semigroup will not be (persistently) irreducible, even if the two original semigroups are.
By employing the simple perturbation result from Proposition <ref> and the eventual invariance result from Theorem <ref> we will now demonstrate how a coupling term between the two semigroups can be introduced that destroys neither eventual positivity nor persistent irreducibility.
Let (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 be C_0-semigroups on Banach lattices E_1 and E_2, respectively.
Let B_12: E_2 → E_1 and B_21: E_1 → E_2 be bounded linear operators
such that
e^tA_1 B_12 e^sA_2: E_2 → E_1
and
e^tA_2 B_21 e^sA_1: E_1 → E_2
are positive for all s,t ≥ 0
(so in particular, B_12 and B_21 are positive).
* If both semigroups (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 are individually/uniformly eventually positive, then so is the semigroup
(e^tC)_t ≥ 0 on E_1 × E_2 generated by
C :=
[ A_1 0; 0 A_2 ]
+
[ 0 B_12; B_21 0 ]
.
* If both (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 are individually eventually positive and persistently irreducible and both B_12 and B_21 are non-zero, then (e^tC)_t ≥ 0 is also persistently irreducible.
(a)
Due to Proposition <ref>, we have that e^tC - e^tA_1⊕ e^tA_2≥ 0 for each t ≥ 0, from which the assertion readily follows.
(b)
Let I ⊆ E_1 × E_2 be a closed ideal that is uniformly eventually invariant under (e^tC)_t ≥ 0.
Then I = I_1 × I_2 for closed ideals I_1 ⊆ E_1 and I_2 ⊆ E_2.
It follows from Theorem <ref> that we can find a time t_0 ≥ 0 such that I = I_1 × I_2 is invariant under both e^tA_1⊕ e^tA_2 and
[ 0 e^tA_1 B_12 e^sA_2; e^tA_2 B_21 e^sA_1 0 ]
for all s,t ≥ t_0.
As (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 are persistently irreducible, it follows that I_1 = {0} or I_1 = E_1 and likewise for I_2.
So it only remains to check that the two cases (1) I_1 = E_1, I_2 = {0} and (2) I_1 = {0}, I_2 = E_2 cannot occur.
Suppose I_1 = E_1.
As B_21 is non-zero, the spaces E_1,E_2 are non-zero, so we can choose a vector 0 ⪇ f_1 ∈ I_1.
Moreover, as the dual operator B_21' is also non-zero, there exists 0 ⪇φ_2 ∈ E_2' such that B_21'φ_2 ≠ 0.
Due to Theorem <ref>, the semigroup (e^sA_1)_s ≥ 0 satisfies Condition <ref><ref>, so there exists s ≥ t_0 such that
⟨φ_2, B_21 e^sA_1 f_1 ⟩
=
⟨ B_21' φ_2, e^sA_1 f_1 ⟩
≠ 0
and hence, B_21e^sA_1 f_1 ≠ 0.
By applying Corollary <ref> to the semigroup (e^tA_2)_t ≥ 0, we conclude that e^t_0 A_2 B_21e^sA_1 f_1 ≠ 0.
Due to the choice of t_0, we have the inclusion e^t_0 A_2 B_21e^sA_1 I_1 ⊆ I_2, so it follows that I_2 ≠{0}, as claimed.
By swapping the roles of I_1 and I_2, we see that the case I_1 = {0}, I_2 = E_2 cannot occur, either.
The situation in Corollary <ref> can be interpreted in a system theoretic sense:
On the state spaces E_1 and E_2 consider the input-output-systems
ẋ_1 = A_1 x_1 + B_1 u_1, ẋ_2 = A_2 x_2 + B_2 u_2,
y_1 = C_1 x_1, y_2 = C_2 x_2,
respectively,
where B_k: U_k → E_k are bounded linear operators defined on Banach spaces U_k and C_k: E_k → Y_k are bounded linear operators to Banach spaces Y_k.
Here, u_k: [0,∞) → U_k are interpreted as input signals and y_k: [0,∞) → Y_k are interpreted as output signals for each k ∈{1,2}.
Now we consider bounded linear operators G_21: Y_1 → U_2 and G_12: Y_2 → U_1 and couple both systems by setting u_1 := G_12 y_2 and u_2 := G_21y_1.
Thus we obtain the coupled differential equation
[ ẋ_1; ẋ_2 ]
=
[ A_1 0; 0 A_2 ][ x_1; x_2 ]
+
[ 0 B_1 G_12 C_2; B_2 G_21 C_1 0 ][ x_1; x_2 ]
,
which leads to the operator described in Example <ref>(a) if one sets B_12 := B_1 G_12 C_2 and B_21 := B_2 G_21 C_1.
We know from Proposition <ref> that if an irreducible uniformly eventually positive semigroup is analytic, then it is even uniformly eventually strongly positive. In the following, we use Corollary <ref> to construct a non-positive example which shows that one cannot drop the analyticity assumption in this proposition.
A non-positive but uniformly eventually positive semigroup which is persistently irreducible but not individually eventually strongly positive in the sense of (<ref>).
Set E_1 := ℝ^3 and E_2 := L^1(ℝ).
Let A_1 ∈ℝ^3 × 3 be the matrix A from Example <ref>, which generates a (uniformly) eventually strongly positive semigroup according to that example.
Moreover, we choose a semigroup (e^tA_2)_t ≥ 0 on L^1(ℝ) as follows:
for each t ∈ (0,∞) let k_t ∈ L^1(ℝ) be the density function of the Gamma distribution whose mean and variance are both equal to t, i.e., k_t(x) = 1/Γ(t) x^t-1 e^-x for x ∈ (0,∞) and k_t(x) = 0 for x ∈ (-∞,0].
Then k_s ⋆ k_t = k_s+t for all s,t > 0 and the convolution with k_t defines a positive C_0-semigroup on L^1(ℝ).
Additionally, for every t ≥ 0, let L_t: L^1(ℝ) → L^1(ℝ) be the left shift by t.
Since each L_t commutes with convolutions, we obtain a C_0-semigroup (e^tA_2)_t ≥ 0 on L^1(ℝ) by setting
e^tA_2 f := L_t (k_t ⋆ f)
for all t > 0 and f ∈ L^1() (and e^0A_2 := 𝕀).
This semigroup is clearly positive and it is (persistently) irreducible for the following reason:
let 0 ≤ f ∈ L^1() be non-zero and choose x_0 ∈ such that f has non-zero integral over (-∞, x_0).
Then the strict positivity of k_t on (0,∞) implies that k_t ⋆ f is strictly positive on [x_0,∞) for every t > 0.
Hence, e^tA_2f is strictly positive on [x_0-t,∞) for every t > 0.
So Condition <ref><ref> is satisfied and hence, persistent irreducibility of (e^tA_2)_t ≥ 0 follows by Proposition <ref>.
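The convolution structure used here is also easy to verify numerically. The following sketch (assuming NumPy and SciPy; the grid step, the truncation of the half-line, and the times s, t are arbitrary choices) checks the semigroup law k_s ⋆ k_t = k_s+t of the Gamma kernels on a discretisation of [0,∞):

    import numpy as np
    from scipy.stats import gamma
    from scipy.signal import fftconvolve

    dx = 1e-3
    x = np.arange(0.0, 40.0, dx)
    k = lambda t: gamma.pdf(x, a=t, scale=1.0)      # Gamma density with mean = variance = t

    s, t = 1.5, 2.5
    conv = fftconvolve(k(s), k(t))[: x.size] * dx   # discrete approximation of k_s * k_t
    print(np.max(np.abs(conv - k(s + t))))          # close to 0, up to discretisation error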
We now couple the C_0-semigroups (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 to obtain a semigroup on E_1 × E_2 = ℝ^3 × L^1(ℝ) as described in Corollary <ref>(a):
choose the operator B_21: ℝ^3 → L^1(ℝ) as B_21 z = z_3 𝟙_[1,2] for each z = (z_1,z_2,z_3) ∈ℝ^3
and B_12: L^1(ℝ) →ℝ^3 as B_12f = (∫_[-2,-1] f(x) dx) e_3 for each f ∈ L^1(ℝ); where e_3 ∈ℝ^3 denotes the third canonical unit vector.
Positivity of the semigroup (e^tA_2)_t ≥ 0 and of the third row and column of e^tA_1 for each t > 0 (see Example <ref>) yields that the assumptions of Corollary <ref> are satisfied.
Hence, part (a) of the corollary tells us that the semigroup (e^tC)_t ≥ 0 on E_1 × E_2 generated by the operator
C :=
[ A_1 0; 0 A_2 ]
+
[ 0 B_12; B_21 0 ]
is uniformly eventually positive.
However, the semigroup is not positive.
To see this we show that, for each z∈^3, the first component of e^tC(z,0) is equal to e^tA_1z for all t < 2.
Indeed, let us use the following notation:
for each n ∈ℕ_0 and each t ≥ 0 let V_n(t): E_1 × E_2 → E_1 × E_2 denote the n-th term of the Dyson–Phillips series representation of (e^tC)_t ≥ 0 that we already used in the proof of Proposition <ref> <cit.>.
For each t ∈ [0,∞), one has
V_0(t)
[ z; 0 ]
=
[ e^tA_1z; 0 ] and
V_1(t)
[ z; 0 ]
=
[ 0; ∫_0^t e^(t-s)A_2 B_21 e^sA_1z ds ]
.
Further, for each t ∈ [0,2) the function ∫_0^t e^(t-s)A_2 B_21 e^sA_1z ds ∈ L^1(ℝ) only lives within the spatial interval [-1,∞), so it is in the kernel of B_12.
Hence, for all t ∈ [0,2) one has
V_2(t)
[ z; 0 ]
=
[ 0; 0 ]
,
and thus
V_n(t)
[ z; 0 ]
=
[ 0; 0 ] for all n ≥ 2
.
So the first component of e^tC (z, 0) is equal to e^tA_1 z for all t ∈ [0,2), as claimed.
As the semigroup (e^tA_1) is not positive we can choose a vector 0 ≤ z ∈^3 such that e^tA_1z ≱0 for some t ∈ (0,2), so (e^tC)_t ≥ 0 is indeed not positive.
Next, we note that (e^tC)_t ≥ 0 is persistently irreducible.
This follows from Corollary <ref>(b):
the semigroup (e^tA_1)_t ≥ 0 is eventually strongly positive (see Example <ref>) and is thus persistently irreducible.
The semigroup (e^tA_2)_t ≥ 0 is, as observed above, also persistently irreducible.
As B_12 and B_21 are non-zero, part (b) of Corollary <ref> is applicable, as claimed.
Finally, we show that (e^tC)_t ≥ 0 is not eventually strongly positive in the sense of (<ref>).
Indeed, the Dyson–Phillips series representation of (e^tC)_t ≥ 0 and the aforementioned invariance property of the convolution with the kernels k_t can be used to check by induction that, for every z ∈ℝ^3 and each t ≥ 0, the second component of V_n(t)(z,0) is supported in the interval [1-t,∞) for each n ∈ℕ_0.
Hence, the same is true for the second component of e^tC(z,0) for every t ≥ 0.
So, e^tC(z, 0) is not a quasi-interior point of E_1 × E_2 for any t ≥ 0 and any 0 ≤ z ∈^3.
§.§ Acknowledgements
This article is based upon work from COST Action CA18232 MAT-DYN-NET, supported by COST (European Cooperation in Science and Technology).
The article was initiated during a very pleasant visit of both authors at the University of Salerno in Spring'22.
The first author is indebted to COST Action 18232 and the second author to the Department of Mathematics of the University of Salerno for financial support for this visit.
|
http://arxiv.org/abs/2307.05033v1 | 20230711061512 | Towards Anytime Optical Flow Estimation with Event Cameras | [
"Yaozu Ye",
"Hao Shi",
"Kailun Yang",
"Ze Wang",
"Xiaoting Yin",
"Yaonan Wang",
"Kaiwei Wang"
] | cs.CV | [
"cs.CV",
"cs.RO",
"eess.IV"
] |
Towards Anytime Optical Flow Estimation with Event Cameras
Yaozu Ye1, Hao Shi1, Kailun Yang2, Ze Wang, Xiaoting Yin, Yaonan Wang, and Kaiwei Wang2
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 12174341, in part by Shanghai SUPREMIND Technology Company Ltd, and in part by Hangzhou SurImage Technology Company Ltd.
Y. Ye, H. Shi, Ze Wang, Xiaoting Yin, and K. Wang are with the State Key Laboratory of Modern Optical Instrumentation and the National Engineering Research Center of Optical Instrumentation, Zhejiang University, Hangzhou 310027, China (email: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).
K. Yang and Y. Wang are with the School of Robotics and the National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, Changsha 410082, China (email: [email protected]; [email protected]).
H. Shi is also with Shanghai SUPREMIND Technology Co., Ltd, Shanghai 201210, China (email: [email protected]).
1Equal contribution.
2Corresponding authors: Kaiwei Wang and Kailun Yang.
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Event cameras are capable of responding to log-brightness changes in microseconds. Their characteristic of producing responses only in changing regions makes them particularly suitable for optical flow estimation. In contrast to the super-low-latency response speed of event cameras, however, existing datasets collected via event cameras only provide optical flow ground truth at a limited frame rate (e.g., 10 Hz), greatly restricting the potential of event-driven optical flow. To address this challenge, we put forward a high-frame-rate, low-latency event representation, the Unified Voxel Grid, which is sequentially fed into the network bin by bin. We then propose EVA-Flow, an EVent-based Anytime Flow estimation network, to produce high-frame-rate event optical flow with only low-frame-rate optical flow ground truth for supervision. The key component of our EVA-Flow is the stacked Spatiotemporal Motion Refinement (SMR) module, which predicts temporally-dense optical flow and enhances the accuracy via spatial-temporal motion refinement. The time-dense feature warping utilized in the SMR module provides implicit supervision for the intermediate optical flow. Additionally, we introduce the Rectified Flow Warp Loss (RFWL) for the unsupervised evaluation of intermediate optical flow in the absence of ground truth. This is, to the best of our knowledge, the first work focusing on anytime optical flow estimation via event cameras.
A comprehensive variety of experiments on MVSEC, DSEC, and our EVA-FlowSet demonstrates that EVA-Flow achieves competitive performance, super-low latency (5 ms), the fastest inference (9.2 ms), time-dense motion estimation (200 Hz), and strong generalization. Our code will be available at <https://github.com/Yaozhuwa/EVA-Flow>.
event-based, optical flow, deep learning.
§ INTRODUCTION
Optical flow estimation is a fundamental task in computer vision, which aims to calculate the motion vector of each pixel in two consecutive frames of images.
This dense prediction task has a wide range of applications in various fields, such as video compression <cit.>, video frame interpolation <cit.>, autonomous driving <cit.>, robot navigation <cit.>, etc.
The accuracy of optical flow estimation directly affects the accuracy of subsequent autonomous perception tasks, making it a hot topic in the field of computer vision <cit.>.
The event camera <cit.> is a new type of bio-inspired sensor that only responds to changes in the brightness of the environment. Compared with the traditional frame-based camera, which integrates the brightness at a certain time interval (exposure time) and outputs an image, the event camera has no exposure time and responds once the log value of the pixel intensity changes beyond a certain threshold, and the response is microsecond-level and asynchronous.
As optical flow estimation aims to estimate the motion of a scene which aligns with the event camera's ability to detect changes, it makes the event camera a suitable choice for optical flow estimation <cit.>.
Furthermore, event cameras offer distinct advantages for optical flow estimation, such as high temporal resolution and high dynamic range (>120dB compared to about 60dB of traditional frame-based cameras <cit.>).
The high dynamic range capability of event cameras can solve the problem of under- or over-exposed images produced by traditional frame cameras in poor illumination conditions (such as at night) or high-dynamic scenes (such as cars entering and exiting tunnels) <cit.>, which can cause inaccurate optical flow estimation in subsequent processing.
The high temporal resolution of event cameras eliminates the problem of motion blur even in high-speed motion scenes.
Additionally, the high temporal resolution output of event cameras provides hardware support for super high-frame-rate optical flow estimation.
However, most current event-based optical flow estimation methods fail to fully exploit the high temporal resolution and low-latency characteristics of events, resulting in low frame rates for the estimated optical flow.
They typically transform the event stream over a time interval into a tensor represented as voxels, which are then fed into a frame-based optical flow estimation network similar to image-based approaches <cit.>.
Moreover, the output frame rate is limited by the underlying dataset.
The two most commonly-used benchmarks for event-based optical flow estimation, MVSEC <cit.> and DSEC <cit.> have low frame rates for their ground-truth optical flow, with DSEC having a frame rate of 10Hz and MVSEC having a frame rate of 20Hz.
While significant progress has been witnessed in the field, the limited temporal resolution greatly restricts the potential of event-based optical flow estimation in real-world applications.
We present EVA-Flow, a deep architecture designed for Event-based Anytime optical Flow estimation.
EVA-Flow delivers numerous benefits, such as low latency, high frame rate, and high accuracy.
Fig. <ref>. (a-c) illustrates the differences between previous event-based optical flow estimation and our proposed approach toward anytime optical flow estimation. While other methods are limited by the dataset's frame rate, this framework can achieve low latency and higher frame rate optical flow outputs. Fig. <ref>. (d) illustrates the discrepancy between the predictions of this model and E-RAFT using a sample from the test sequence of the DSEC dataset. The scene depicts a car making a turn inside a tunnel. Our EVA-Flow offers continuous trajectory tracking within a 100 ms time range of the input sample. The transition from yellow to red indicates varying point positions at different times in the trajectory of the car turning inside the tunnel. In contrast, E-RAFT <cit.> is restricted to generating a single optical flow.
To achieve high-frame-rate event output, we first propose the Unified Voxel Grid (UVG), which generates a bin of UVG representation as soon as the events in a small time interval are ready.
Our UVG representation achieves a frame rate up to N times higher and a latency N times lower than the voxel grid representation <cit.>.
N is a hyperparameter that can be adjusted to meet the specific requirements of the task.
These bins are then sequentially fed into our EVA-Flow, which consists of a multi-scale encoder and our stacked Spatiotemporal
Motion Recurrent (SMR) module.
SMR is proposed to predict temporally-dense optical flow and enhances the accuracy via spatial-temporal motion refinement.
Notably, the architecture is specially designed for extreme low-latency optical flow estimation: as each event bin arrives, it is immediately processed through the SMR module, yielding the optical flow result for the current time instance.
This architectural design eliminates the need to wait for all bins of the entire voxel grid to arrive in order to achieve high-frame-rate optical flow output.
Aside from the representation, for optical flow estimation, it is crucial to have a reliable method of measuring the consistency of the flow from one frame to the next.
To achieve this, a loss function is usually designed to ensure that the optical flow converges.
However, in the absence of ground truth for supervision, we do not directly supervise the intermediate optical flow, but instead, implicitly supervise it using the warping mechanism.
Additionally, we introduce the Rectified Flow Warp Loss (RFWL) as a method to accurately assess the precision of event-based optical flow in an unsupervised manner. Moreover, we utilize RFWL to assess the accuracy of our time-dense optical flow estimation.
By comparing our EVA-Flow with other event-based optical flow estimation methods on widely recognized public benchmarks, including the DSEC <cit.> and MVSEC <cit.> datasets, we have determined that our approach attains performance comparable to the state-of-the-art E-RAFT <cit.> method, with the added benefits of lower latency (5ms vs. 100ms), higher frame rates (200Hz vs. 10Hz), and faster single-frame optical flow estimation speed (9.2ms vs. 93ms).
Additionally, the evaluation of time-dense optical flow on DSEC, MVSEC, and EVA-FlowSet underscores the reliability of our framework's time-dense optical flow, even under the constraint of being solely supervised with low-frame-rate optical flow ground truth during training.
Furthermore, the Zero-Shot results obtained from both the MVSEC dataset and our EVA-FlowSet clearly illustrate that our method exhibits superior generalization performance in comparison to E-RAFT.
In summary, we deliver the following contributions:
* We present EVA-Flow, an Event-based Anytime optical Flow estimation framework that achieves high-frame-rate event-based optical flow with low-frame-rate optical flow ground truth as its sole source of supervision. EVA-Flow is characterized by its low latency, high frame rate, fast inference, and high accuracy.
* We propose the Unified Voxel Gird representation, which is of high frame rate and low latency.
* We propose RFWL to accurately evaluate the precision of event-based optical flow in an unsupervised manner.
* Extensive experiments on public benchmarks show EVA-Flow achieves competitive accuracy, super-low latency, fastest inference, time-dense motion estimation, and strong generalization.
§ RELATED WORK
§.§ Optical Flow Estimation
Recently, the field of optical flow estimation has witnessed remarkable advancements owing to the progress of deep learning.
FlowNet <cit.> firstly introduces an end-to-end network with a U-Net architecture to estimate optical flow directly from image frames and many subsequent works <cit.> follow this architecture.
Later, <cit.> utilize a warp mechanism to conduct a multi-scale refinement, achieving higher accuracy with less computational costs.
RAFT <cit.> proposes a recurrent optical flow network that uses all-pair correlations and GRU iterations to produce high-precision flow results through iterative refinement, which achieves a significant improvement in accuracy. Most of the later optical flow studies are, in essence, improvements upon RAFT.
CSFlow <cit.> introduces a decoupled strip correlation layer to enhance the capacity to encode global context.
FlowFormer <cit.> explores the self-attention mechanism in recurrent flow networks to achieve further flow accuracy boosts, albeit with larger parameter usage.
In summary, the use of warp mechanism, correlation volume, and RNN with iterative refinement has brought about significant advancements in the field of optical flow, allowing for improved accuracy and more efficient computation.
Unlike previous works, we explore anytime optical flow estimation with event cameras. We aim to overcome the limited time interval imposed by existing flow datasets and achieve ultra-high-frame-rate flow estimation with high accuracy.
§.§ Event-based Optical Flow
Event optical flow estimation is mainly divided into model-based methods and learning-based methods.
There are two main approaches to model-based event optical flow estimation.
One approach is to fit the local spatiotemporal plane of the event point cloud and use the plane fitting parameters to compute the optical flow value <cit.>.
Another approach builds on the contrast maximization framework <cit.>, which calculates the optical flow by optimizing the contrast of the motion-compensated event frame <cit.>. However, model-based event optical flow algorithms <cit.> generally require denoising algorithms like <cit.> for event preprocessing to improve the accuracy.
Thanks to the availability of large-scale event optical flow datasets <cit.>, learning-based event optical flow estimation algorithms <cit.> have achieved superior accuracy compared to model-based algorithms, which is also more robust to event noise. Next, we provide a brief review of works on learning-based event optical flow estimation.
EV-FlowNet <cit.> firstly utilizes bilinear interpolation on discrete events in both spatial and temporal dimensions. This transformation converts sparse and continuous events into a voxel-based tensor called Voxel Grid.
Subsequently, the Voxel Grid is input to FlowNet <cit.> for estimating event optical flow.
This paradigm has been adopted by numerous subsequent works <cit.>.
E-RAFT <cit.> achieves remarkable flow accuracy boosts by using event VoxelGrids with two-time intervals as the network's input, recognized as the state of the art on the DSEC dataset <cit.>.
Recently, TMA <cit.> and IDNet <cit.>, have surpassed E-RAFT's performance for their better utilization of the rich information in the temporal dimension of event data.
The TMA network utilizes the temporal continuity of event input by calculating the correlation between the first bin and all other bins of the event Voxel Grid to acquire more precise motion information, achieving higher accuracy than E-RAFT with fewer iterations.
IDNet sequentially feeds event Voxel Grids into RNN and estimates the flow for the entire time duration.
Next, the network cascade and warp mechanisms conduct fine-grained processing on the flow, using the previously predicted flow in each cascade for direct deblurring of the event bins.
While both TMA and IDNet make use of the continuous temporal information of event data, they can only achieve low-frame-rate optical flow outputs that match the frame rate of the ground truth, i.e., 10FPS on DSEC.
The event information provides brightness variation information of high temporal resolution, but only a few high-frame-rate event optical flow estimation methods are available.
The limitation of the ground-truth frame rate of current datasets of event optical flow is one reason for this circumstance.
Due to the lack of time-continuous ground truth, several studies <cit.> have employed unsupervised contrast-maximization-based losses <cit.>, resulting in lower accuracy for these methods.
Ponghiran et al. <cit.> utilize recurrent neural networks to obtain temporally-dense optical flow results, which require sequential training and only evaluate the flow results at time locations with ground truth. The intermediate optical flow of this method has not undergone validation, and the final optical flow accuracy is relatively low.
In contrast, our proposed approach, EVA-Flow, achieves a high level of accuracy, extremely low data latency, high frame rates, and excellent model generalization through comprehensive end-to-end training, requiring only low frame rate optical flow ground truth as supervision.
§ METHODOLOGY
In this paper, we propose, for the first time, an Event Anytime Flow Estimation (EVA-Flow) framework, as shown in Fig. <ref>, which achieves high accuracy, high frame rate, and low latency with supervision solely dependent on low frame rate optical flow.
EVA-Flow is designed to overcome the frame rate limitations of existing event-based optical flow datasets and produce temporally-dense optical flow estimation with accuracy boosts compared to contemporary methods.
We first put forward a Unified Voxel Grid Representation (UVG) in Sec. <ref>, which generates event representations with low latency.
These UVG bins are sequentially fed into EVA-Flow, where high-frame-rate optical flow estimations are generated with low latency using a Spatiotemporal Motion Recurrent (SMR) module, which is detailed in Sec. <ref>.
Then, the details of the loss function and the supervision regime are described in Sec. <ref>.
Finally, we propose a new criterion to evaluate the reliability of intermediate high-frame-rate optical flow in Sec. <ref>.
§.§ Event Representation: Unified Voxel Grid
To achieve anytime event-based optical flow estimation, we propose an Unified Voxel Grid (UVG) with high-frame-rate representations.
The event representation is crucial for the event-based model <cit.>.
Voxel Grid <cit.> is a commonly used event representation that utilizes the temporal dimension of event data, and its effectiveness for optical flow estimation tasks has been verified in many studies <cit.>.
The principle of the Voxel Grid is to discretize the spatially sparse and temporally-continuous information of events by averaging over time, and then utilize bilinear interpolation in both spatial and temporal dimensions to obtain a tensor representation of the event data in a voxel form.
The tensor has a dimension of B×H×W, where B represents the number of time steps, and its size determines the sampling accuracy of the event data in the temporal dimension.
Each channel of the Voxel Grid can be viewed as an event representation at a specific time.
Given a set of events {(x_i, y_i, t_i, p_i)}_i ∈[1, N] and B bins to discretize the time dimension, the definition of Voxel Grid is depicted as follows:
t_i^* = (B-1)(t_i-t_1) /(t_N-t_1),
k_b(a) = max (0,1-|a|),
VG(x, y, t) = ∑_i p_i k_b(x-x_i) k_b(y-y_i) k_b(t-t_i^*),
where k_b(a) denotes the bilinear sampling kernel.
To achieve continuous predictions of high-frame-rate optical flow, it is necessary to obtain high-frame-rate event representations.
Every input requires a uniform format for event representation.
However, the Voxel Grid <cit.> represents the first and last channels differently as compared to the others, as shown in Fig. <ref>.
The event interpolation time range utilized for the first and last channels in Voxel Grid is half of that used for the other channels, leading to inconsistency, which is adverse to continuous predictions.
As a solution, we propose Unified Voxel Grid.
In contrast to the Voxel Grid, we fix the time interval τ for each bin, and each bin is created by interpolating the events within the neighbouring time window (t_b-τ < t_i < t_b+τ).
The definition of UVG is as follows:
Bin_b(x,y) = ∑_i p_i k_b(x-x_i) k_b(y-y_i) k_b((t_i-t_b)/τ)
UVG = concat([Bin_0, Bin_1,...,Bin_B-1]).
This design makes it possible to obtain the representation of the current time every τ (7.14 ms for 15 channels with the Unified Voxel Grid on DSEC), whereas the Voxel Grid must wait for the entire period of (B-1)×τ (100 ms on DSEC) to generate a complete event representation.
This low-latency input can be coupled with our low-latency time-dense optical flow estimation architecture for tracking the optical flow at each τ timestep as it arrives which is essential for high-frame-rate optical flow estimation.
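As an illustration, a minimal NumPy sketch of constructing a single UVG bin by bilinear voting is given below; the function name, the array layout, and the assumption that polarities lie in {-1, +1} are our own illustrative choices rather than the released implementation:

    import numpy as np

    def uvg_bin(events, t_b, tau, height, width):
        # events: (N, 4) float array of (x, y, t, p); returns one bin of the Unified Voxel Grid.
        x, y, t, p = events.T
        w_t = np.maximum(0.0, 1.0 - np.abs(t - t_b) / tau)       # temporal kernel k_b((t - t_b)/tau)
        keep = w_t > 0                                           # only events with |t - t_b| < tau vote
        x, y, p, w_t = x[keep], y[keep], p[keep], w_t[keep]
        grid = np.zeros((height, width))
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        for ox in (0, 1):                                        # spatial bilinear kernel
            for oy in (0, 1):
                w_xy = (np.maximum(0, 1 - np.abs(x - (x0 + ox)))
                        * np.maximum(0, 1 - np.abs(y - (y0 + oy))))
                xi = np.clip(x0 + ox, 0, width - 1)
                yi = np.clip(y0 + oy, 0, height - 1)
                np.add.at(grid, (yi, xi), p * w_xy * w_t)
        return grid

A new bin can thus be emitted every τ, as soon as the events of its neighbouring time window have arrived, which is what enables the serialized, low-latency input described next.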
§.§ Event Anytime Flow Estimation Framework
In this subsection, we propose the Event Anytime Flow (EVA-Flow) framework, which predicts time-dense optical flow from serial event bins with low latency.
The architecture of EVA-Flow is illustrated in Fig. <ref>.
The raw events are converted to UVG and then sequentially fed into the encoder, bin by bin, to generate a 4-level feature pyramid (f_i^1 → f_4^N).
The feature pyramid is fed into our stacked Spatiotemporal Motion Recurrent (SMR) modules to update the optical flow in both temporal and spatial dimensions while outputting the refined optical flow results from t_0 to the current bin.
Serialized, low-latency input and output.
The UVG inputs are sequentially fed into the network, which contributes to the low latency at the data entry level.
During the inference phase, after entering an event bin (excluding the first bin used for initialization), the current optical flow can be directly predicted using the encoder and the stacked SMR module.
During the training phase, in order to facilitate end-to-end training, we follow the frame rate of the dataset and input a full period of the UVG at once as a training sample. Given a UVG input of shape (N, B, H, W), the individual bins (i.e., channels) are folded into the batch dimension (resulting in a shape of (N×B, H, W)) before being passed through the encoder.
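A small illustration of this batching bookkeeping during training (shapes and names are illustrative assumptions, not the released implementation):

    import torch

    uvg = torch.randn(4, 15, 480, 640)   # (N, B, H, W): a mini-batch of N training samples
    N, B, H, W = uvg.shape
    bins = uvg.reshape(N * B, 1, H, W)   # fold the bins into the batch axis for the shared encoder
    # feats = encoder(bins)              # per-bin feature pyramid
    # feats = feats.view(N, B, -1, feats.shape[-2], feats.shape[-1])  # regroup before the SMR stack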
Once the feature maps are obtained, the corresponding feature maps of different bins are sequentially input into the stacked SMR module, which outputs time-dense optical flow (𝐕_0, 1→𝐕_0, B-1).
Spatiotemporal Motion Recurrent (SMR) module.
In order to predict time-dense optical flow and achieve high accuracy through iterative refinement, we propose the stacked SMR modules, whose structure is illustrated in Fig. <ref>.
In the vertical direction, as illustrated in Fig. <ref> and Fig. <ref>, SMR performs an iterative refinement of optical flow at the current moment in the spatial dimension.
The input of this module includes the hidden state of the lower-resolution SMR module from the previous level, the optical flow output, and the feature inputs of the current resolution level.
First, we warp the input feature f_j^i using the coarse optical flow estimated from the previous level to obtain the warped feature f̂_j^i.
Next, the optical flow 𝐕_0,j^i-1 from the previous level, the warped features, and the upsampled hidden state Ht_j^i-1 from the previous level are concatenated, and used as input for the Convolutional Gated Recurrent Unit (ConvGRU) <cit.>.
Subsequently, the hidden state output of the current ConvGRU is used by the Flow Head to predict the residual optical flow.
We employ two convolutional layers for the Flow Head, which is the same as RAFT <cit.>.
The refined optical flow 𝐕_0,j^i is achieved by adding the predicted residual optical flow Δ𝐕_0,j^i to the previously predicted optical flow 𝐕_0,j^i-1. The current level's predicted optical flow and hidden state are also inputted to the next level in the same way, for further refinement. This continues until the final level predicts the ultimate refined optical flow.
In the horizontal direction in Fig. <ref>, SMR continuously estimates optical flow at each time step.
In general, SMR serves as a motion-updating module in both spatial and temporal dimensions.
Within the same layer in the horizontal direction, SMR modules leverage shared weights, while in the vertical direction, different SMR weights are used due to varying input resolutions and channel numbers.
In general, the number of SMRs is equal to the number of levels for spatial refinement of the optical flow estimation.
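A simplified PyTorch sketch of a single SMR level is given below. The channel sizes, the exact ConvGRU variant, and the way the coarse flow and hidden state are upsampled from the previous level are our own assumptions for illustration; the sketch only mirrors the warp–ConvGRU–residual-flow structure described above:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvGRU(nn.Module):
        # Minimal convolutional GRU cell.
        def __init__(self, in_ch, hid_ch):
            super().__init__()
            self.convz = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
            self.convr = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
            self.convq = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

        def forward(self, x, h):
            z = torch.sigmoid(self.convz(torch.cat([x, h], dim=1)))
            r = torch.sigmoid(self.convr(torch.cat([x, h], dim=1)))
            q = torch.tanh(self.convq(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * q

    def backward_warp(feat, flow):
        # Sample feat at (pixel + flow) so that it is aligned with the reference (t0) frame.
        n, _, h, w = feat.shape
        ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                                torch.arange(w, device=feat.device), indexing="ij")
        base = torch.stack([xs, ys], dim=0).float().unsqueeze(0)        # (1, 2, H, W)
        coords = base + flow
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        return F.grid_sample(feat, torch.stack([gx, gy], dim=-1), align_corners=True)

    class SMR(nn.Module):
        def __init__(self, feat_ch=64, hid_ch=64):
            super().__init__()
            self.gru = ConvGRU(in_ch=feat_ch + 2, hid_ch=hid_ch)
            self.flow_head = nn.Sequential(nn.Conv2d(hid_ch, 128, 3, padding=1),
                                           nn.ReLU(inplace=True),
                                           nn.Conv2d(128, 2, 3, padding=1))

        def forward(self, feat_j, flow_prev, hidden_prev):
            # Upsample the coarse flow / hidden state coming from the lower-resolution level.
            flow_up = 2.0 * F.interpolate(flow_prev, scale_factor=2, mode="bilinear", align_corners=True)
            hidden_up = F.interpolate(hidden_prev, scale_factor=2, mode="bilinear", align_corners=True)
            feat_warped = backward_warp(feat_j, flow_up)                 # align the bin-j feature to t0
            hidden = self.gru(torch.cat([feat_warped, flow_up], dim=1), hidden_up)
            flow = flow_up + self.flow_head(hidden)                      # residual refinement of V_{0,j}
            return flow, hidden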
Time-dense feature warping.
Unlike other methods <cit.> that rely on the assumption of uniform motion between two ground truth flow, our approach can handle variations in motion speeds throughout the entire UVG period.
In each warping step, we use the dense temporal optical flow estimation of the previous level to warp the features of each time step.
This structure guarantees consistent prediction patterns of optical flow for each resolution level and time step (SMR modules at the same level are using shared weights), implicitly providing SMR modules with the ability to predict optical flow at the current time step.
By implementing this approach, we only need to supervise the final output 𝐕_0,B-1 for implicit monitoring of optical flow at intermediate time steps.
§.§ Supervision
Following E-RAFT <cit.>, we also used L1 loss as the loss function. Specifically, we calculate the L1 distance between the ground-truth optical flow and the last optical flow prediction 𝐕_0,B-1 of our model as the final loss function.
The loss function is defined as follows:
Loss=||𝐅_gt-𝐅_pre||_1.
Due to network structure design (see explanation in Sec. <ref> in Time-dense feature warping), we don't need to supervise the optical flow prediction at intermediate time steps.
In this way, we overcome the frame rate limitation of the dataset and achieve low-latency, high-frame-rate, and high-precision event optical flow estimation.
§.§ Rectified Flow Warp Loss
Motion compensation <cit.> of events involves aligning all events to a reference time using estimated event optical flow. Specifically, for every event, using the flow value of the event position and the time difference between the event occurrence time and the reference time, we can calculate the position of the event point at the reference time. This process is referred to as motion compensation.
The motion-compensated event count image is called Motion-Compensated (MC) event frame.
Given events E={(x_i, y_i, t_i, p_i)}_i ∈[1, N], a per-pixel flow estimate 𝐕, and a reference time t^', each event is warped to
([ x_i^'; y_i^' ])=([ x_i; y_i ])+(t^'-t_i)𝐕(x_i,y_i),
and the MC frame I(E, 𝐕) is the per-pixel count image of the warped events {(x_i^', y_i^')}.
Accurate flow estimation ensures that events generated by the same edge are aligned to the same pixel location, resulting in higher contrast in the MC frame than in the original event frame.
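A minimal NumPy sketch of this warping step (variable names and the rounding of warped coordinates to integer pixels are our own simplifications):

    import numpy as np

    def motion_compensate(events, flow, t_ref, height, width):
        # events: (N, 4) array of (x, y, t, p); flow: (H, W, 2) flow field in pixels per unit time.
        x = events[:, 0].astype(int)
        y = events[:, 1].astype(int)
        t = events[:, 2]
        xw = np.round(x + (t_ref - t) * flow[y, x, 0]).astype(int)
        yw = np.round(y + (t_ref - t) * flow[y, x, 1]).astype(int)
        frame = np.zeros((height, width))
        keep = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)   # some events leave the image
        np.add.at(frame, (yw[keep], xw[keep]), 1.0)
        return frame

Note that the mask in the accumulation step discards events warped outside the sensor; this loss of events is precisely the effect that the rectified loss introduced below corrects for.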
Stoffregen et al. <cit.> utilized this principle to propose a Flow Warp Loss (FWL), which can assess the accuracy of event optical flow estimation in an unsupervised way.
FWL:=σ^2(I(E, 𝐕))/σ^2(I(E, 0))
In theory, a more accurate estimation of optical flow results in a clearer, higher contrast, and larger variance for the image after motion compensation.
This is because events occurring at the same location in the scene are aligned to the same point. Therefore, a more accurate prediction of optical flow leads to a larger FWL. In general, the image tends to become sharper after motion compensation, indicating an FWL value greater than 1.
However, in practice, we observe that FWL can be smaller than 1, yet the MC frame is clearly sharper than the original one, as shown in Fig. <ref>.
We have identified the same phenomenon (FWL < 1) in another work <cit.>.
After analysis, we discover that motion compensation has warped some events out of the image, resulting in fewer events in the MC frame compared to the original frame.
Thus, we propose a Rectified Flow warp Loss (RFWL), which normalizes image brightness based on the total event count (i.e., the sum of pixel values) in the event frames before and after motion compensation.
The formulation is given as follows:
RFWL:=σ^2(I(E, 𝐕)/∑ I(E, 𝐕))/σ^2(I(E, 0)/∑ I(E, 0)).
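In code, the two scores differ only by normalising each count image by its total event mass before taking the variance (a small sketch, assuming the count images are produced by a routine such as the one above):

    import numpy as np

    def fwl(mc_frame, raw_frame):
        # Flow Warp Loss: variance ratio of the motion-compensated and the raw event count images.
        return mc_frame.var() / raw_frame.var()

    def rfwl(mc_frame, raw_frame):
        # Rectified FWL: normalise by the total event count first, so that events warped out of
        # the image do not bias the comparison.
        return (mc_frame / mc_frame.sum()).var() / (raw_frame / raw_frame.sum()).var()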
As shown in Fig. <ref>, FWL is inadequate at precisely assessing the accuracy of optical flow estimation.
MC frames, as shown in columns 1, 2, and 4 exhibit sharper resolution compared to the original event frames, whereas the FWL values remain less than 1. Interestingly, the MC frame in column 3, which has a significantly sharper resolution than the original event frame, yields an FWL value of only 1.12.
The FWL values of these examples are inconsistent with the actual observational results. Conversely, our proposed RFWL can precisely indicate the relative sharpness of MC frames in relation to the original event frames.
§ EXPERIMENTS
Our model underwent evaluation on the DSEC <cit.>, MVSEC <cit.>, and self-collected EVA-FlowSet.
Sec <ref> offers an introduction to the aforementioned datasets.
Sec. <ref> presents the implementation details of our approach.
Sec. <ref> demonstrates the evaluation of our model using the regular flow evaluation prototype.
Sec. <ref> reveals the evaluation results of our model for time-dense optical flow.
Sec. <ref> encompasses ablation studies.
§.§ Datasets
DSEC. The DSEC dataset, introduced by Gehrig et al. <cit.>, consists of 24 sequences captured in real-world outdoor driving scenes. The dataset includes a total of 7,800 training samples and 2,100 testing samples, captured at a resolution of 640×480. It covers both daytime and nighttime scenarios, encompassing small and large optical flows. Moreover, the DSEC dataset offers high-precision sparse ground truth values for optical flow.
As the dataset does not include an official validation set, we divided the training dataset randomly using a fixed seed. To create the training and validation sets, we allocated them in a 4:1 ratio. Consequently, only 80% of the training data was utilized for training our model.
MVSEC. MVSEC <cit.> is a classic real event optical flow dataset that comprises sequences from various indoor and outdoor scenes with a resolution of 346×260. The event density of the dataset is relatively sparse, and the optical flow ground truth distribution mostly highlights small flows. As a result, two different time intervals, i.e., dt=1 and dt=4, are used for accuracy evaluation.
dt=1 means that adjacent grayscale frames are used as a sample, whereas dt=4 represents a four-fold increase in the grayscale frame interval. We follow the E-RAFT convention <cit.>, conduct training solely on the outdoor_day2 sequence, and evaluate on 800 samples from the outdoor_day1 sequence.
EVA-FlowSet. We collect and present EVA-FlowSet, a real-world dataset to assess the generalization and time-dense optical flow of our model.
For data collection, we utilized a Davis-346 camera, which is the same type used in the MVSEC dataset. The dataset consists of four sequences in total, with two sequences depicting fast-moving scenes and the other two showcasing regular motion scenes. During the data-collection process, the camera remains stationary, capturing the movement of a checkerboard calibration board as it travels along a curved trajectory at different predetermined speeds.
§.§ Implementation Details
For the DSEC dataset <cit.>, we use the Adam optimizer with a batch size of 6 and a learning rate of 5×10^-4 to train for 100k iterations on the training set we divided, which accounts for 80% of the total dataset. Then, we reduce the learning rate by a factor of 10 and continue training for another 10k iterations. We applied the same training settings to networks employing different numbers of bins. During the model training on the DSEC dataset, two online data augmentation methods, namely random cropping and horizontal flipping, were employed. Random cropping was performed with a size of 288×384, while horizontal flipping was applied with a probability of 50%.
For the MVSEC dataset <cit.>, we employ the Adam optimizer with a batch size of 3 and a learning rate of 1×10^-3 to train for 120k iterations.
We utilize random cropping with a size of 256×256 and horizontal flipping with a probability of 50% during training.
For MVSEC scenes with dt=1, we use the setting with bins=3; for MVSEC with dt=4, we use the setting with bins=9.
§.§ Regular Flow Evaluation Prototype
Evaluation on DSEC.
Tab. <ref> displays the evaluation results on the DSEC flow benchmark <cit.>.
It is worth noting that this approach is the only optical flow estimation method that has ultra-low data latency (5ms) and a high output frame rate (200Hz), while also achieving competitive accuracy compared to state-of-the-art methods.
Our approach can estimate the current optical flow as soon as the event data arrives every 5 ms, without waiting for the entire sample period (100 ms) to be completed, resulting in extremely low data latency and a super-high frame rate (200 Hz). In contrast, other approaches, constrained by the frame rate of the ground-truth dataset, can only provide optical flow outputs at 10 Hz. Furthermore, the computational cost of a single optical flow estimation in our approach is remarkably low: our runtime is the lowest among all compared methods, at only 10% of E-RAFT's runtime (runtimes were measured on a 2080Ti GPU).
The qualitative results of our method on the DSEC flow dataset are shown in Fig. <ref>. To make a more intuitive comparison of the accuracy of optical flow estimation, we also visualized the motion-compensated (MC) event frames.
The sharper the MC frames, the more precise the optical flow estimates. It can be seen from the Regions of Interest (ROIs) in Fig. <ref> that, compared with E-RAFT, our model can better distinguish the outline of small objects (such as poles) and obtain more accurate optical flow estimates. Our analysis indicates that the implicit use of warp-aligned edges of events in our approach to obtain motion features is the distinguishing factor compared to E-RAFT, which instead uses correlation volume. For small objects with simple and clear contours, it is relatively easy to warp and align event features.
However, due to the limited textures, the correlation-based method, which relies on measuring feature similarity to obtain motion features, fails to obtain high-quality motion features when there are no sufficient texture features, resulting in the inferior performance of E-RAFT in these areas.
Evaluation on the MVSEC dataset.
The evaluation results of the MVSEC benchmark <cit.>
are presented in Tab. <ref>.
SSL, USL, and SL stand for self-supervised-, unsupervised-, and supervised learning, respectively.
While our model's accuracy is slightly lower than that of E-RAFT based on the data in the table, it is worth highlighting that our zero-shot results display a significantly better accuracy when directly testing the performance of the DSEC-trained model on the MVSEC dataset.
Especially noteworthy is our model's performance at dt=4, where our model's EPE is halved compared to that of E-RAFT (0.96 vs. 1.93).
These results demonstrate the superior generalizability of the proposed EVA-Flow.
The qualitative results of testing MVSEC under the zero-shot setting, i.e., the model is only trained on DSEC, can be found in Fig. <ref>.
Owing to the small optical flow of MVSEC, the motion compensation effect is indistinct under the dt=1 setting.
Hence, we apply a longer time range of grayscale frames, i.e., dt=4.
There is a marked discrepancy in motion speed between the MVSEC and DSEC datasets, as depicted in Fig. <ref> and Fig. <ref>. More specifically, the event images in the MVSEC dataset display minimal motion blur caused by slow movement, whereas the DSEC dataset showcases significant motion blur attributed to rapid motion.
Nonetheless, our model is capable of achieving near-ground-truth optical flow results on the MVSEC dataset, while E-RAFT shows inadequate generalization performances, particularly in non-event ground areas.
Additionally, our model demonstrates superior capabilities for small targets such as poles and road markings, accurately depicting their contours with improved motion compensation results.
Evaluation on EVA-FlowSet.
Qualitative results of the curve motion scene dataset EVA-FlowSet can be found in Fig. <ref>.
By visualizing the optical flow images, it can be observed that, for both fast-moving scenes and normal scenes, our model can estimate the contour of moving objects more accurately than E-RAFT <cit.>.
The magnified ROI area shows that our model obtains sharper motion compensation results in the first three rows of fast-moving scenes, while E-RAFT's motion compensation performance is comparable to our model in the last three rows of moderate-moving-speed scenes.
E-RAFT calculates the correlation between the event voxel grid before and after the start time to estimate the optical flow, which implicitly assumes that the optical flow is constant throughout the entire time.
In contrast, our method uses the SMR module iteratively in each time step to obtain continuous optical flow estimation, which is suitable for curve motion scenes.
For moderate-moving-speed scenes, even if the motion trajectory is curved, the motion could be approximated as linear due to the slow motion speed.
In moderate-moving-speed cases, E-RAFT's motion compensation result is comparable to our model. However, in fast-moving scenes, the linear motion assumption of E-RAFT is no longer valid due to the curved nature of the motion trajectory. This scenario is particularly critical for safety concerns. In contrast, our model utilizes a dense optical flow estimation framework that does not depend on the linear motion assumption. Consequently, our model significantly outperforms E-RAFT in fast-moving scenes.
§.§ Time-Dense Optical Flow Evaluation
As the DSEC flow dataset does not provide high frame-rate optical flow ground truth, we are not able to evaluate the precision of our model for temporally dense optical flow directly using the EPE metric.
We propose a new unsupervised metric called RFWL to address this issue and assess the precision of event-based optical flow.
Please refer to Sec. <ref> for more details.
Under the setting of bins=15, our model can obtain 14 consecutive optical flow estimates for every sample from events occurring within 100ms on the DSEC test dataset.
The output V_0,i represents the optical flow estimation ranging from 0ms to 100×i/14ms. In contrast, E-RAFT is limited to obtaining single optical flow estimates within the time range of 0ms to 100ms.
Due to the absence of time-dense ground truth in the DSEC dataset, we directly compare our model with E-RAFT.
We use time-dense optical flow results from our model and E-RAFT's interpolated optical flow to generate motion-compensated frames for events during the relevant time periods.
The demonstration of this process is shown in Fig. <ref>.
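A minimal sketch of this motion-compensation step is given below. It assumes events with pixel coordinates, normalized timestamps in [0, 1], and a dense flow field over the full window; the variance of the warped-event image is used here only as a simple sharpness proxy, whereas the actual evaluation uses the RFWL metric of Sec. <ref>.

import numpy as np

def motion_compensate(xs, ys, ts, flow, t_ref, H, W):
    # Warp each event to the reference time t_ref along its local flow vector.
    xi, yi = xs.astype(int), ys.astype(int)
    dt = ts - t_ref                        # signed, normalized time offset
    x_w = xs - flow[0, yi, xi] * dt
    y_w = ys - flow[1, yi, xi] * dt
    frame = np.zeros((H, W))
    xr, yr = np.round(x_w).astype(int), np.round(y_w).astype(int)
    ok = (xr >= 0) & (xr < W) & (yr >= 0) & (yr < H)
    np.add.at(frame, (yr[ok], xr[ok]), 1.0)   # accumulate warped events
    return frame

def sharpness(frame):
    # A sharper image of warped events indicates better event-flow alignment.
    return frame.var()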
We evaluate all test sequences of the DSEC-Flow dataset and provide the detailed data in Tab. <ref>.
Among the evaluated seven sequences, our model outperforms E-RAFT in terms of average RFWL in five sequences.
Since movement in the DSEC dataset is predominantly linear, direct interpolation already yields relatively accurate intermediate optical flow.
As a result, our method surpasses E-RAFT by small yet consistent margins on this dataset.
Additionally, our runtime for a single optical flow estimation is only 10% (9.2ms vs. 93ms) of E-RAFT's runtime.
We perform the same comparison on the EVA-FlowSet, which consists of sequences with curved movements. The results, depicted in Fig. <ref>, clearly indicate a substantial performance improvement achieved by our approach in comparison to E-RAFT.
Since our EVA-FlowSet does not provide ground truth optical flow, both E-RAFT and our model rely on checkpoints trained on the DSEC dataset.
Fig. <ref> presents a visual representation of the evaluation results comparing our method with E-RAFT in diverse real-world scenarios with varying motion speeds. Specifically, in two fast-moving sequences, our model demonstrates a notably superior intermediate RFWL compared to E-RAFT.
This occurs due to the nonlinearity of the optical flow caused by the curved trajectory.
Consequently, the interpolated intermediate flow obtained from E-RAFT's prediction proves notably less accurate compared to the time-dense flow predicted by our model.
In the other two scenarios with normal-speed sequences, our model and E-RAFT exhibit comparable RFWL performance.
When the movement speed is not fast enough during motion along a curved trajectory, the motion can be approximated as linear for a brief duration.
As depicted in Fig. <ref>, the qualitative results affirm that our model outperforms E-RAFT in distinguishing motion boundaries, even in scenarios with normal speed. Notably, E-RAFT tends to erroneously extend the optical flow from motion regions to non-motion regions.
However, since non-motion regions contain few events, any inaccuracies in optical flow estimation there have little impact on the motion-compensation results. Consequently, in this specific situation, the RFWL of E-RAFT is comparable to that of our model.
Overall, our proposed EVA-Flow is evidenced to be superior in time-dense flow estimation both numerically and qualitatively.
To assess the effectiveness of our time-dense optical flow estimation framework from an alternative perspective, we employ a different approach for the MVSEC dataset.
In Tab. <ref>, we present the zero-shot results of E-RAFT and our model (trained only on DSEC and directly tested on MVSEC, bins=15).
When evaluating the optical flow estimation on the MVSEC dataset using our model, we utilize the same model checkpoint but employ varying bin numbers based on different dt values.
As a result of the divergence in motion speeds between the slow-motion MVSEC dataset and the fast-motion DSEC dataset, we assign bins=4 for dt=1 and bins=13 for dt=4.
This setting is based on the number of bins in our model minus 1, which equals the number of time-dense optical flow outputs. Under this setting, the frame rate of the optical flow output with dt=4 therefore exactly matches the frame rate of the output with dt=1.
When dt=4 is used, the SMR module simply keeps iterating with the dt=1 time step until the optical flow for the entire duration has been predicted.
By analyzing the data presented in Tab. <ref>, it is apparent that our model achieves a lower endpoint error (EPE) of 0.96 at dt=4, which is less than four times the EPE at dt=1 (0.39). Interestingly, the results indicate that as we progressively iterate and generate optical flow for longer time intervals, the average per-unit-time optical flow error does not increase, but rather decreases. These findings provide clear evidence for the efficacy of our framework in estimating time-dense optical flow.
Moreover, this highlights a key benefit of the adaptable implementation of this framework.
It allows for flexible adjustment of the number of prediction bins, enabling a reduction in the event rate discrepancy per bin between training and prediction data, thus achieving higher accuracy in the deployment stage.
§.§ Ablation Study
Unified Voxel Grid.
Tab. <ref> compares the results of our framework on the DSEC test set using Voxel Grid <cit.> and our proposed Unified Voxel Grid representation. The EPE of our proposed Unified Voxel Grid is 7.3% lower than that of Voxel Grid.
This difference stems from the fact that each event bin in the Unified Voxel Grid adheres to a consistent event representation, while the representation patterns of the first and last bins in the Voxel Grid do not match those of the middle bins (refer to Fig. <ref>).
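For reference, a standard event voxel grid with bilinear (triangular) temporal weighting can be built as sketched below; in the unified variant assumed here, the same weighting pattern is applied to every bin, including the first and the last, which is the consistency property discussed above (the exact definition follows Fig. <ref>).

import numpy as np

def event_voxel_grid(xs, ys, ts, ps, bins, H, W):
    # Distribute each event to its two nearest temporal bins with triangular weights.
    voxel = np.zeros((bins, H, W))
    t = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (bins - 1)
    t0 = np.floor(t).astype(int)
    for d in (0, 1):
        b = np.clip(t0 + d, 0, bins - 1)
        w = np.clip(1.0 - np.abs(t - (t0 + d)), 0.0, 1.0)
        np.add.at(voxel, (b, ys.astype(int), xs.astype(int)), w * ps)
    return voxel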
Investigate Bins Setting.
The design of the proposed framework enables an increase of the optical flow output's frame rate by increasing the number of bins settings in a 100ms sample from the DSEC dataset.
It is essential to ensure that each bin has a sufficient number of events to encompass the motions taking place within the scene. In this study, the number of event bins varies from 6 to 31.
Tab. <ref> illustrates a consistent decrease in the final endpoint error (EPE) of event optical flow as the number of bins increases from 6 to 21. This trend is attributed to the reduction in complexity during optical flow estimation in each iteration when a higher number of bins is used. This reduction is due to a smaller amount of optical flow that needs to be estimated. However, as the number of event bins continues to increase, individual bins start to contain insufficient event information, resulting in a decline in optical flow accuracy when the number reaches 31.
Our final model (bins=21) increases the event output frame rate by 20-fold while maintaining high accuracy.
§ CONCLUSION
In this paper, we propose EVA-Flow, a learning-based model for anytime flow estimation based on event cameras.
Leveraging the unified voxel grid representation, our network is able to estimate dense motion fields bin-by-bin between two temporal slices.
By employing a novel Spatiotemporal Motion Refinement (SMR) module for implicit warp alignment, the proposed EVA-Flow overcomes the low-temporal-frequency issue of event-based optical flow datasets, achieving high-speed and high-frequency flow estimation beyond low-frequency supervised signals.
Compared to the state-of-the-art method E-RAFT, our approach achieves competitive accuracy, fast inference, arbitrary-time flow estimation, and strong generalization.
Real-time arbitrary-time flow estimation is achieved on a single GPU, showcasing the significant potential of EVA-Flow for online visual tasks. We also propose an unsupervised event-based optical flow evaluation metric, referred to as Rectified Flow Warp Loss (RFWL), to validate the time-dense optical flow predictions of our model.
In the future, we intend to seamlessly integrate EVA-Flow into event-based visual odometry to achieve fast, accurate, and high-dynamic-range pose estimation. Unsupervised domain adaptation for fine-grained visual tasks via event cameras would also be a direction of interest to enhance autonomous perception.
|
http://arxiv.org/abs/2307.05696v1 | 20230709011908 | A Personalized Reinforcement Learning Summarization Service for Learning Structure from Unstructured Data | ["Samira Ghodratnama", "Amin Beheshti", "Mehrdad Zakershahrak"] | cs.IR | ["cs.IR", "cs.AI", "cs.CL"] |
A Personalized Reinforcement Learning Summarization Service for
Learning Structure from Unstructured Data
Samira Ghodratnama
Macquarie University, Australia
W.W. Grainger, USA
[email protected]
[email protected]
Amin Beheshti
Macquarie University, Australia
[email protected]
Mehrdad Zakershahrak
Macquarie University, Australia
[email protected]
Received August 12, 2023; accepted August 12, 2023
============================================================================================================================================================================================================================================================================================================================
The exponential growth of textual data has created a crucial need for tools that assist users in extracting meaningful insights. Traditional document summarization approaches often fail to meet individual user requirements and lack structure for efficient information processing. To address these limitations, we propose Summation, a hierarchical personalized concept-based summarization approach. It synthesizes documents into a concise hierarchical concept map and actively engages users by learning and adapting to their preferences. Using a Reinforcement Learning algorithm, Summation generates personalized summaries for unseen documents on specific topics. This framework enhances comprehension, enables effective navigation, and empowers users to extract meaningful insights from large document collections aligned with their unique requirements.
Document summarization, personalized summarization, hierarchical summarization, concept-based summarization.
§ INTRODUCTION
The availability of a vast amount of information on various topics has led to a phenomenon known as information overload, where the volume of data exceeds an individual's capacity for effective processing within a reasonable timeframe.
While this abundance of data can be valuable for analytical applications, it necessitates efficient exploration tools to harness its potential benefits without succumbing to information overload, which can strain cognitive resources.
Data summaries serve as effective tools for gathering relevant information, organizing it into a coherent and manageable form, and facilitating complex question answering, insight generation, and conceptual boundary discovery <cit.>.
Automatic document summarization has been extensively studied to address the challenges of data reduction for analysis, commercialization, management, and personalization purposes.
Furthermore, users often seek information in an organized and coherent structure.
However, despite the speed of document generation and the massive collections of unstructured documents, producing personalized summaries comparable to human-written ones remains challenging.
Most previous work on automatic text summarization has focused on generating textual summaries rather than structured ones.
These approaches typically produce a single, short, general, and flat summary that applies to all users, lacking interpretability and personalization.
Moreover, they are incapable of producing more extended and detailed summaries, even if users express interest in obtaining additional information.
Additionally, the lack of structure in these summaries hampers further processing, and they heavily rely on reference or gold summaries created by humans, which are subjective and costly <cit.>.
To address these limitations, we propose Summation, a hierarchically interactive structured summarization approach that generates personalized summaries.
We emphasize the significance of the following aspects in our contribution: i) Structured summaries, ii) Personalization, iii) Interaction, and iv) The elimination of reference summaries.
Structured Summaries. Studies have demonstrated that when individuals encounter numerous documents, they seldom formulate fully-fledged summaries. Instead, they attempt to extract concepts and understand the relationships among them <cit.>.
Consequently, structured data has become crucial in various domains.
It offers a concise overview of the document collection's contents, unveils interesting relationships, and serves as a navigational structure for further exploration of the documents.
Our approach, Summation, provides summaries in the form of a hierarchical concept map, which caters to diverse user requirements by being interpretable, concise, and simultaneously providing an overview and detailed information.
Personalization. Existing summarization approaches typically generate a generic summary comprising a few selected sentences intended to meet the needs of all users. In contrast to such generic summaries, there is a dearth of user-centric summarization approaches that allow users to specify the desired content in the summaries <cit.>.
Interaction. Conventional summarization approaches treat a topic-related document set as input and generate a summary that captures the most salient aspects. However, research on this topic often neglects the usefulness of the approach for users, focusing primarily on the accuracy of the generated summaries. As a result, these approaches produce short (3-6 sentences), inflexible, and flat summaries that are the same for all users. Consequently, these approaches fail to provide more extensive summaries even when users express interest in obtaining additional information.
Reference Summaries. Traditional document summarization techniques rely on reference summaries created by humans for training their systems. However, this approach is subjective and, more importantly, resource-intensive. For instance, Lin <cit.> reported that creating summaries for the Document Understanding Conferences (DUC) required 3,000 hours of human effort. Personalized summaries eliminate the need for such reference summaries by generating specific summary for a user instead of optimizing a summary for all users.
Our Contribution.
We study the automatic creation of personalized, structured summaries, allowing the user to overview a document collection's content without much reading quickly.
The goal here is to dynamically maintain a federated summary view incrementally, resulting in a unified framework for intelligent summary generation and data discovery tools from a user-centered perspective.
The unique contribution of this paper includes:
* We provide summaries in the form of a hierarchical concept map, labeled graphs representing concepts and relationships in a visual and concise format.
Their structured nature can reveal interesting patterns in documents that users would otherwise need to discover manually.
It enables providing more information than traditional approaches within the same limit size.
It can be used as a navigator in the document collection.
Such visualization is beneficial for decision-making systems.
* We introduce and formalize a theoretically grounded method.
We propose a personalized interactive summarization approach utilizing a reinforcement learning algorithm to learn to generate user-adapted results.
To the best of our knowledge, it is the first approach to predict a user's desired structured summary.
* We provide various evidence evaluating different aspects to prove Summation's usability using human and automatic evaluation.
We divide the proposed framework into two steps.
The first step is the organizer, which structures unstructured data by building a hierarchical concept map.
The summarizer is then responsible for: i) predicting users' preferences based on the given feedback by employing preference learning and ii) learning to provide personalized summaries by leveraging reinforcement learning.
A general overview of the algorithm is depicted in Figure <ref>.
§ RELATED WORK
We categorize previous approaches into three groups including traditional approaches, structured approaches, personalized and interactive approaches discussed below.
Traditional Approaches.
A good summary should provide the maximum information about the input documents within a size limit and be fluent and natural.
Different aspects for categorizing traditional multi-document summarization approaches exist, such as the input type, the process, and the summarization goal <cit.>.
However, the main category considers the process and the output type of the summarization algorithm: extractive and abstractive approaches.
The input in both cases is a set of documents, and the output is a few sentences.
Abstractive summaries are generated by interpreting the main concepts of a document and then stating those contents in another format.
Therefore, abstractive approaches require deep natural language processing, such as semantic representation and inference <cit.>.
However, extractive text summarization selects some sentences from the original documents as the summary.
These sentences are then concatenated into a shorter text to produce a meaningful and coherent summary <cit.>.
Early extractive approaches focused on shallow features, employing graph structure, or extracting the semantically related words <cit.>.
Different machine learning approaches, such as naive-Bayes, decision trees, neural networks, and deep reinforcement learning models are used for this purpose <cit.>.
Structured Approaches.
While traditional summarization approaches produce unstructured summaries, there exist few attempts on structured summaries.
Structured summaries are defined by generating Wikipedia articles and biographies to extract the significant aspects of a topic using approaches such as topic modeling or an entity-aspect LDA model <cit.>.
Discovering threads of related documents is another category of structured summaries.
They mostly use a machine algorithm to find the threads using a supervised approach and features such as temporal locality of stories for event recognition and time-ordering to capture dependencies <cit.>.
A few papers have examined the relationship between summarization and hierarchies.
However, the concept of hierarchy in these approaches is the relation between different elements of a document.
An example is creating a hierarchy of words or phrases to organize a set of documents <cit.>.
There is a related thread of research on identifying the hierarchical structure of the input documents and generating a summary which prioritizes the more general information according to the hierarchical structure <cit.>.
However, the information unit is a sentence, and the hierarchy is based on time measures.
Concept-based multi-document summarization is a variant of traditional summarization that produces structured summaries using concept maps.
It learns to identify and merge coreferent concepts to reduce redundancy and finds an optimal summary via integer linear programming.
However, it produces a single flat summary for all users <cit.>.
Personalized and Interactive Approaches.
Recently, there have been a few attempts at personalized and interactive approaches in different NLP tasks.
Unlike non-interactive systems that only present the system output to the end-user, interactive NLP algorithms ask the user to provide certain forms of feedback to refine the model and generate higher-quality outcomes tailored to the user.
Multiple forms of feedback have also been studied, including mouse-clicks for information retrieval <cit.>, post-edits and ratings for machine translation <cit.>, error markings for semantic parsing <cit.>, and preferences for translation <cit.>.
A significant category of interactive approaches presents the output of a given automatic summarization system to users as a draft summary, asking them to refine the results without further interaction.
The refining process includes cutting, pasting, and reorganizing the essential elements to formulate a final summary <cit.>.
Other interactive summarization systems include the iNeATS <cit.> and IDS <cit.> systems that allow users to tune several parameters for customizing the produced summaries.
Avinesh and Meyer <cit.> proposed the most recent interactive summarization approach that asks users to label important bigrams within candidate summaries.
Their system can achieve near-optimal performance.
However, labeling important bigrams is an enormous burden on the users, as users have to read through many potentially unimportant bigrams.
Besides, it produces extractive summaries that are unstructured.
§ THE PROPOSED APPROACH (SUMMATION)
The ultimate goal of summarization is to provide a concise, understandable, and interpretable summary tailored to the users' needs.
However, making such a summary is challenging due to massive document collection, the speed of generated documents, and the unstructured format.
In this regard, Summation aims to make structured summaries to facilitate further processes to make it concise and easily understandable while engaging users to create their personalized summaries.
This novel framework has two components: the organizer and the summarizer.
First, we discuss the problem definition, and then each component is explained.
Problem Definition.
The input is a set of documents D={D_1,D_2, ... ,D_N} and each document consists of a sequence of sentences S=[s_1,s_2,...,s_n].
Each sentence s_i is a set of concepts {c_1,c_2, ..,c_k}, where a concept can be a word (unigram) or a sequence of words.
The output is a personalized hierarchical concept map.
The organizer and the summarizer are explained in Sec. <ref> and <ref>, respectively.
§.§ Adding Structure to Unstructured Data
The first step is to structure unstructured information by making a hierarchical concept map.
A concept map is a graph with directed edges, where nodes indicate concepts and edges indicate relations.
Both concepts and relations are sequences of related words representing a semantic unit.
Consequently, the first step in creating a concept map is to identify all concepts and relations.
Here, we propose hierarchical clustering to form the hierarchical concept map.
§.§.§ Concept and Relation Extraction.
Concepts come in different syntactic types, including nouns, proper nouns, more complex noun phrases, and verb phrases that describe activities <cit.>.
For this purpose, we used open information extraction (OIE) <cit.> through which the entities and relations are obtained directly from the text.
OIE finds binary propositions from a set of documents in the form of (con_1,R,con_2), which are equivalent to the desired concepts and relations.
For example, the output for the sentence, ‘cancer treatment is underpinned by the Pharmaceutical Benefits Scheme’, is:
Cancer treatment by the Pharmaceutical Benefits Scheme
Balancing precision and recall in extracting concepts is a challenging task.
Aiming for high precision means that only confidently identified spans are treated as mentions of concepts.
As a result, some constructions are usually missed, which lowers the recall.
On the other hand, a high recall is necessary since a concept that is never extracted can never appear in the summary.
Pushing for higher recall, however, may extract too many mentions, including false positives.
Generalizability is also essential.
A method tailored to one particular syntactic structure might produce only correct mentions on some texts while failing to cover others.
Ideally, a proper method applies to many text types.
To avoid meaningless and overly long concepts, we post-processed the OIE results such that concepts with no noun token or with more than five tokens are omitted.
Pronouns are also replaced by their original nouns.
If an argument is a conjunction, indicated by a conj-dependency in the parse tree, we split it into its conjuncts.
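A sketch of this post-processing is shown below. It assumes the OIE triples are already available as (subject, relation, object) strings and uses spaCy only for part-of-speech tags and dependency labels; the conjunction split is a simplified heuristic that breaks an argument at coordinating conjunctions.

import spacy

nlp = spacy.load("en_core_web_sm")

def is_valid_concept(span):
    # Keep concepts with at least one noun token and at most five tokens.
    doc = nlp(span)
    n_nouns = sum(tok.pos_ in ("NOUN", "PROPN") for tok in doc)
    return n_nouns >= 1 and len(doc) <= 5

def split_conjunction(span):
    # Simplified split: "cancer treatment and prevention" -> ["cancer treatment", "prevention"].
    doc = nlp(span)
    parts, current = [], []
    for tok in doc:
        if tok.dep_ == "cc" or tok.text == ",":
            if current:
                parts.append(" ".join(current))
            current = []
        else:
            current.append(tok.text)
    if current:
        parts.append(" ".join(current))
    return parts or [span]

def clean_triples(triples):
    # triples: iterable of (subject, relation, object) strings produced by an OIE system.
    cleaned = []
    for subj, rel, obj in triples:
        for s in split_conjunction(subj):
            for o in split_conjunction(obj):
                if is_valid_concept(s) and is_valid_concept(o):
                    cleaned.append((s, rel, o))
    return cleaned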
§.§.§ Concept Map Construction.
Among various extracted concepts and relations, multiple expressions can refer to the same concept while not using precisely the same words; that is, they can also use synonyms or paraphrases.
However, distinguishing similar concepts to group them is challenging and subjective.
For example, adding a modifier can completely change the meaning of a concept based on the purpose of summarization.
Consequently, grouping them may lead to propositions that are not stated in the document.
Therefore, we need to group every subset that contains mentions of a single, unique concept.
Scalability is another critical issue.
For example, pairwise comparisons of concepts cause a quadratic run-time complexity applicable only to limited-sized document sets.
The same challenges exist for relation grouping.
However, we first grouped all mentions by their concept pairs and then performed relation grouping within each pair.
The scope of this task is therefore much smaller than that of concept grouping.
Consequently, comparison-based quadratic approaches are feasible in practice.
Moreover, as the final goal is to create a summary of a defined size, the summary size significantly affects the required level of detail when grouping concepts.
This is because the distinction between different mentions of a concept might not be required, as it is a subjective task.
Ideally, the decision to merge must be made based on the final summary map’s propositions to define the necessary concept granularity.
To tackle this problem, we further propose hierarchical conceptual clustering using k-means over word embedding vectors, since word embeddings span a semantic space.
Clusters of word embeddings therefore group semantically similar words under the Euclidean metric constraint defined below.
Before defining the proposed hierarchical conceptual clustering, we review word embedding schemes used in the proposed model.
Word Embedding.
Word embedding is a learnt representation of text such that the same meaning words have similar representations.
Different techniques can be used to learn a word embedding from the text.
Word2Vec <cit.> is an example of a statistical model for learning a word embedding representation from a text corpus, utilising different architectures.
As such, we used skip-gram and bag of character n-grams in our experiments.
The skip-gram model uses the current word for predicting the surrounding words by increasing the weights of nearby context words more than other words using a neural network model.
One drawback of skip-gram is its inability to detect rare words.
In another model, authors define an embedding method by representing each word as the sum of the vector representations of its character n-grams, known as ‘bag of character n-grams’ <cit.>.
If the training corpus is small, character n-grams will outperform the skip-gram (of words) approach. [We used fastText for word embedding: https://fasttext.cc/docs/en/support.html]
Conceptual Hierarchical Clustering. Given word (concept) embeddings learnt from a corpus, {v_w_1,v_w_2,...,v_w_T}, we propose a novel recursive clustering algorithm to form a hierarchical concept map, H.
This variable denotes a set of concept maps organised into a hierarchy that incrementally maintains hierarchical summaries from the most general node (root) to the most specific summary (leaves).
Within this structure, any non-leaf summary generalises the content of its children nodes.
Hierarchical summarization has two critical strengths in the context of large-scale summarization.
First, the initial information under review is small and grows upon users’ request, so as not to overwhelm them.
Second, the parent-to-child links facilitate user navigation and drilling down for more details on interesting topics.
The hierarchical conceptual clustering minimizes the objective function Eq. <ref> over all k clusters as C={c_1,c_2,..,c_k}.
J = ∑_k=1^K∑_t=1^| T | |v_w_t- c_k|^2 +αmin _c∈ C size(c),
where c_k is the randomly selected centre of the k-th cluster, and T is the number of word vectors.
The second term is the evenness of the clusters, added to avoid clusters with small sizes.
α tunes the evenness factor, which was defined by employing a grid search over a development set.
We implemented the hierarchical clustering top-down, optimizing Eq. <ref> at each level.
After defining the clusters, we must find the concept that best represents every concept at the lower levels to ensure hierarchical abstraction.
A concise label is the desired label for each node; however, shortening mentions can introduce propositions that are not asserted by a text.
For example, the concept labelled ‘students’ can change in meaning where the emphasis is on a few students or some students.
To this end, a centre of a cluster at each level of the hierarchy was defined as a label.
The inverse distance to the cluster centres is the membership degree or the similarity to each label.
The cluster distance for a word w_t is defined as d_v_w_t.
Consequently, the membership of each word w_t in cluster c_k to its label is the inverse distance defined in Eq. <ref>.
m_v_w_t = 1/d_v_w_t = 1/|c_k - v_w_t|^2, ∀ w_t ∈ c_k
We then fine-tuned K within the 5–50 range based on the dataset size and chose the cluster number according to gap statistic value <cit.>.
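A compact sketch of one level of this procedure is given below, assuming the concept embeddings are the rows of a numpy array V. The evenness term is implemented as a penalty that discourages very small clusters, which is the stated purpose of the second term in Eq. <ref> (the sign convention here is an assumption), and the cluster-centre labels and inverse-distance memberships follow the definitions above.

import numpy as np
from sklearn.cluster import KMeans

def penalized_kmeans(V, k_candidates, alpha=0.1):
    # Pick the clustering minimizing the within-cluster SSE with an evenness penalty.
    best, best_cost = None, np.inf
    for k in k_candidates:
        km = KMeans(n_clusters=k, n_init=10).fit(V)
        labels, centres = km.labels_, km.cluster_centers_
        sse = ((V - centres[labels]) ** 2).sum()
        min_size = np.bincount(labels, minlength=k).min()
        cost = sse - alpha * min_size      # a larger smallest cluster lowers the cost
        if cost < best_cost:
            best, best_cost = (labels, centres), cost
    return best

def memberships(V, labels, centres):
    # Inverse squared distance of each concept vector to its cluster centre.
    d = ((V - centres[labels]) ** 2).sum(axis=1)
    return 1.0 / np.maximum(d, 1e-12)

def build_hierarchy(V, ids, depth=0, max_depth=3):
    if depth == max_depth or len(ids) < 2:
        return {"concepts": ids}
    labels, centres = penalized_kmeans(V[ids], range(2, min(6, len(ids)) + 1))
    node = {"children": []}
    for k in np.unique(labels):
        child_ids = ids[labels == k]
        # The concept closest to the cluster centre serves as the node label.
        closest = np.argmin(((V[child_ids] - centres[k]) ** 2).sum(axis=1))
        child = build_hierarchy(V, child_ids, depth + 1, max_depth)
        child["label"] = child_ids[closest]
        node["children"].append(child)
    return node

# Usage: tree = build_hierarchy(V, np.arange(len(V)))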
The output H can be directly used as a new dataset for other actions, such as browsing, querying, data mining process, or any other procedures requiring a reduced but structured version of data.
The hierarchical clustering can also be pruned at each level to represent a summarised concept map for different purposes or users.
Therefore, H is fed to the summariser for pruning to generate a personalized summary. Moreover, by using preference-based learning and RL, we learn users’ preferences in making personalized summaries for unseen topic-related documents, discussed in Sec. <ref>.
§.§ Summarizer
The hierarchical concept map produced in the previous step is given to the summariser to make the desired summaries for users based on their given preferences.
Therefore, the summariser consists of two phases—(i) predicting user preferences and (ii) generating the desired summary.
§.§.§ Predicting User Preference.
The first step towards creating personalized summaries is to understand users’ interests.
It can be extracted implicitly based on users' profiles, browsing history, likes or dislikes, or retweeting in social media <cit.>.
When this information is not available, interaction with users is an alternative to retrieve user's perspectives.
The user feedback can be in any form, such as mouse-click or post-edits, as explained in Section <ref>.
Preference-based interactive approaches are another form of feedback that puts a lower cognitive burden on human subjects <cit.>.
For instance, asking users to select one concept among “cancer treatment" and “cancer symptoms" is more straightforward than asking them to give a score to each of these concepts.
Therefore, in this paper, to reduce users' cognitive load, queries are in the form of concept preference.
Preference learning is a classification method that learns to rank instances based on the observed preference information.
It trains on a set of pairwise preferred items to obtain a total ranking of objects <cit.>.
H is the hierarchical concept map, where at the i-th level of the hierarchy there exist m_i nodes, each defining a label l.
L={l_11,...,l_nm_i} is the set of all labels, where l_i1 denotes the first node at the i-th level of the hierarchy, n is the number of levels, and L_i denotes the labels at the i-th level.
We queried users with a set of pairwise concepts at the same levels,
{p(l_i1,l_i2), p(l_i2,l_i3), ..., p(l_im_i-1,l_im_i)}, where p(l_i1,l_i2) is defined in Eq. <ref>.
p(l_i1,l_i2)=
1, if l_i1>l_i2
0, otherwise
where > indicates the preference of l_i1 over l_i2.
Preference learning aims to predict the overall ranking of concepts, which requires transforming concepts into real numbers through a utility function.
The utility function U: C → R is such that l_i > l_j ⟺ U(l_i) > U(l_j).
In this problem, the ground-truth utility function (U) measures each concept’s importance based on users’ attitudes, defined as a regression learning problem.
According to U, we defined the ranking function, R, measuring the importance of each concept towards other concepts based on users’ attitude.
This is defined in Eq. <ref>.
R(l_i)=∑1{U(l_i)>U(l_j)} , ∀ l_i , l_j∈L
where 1 is the indicator function.
The Bradley–Terry model <cit.> is a probability model widely used in preference learning.
Given a pair of individuals l_i and l_j drawn from some population, the model estimates the probability that the pairwise comparison l_i > l_j is true.
Having n observed preference items, the model approximates the ranking function R by computing the maximum likelihood estimate in Eq. <ref>.
J_x(w) = ∑_i=1^n [ p(l_i,l_j)log F(l_i,l_j;w) + p(l_j,l_i)log F(l_j,l_i;w) ]
where F(l) is the logistic function defined in Eq. <ref>.
F(l_i,l_j;w) = 1/(1+exp[U^*(l_j;w)-U^*(l_i;w)])
Here, U^* is the approximation of U parameterised by w, which can be learnt using different function approximation techniques.
In our problem, a linear regression model was designed for this purpose, defined as U(l;w)=w^Tϕ(l), where ϕ(l) is the representation feature vector of the concept l.
For any l_i,l_j ∈ L, the ranker prefers l_i over l_j if w^Tϕ(l_i)> w^Tϕ(l_j).
By maximizing J_x(w) in Eq. <ref> with stochastic gradient ascent, w^* = arg max_w J_x(w), the resulting w^* is used to estimate U^* and, consequently, the approximated ranking function R^*: C → R.
Thus, Summation learns a ranking over concepts and uses the ranking to generate personalized summaries.
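A sketch of this estimation step is shown below. It assumes Phi is the matrix of concept feature vectors ϕ(l) and prefs is a list of (winner, loser) index pairs collected from the user's answers; the update is stochastic gradient ascent on the pairwise log-likelihood above with the logistic link of Eq. <ref>.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_preference_ranker(Phi, prefs, lr=0.1, epochs=200):
    # Phi: (n_concepts, n_features); prefs: list of (i, j) with concept i preferred over j.
    n, d = Phi.shape
    w = np.zeros(d)
    for _ in range(epochs):
        np.random.shuffle(prefs)
        for i, j in prefs:
            p = sigmoid(Phi[i] @ w - Phi[j] @ w)      # P(l_i > l_j) under Bradley-Terry
            w += lr * (1.0 - p) * (Phi[i] - Phi[j])   # gradient of the pair's log-likelihood
    return w

def ranking(Phi, w):
    # R*(l_i): number of concepts with lower estimated utility.
    u = Phi @ w
    return np.array([(u_i > u).sum() for u_i in u])

The learned utilities u = Phi @ w then induce the ranking R^* used by the summarizer in the next step.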
§.§.§ Generating Personalized Summaries.
The summarization task is to transform the input (a cluster of documents) d into the best summary among all possible summaries, denoted Y(d), according to the learnt preference ranking function.
This is a sequential decision-making problem: starting from the root, concepts are sequentially selected and added to a draft summary. Therefore, it can be formulated as an MDP.
An MDP is a tuple (S,A,R,T), where S is the set of states, A is the set of actions, R(s,a) is the reward for performing an action (a) in a state (s), and T is the set of terminal states.
In our problem, a state is a draft summary, and A includes two types of action—either adding a new concept to the current draft summary or terminating the construction process if it reaches users’ limit size.
The reward function R returns an evaluation score in one of the termination states or 0 in other states.
A policy π(s,a): S × A → R in an MDP defines the selection of actions in state s.
The goal of RL algorithms is to learn a policy that maximises the accumulated reward.
The learnt policy trained on specific users’ interests is used on unseen data at the test time (in this problem to generate summaries in new and related topic documents).
We defined the reward as the summation of all concepts’ importance included in the summary.
A policy π defines the strategy for adding concepts to the draft summary in order to build a user's desired summary.
We defined π(y) as the probability of choosing a summary y among all possible summaries within the limit size reachable through different hierarchy paths, Y(d).
The expected reward of performing policy π, where R(y) is the reward for selecting summary y, is defined in Eq. <ref>.
R^RL(π|d)= E_y ∈ Y(d)R(y)= ∑_y∈ Y(d)π(y)R(y)
The goal of MDP is to find the optimal policy π^* that has the highest expected reward.
Therefore, the optimal policy, π^*, is the function that finds the desired summary for a given input based on user feedback (Eq. <ref>).
π^* = arg max_π R^RL(π|d) = arg max_π ∑_y ∈ Y(d) π(y) R(y)
We also used the linear temporal difference algorithm to obtain π^*.
The process is explained in Algorithm <ref>.
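A simplified sketch of this construction loop is given below. The state is the set of concepts already in the draft, an action adds a concept from the current frontier of the hierarchy or terminates at the size budget, and the per-step reward is the learned importance of the added concept (with γ=1 the return equals the terminal reward described above, i.e. the summed importance of all included concepts). The TD-style update and ε-greedy exploration stand in for the linear temporal-difference procedure of Algorithm <ref>; roots, children, phi, and utility are assumed inputs.

import numpy as np

def learn_summary_policy(roots, children, phi, utility, budget,
                         episodes=500, alpha=0.05, gamma=1.0, eps=0.1):
    # roots: top-level concepts of H; children[c]: concepts reachable from c;
    # phi(summary, c): feature vector for adding c; utility[c]: learned score R*(c).
    d = len(phi(set(), next(iter(roots))))
    w = np.zeros(d)                              # linear action-value weights
    for _ in range(episodes):
        summary, frontier = set(), set(roots)
        while len(summary) < budget and frontier:
            feats = {c: phi(summary, c) for c in frontier}
            if np.random.rand() < eps:           # explore
                a = list(frontier)[np.random.randint(len(frontier))]
            else:                                # exploit
                a = max(frontier, key=lambda c: feats[c] @ w)
            r = utility[a]
            summary.add(a)
            nxt = (frontier | set(children.get(a, []))) - summary
            done = len(summary) >= budget or not nxt
            target = r if done else r + gamma * max(phi(summary, c) @ w for c in nxt)
            w += alpha * (target - feats[a] @ w) * feats[a]   # TD-style update
            frontier = nxt
    return w

At test time, the learned weights are applied greedily (eps=0) on the hierarchy built for a new, topic-related document collection.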
§ EVALUATION
In this section, we present the experimental setup for assessing our summarization model's performance.
We discuss the datasets, give implementation details, and explain how system output was evaluated.
§.§ Datasets and Evaluation
We evaluated Summation using three commonly employed benchmark datasets from the Document Understanding Conferences (DUC) [Produced by the National Institute Standards and Technology (https://duc.nist.gov/)].
Each dataset contains a set of document clusters accompanied by several human-generated summaries used for training and evaluation.
Details are explained in Table <ref>.
Automatic Evaluation. We evaluate the quality of summaries using the ROUGE_N measure <cit.>[We run ROUGE 1.5.5: http://www.berouge.com/Pages/defailt.aspx with parameters -n 2 -m -u -c 95 -r 1000 -f A -p 0.5 -t 0].
The three variants of ROUGE (ROUGE-1, ROUGE-2, and ROUGE-L) are used.
We used the limited-length ROUGE recall-only evaluation (75 words) to avoid length bias.
Human Evaluation. For this purpose, we hired fifteen Amazon Mechanical Turk (AMT)[https://www.mturk.com/] workers to attend tasks without any specific prior background required.
Then five document clusters are randomly selected from the DUC datasets.
Each evaluator was presented with three documents to avoid any subjects' bias and was given two minutes to read each article.
To make sure human subjects understood the study's objective, we asked workers to complete a qualification task first.
They were required to write a summary of their understanding.
We manually removed spam from our results.
§.§ Results and Analysis
Summation was evaluated from different evaluation aspects, first from the organiser’s output, and then concerning the hierarchical concept map (H), which can be served individually to users as the structured summarised data.
Next, we evaluated H using both human and automatic evaluation techniques to answer the following questions:
* Do users prefer hierarchical concept maps to explore new and complex topics?
* How much do users learn from a hierarchical concept map?
* How coherent is the produced hierarchical concept map?
* How informative are summaries in the form of a hierarchical concept map?
Personalized summaries generated on test data were also evaluated from various perspectives to analyse the effect of RL and preference learning, including:
* The impact of different features in approximating the proposed preference learning.
* The role of the query budget in retrieving pairwise preferences.
* The performance of RL algorithm and the information coverage in terms of ROUGE.
* Users' perspectives on learned summaries based on their given feedback.
Hierarchical Concept Map Evaluation.
To answer the questions in Sec. <ref>, we performed three experiments.
First, within the same limit size as the reference summaries, we compared the summaries produced by three models—using ExDos, which is a traditional approach; using a traditional hierarchical approach <cit.>; and using a structured summarization approach <cit.> on selected documents (with ROUGE-1 and ROUGE-2 scores based on the reference summaries).
The average ROUGE-1 for Summation was 0.65 and ROUGE-2 was 0.48.
The structured approach <cit.> showed similar performance with ROUGE-1 and ROUGE-2 at 0.65 and 0.45, respectively.
Meanwhile, traditional hierarchical approaches <cit.> produced a ROUGE-1 of 0.27 and ROUGE-2 of 0.18.
In the same task, the percentage of covered unigrams and bigrams based on documents were also compared.
Both Summation and the structured approach covered approximately 4% unigrams and 2% bigrams, but dropped below 1% in both cases when testing the hierarchical approaches.
In the third experiment, all competitors’ outputs were rated based on three measures, including usability in exploring new topics, level of informativeness, and coherency.
Summation’s rate for the first and second criteria was 96% and 94%, respectively.
However, it was 34% for coherency.
We removed all concepts with low similarity to their parents based on a different threshold at each level.
After repeating the same experiment, the coherency rate increased to 76%.
Feature Analysis.
Before evaluating the effect of conceptual preference, it is important to explain the ground-truth concept ranker function (U) and the approximate function (U^*), indicating the importance of concepts.
To estimate the approximate function (U^*), we defined a linear model U^*(c)=W^Tϕ(c), where ϕ are the features.
To this end, a set of features (whose importance was validated in ExDos) was used, including surface-level and linguistic-level features.
Surface-level features include frequency-based features (TF-IDF, RIDF, gain and word co-occurrence), word-based features (upper-case words and signature words), similarity-based features (Word2Vec and Jaccard measure) and named entities.
Linguistic features are generated using semantic graphs and include the average weights of connected edges, the merge status of concepts as a binary feature, the number of concepts merged with a concept, and the number of concepts connected to the concept.
We defined different combinations of features with different sizes,{2,5,8,10}, starting from the most critical one.
Then, we repeated the experiments for 10 cluster documents.
We used the concepts included in the reference summary as preferences, and then evaluated the concept coverage in a concept map compared to the reference summaries using ROUGE-1 and ROUGE-2.
The results reported in Fig. <ref> show that the model’s performance improved after adding more features.
Summary Evaluation.
To avoid subjectivity in the evaluation process, we used the reference summaries as feedback.
Concepts mentioned in the reference summaries receive the maximum score from the ranking function.
We compared the summaries produced by three models, including the traditional approach (ExDos), a range of hierarchical approaches <cit.>, and a structured summarization approach <cit.>, each tested on randomly selected documents from three datasets using ROUGE-1, ROUGE-2 and ROUGE-L scores based on the references summaries.
The average results reported in Table <ref> show the supremacy of Summation in selecting specific contents.
Query Budget Size. We also measure the effectiveness of the users' query budget size in the process.
The pairwise preferences are defined based on the reference summaries and stored in a dictionary format.
We selected the query budget from {10,15,20,25,30,35}, representing the number of feedback items provided by the user.
The results are reported in Figure <ref>.
As expected, the ROUGE score increases significantly as the amount of feedback grows.
However, the marginal improvement decreases through the process.
Human Analysis.
Since the goal of Summation is to help users make their desired summary, we conducted two human experiments to evaluate the model.
In the first experiment, to assess whether users could find their desired information, they were asked to answer a given question about each topic.
Their level of confidence in answering questions and their answers were recorded.
An evaluator assessed their accuracy in answering questions.
Among the fifteen workers, 86.67% were completely confident in their answers.
However, 57% answered completely accurately.
In another task, after querying users for feedback, we asked them to select some concepts as the summary for the test data.
The model outputs were then shown to the users, and all of them confirmed their satisfaction.
In addition, an evaluator manually compared the two and reported more than 80% correlation between the outputs.
§ CONCLUSION AND FUTURE WORK
Extensive information in various formats is being produced from single or multiple simultaneous sources in different systems and applications.
For instance, data can be structured, such as data in SQL databases, unstructured stored in NoSQL systems, semi-structured like web server logs, or streaming data from a sensor.
We propose a summarization approach based on a hierarchical concept map to tackle the variety and volume of big generated data.
We trained our approach using document collections as input and employed users' feedback to generate desired summaries for users, which can be extended to other data types.
Many future directions are possible.
First, capturing users' interests is a significant challenge in providing practical personalized information.
The reason is that users are reluctant to specify their preferences as entering lists of interests may be a tedious and time-consuming process.
Therefore, techniques that extract implicit information about users' preferences are the next step for making useful personalized summaries.
Another potential direction is to use human feedback records to provide personalized summaries on new domains using transfer learning.
Moreover, we aim to use fuzzy clustering to make a hierarchical concept map.
§ ACKNOWLEDGEMENT
We acknowledge the Centre for Applied Artificial Intelligence at Macquarie University, Sydney, Australia, for funding this research.
|
http://arxiv.org/abs/2307.07247v2 | 20230714094146 | Two-Sample Test with Copula Entropy | ["Jian Ma"] | stat.ME | ["stat.ME"] |
Two-Sample Test with Copula Entropy
Jian Ma
=========================================================================================================================================================================================================================================================================================================================================================================
In this paper we propose a two-sample test based on copula entropy (CE). The proposed test statistic is defined as the difference between the CEs of the null hypothesis and the alternative. The estimator of the test statistic is proposed with the non-parametric estimator of CE, which is non-parametric and hyperparameter-free. Simulation experiments verified the effectiveness of the proposed test and compared it with three other multivariate non-parametric two-sample tests on the simulated bi-variate normal or bi-variate Gaussian copula data. Experimental results show that the proposed test works well on all three simulations and presents competitive or better performance than the other three tests.
Keywords: Copula Entropy; Two-Sample Test; Hypothesis Test; Non-Parametric Method
§ INTRODUCTION
The two-sample test is one of the most common problems in statistics. It tests, with a statistic, the hypothesis that two samples are drawn from the same probability distribution or not. There are many existing methods for this, such as the two-sample t-test, the Kolmogorov-Smirnov test, Pearson's chi-squared test, kernel-based tests, etc. However, each test has its own drawbacks. For example, the two-sample t-test requires the normality assumption; if the samples are non-normal, one needs to perform a non-parametric test. The Kolmogorov-Smirnov test <cit.> is such a non-parametric test, but it works only for uni-variate cases. Other well-known tests are the kernel two-sample test <cit.>, which defines its test statistic with kernel tricks, and the test based on energy statistics <cit.>. The drawback of the kernel-based test is its need for hyperparameter tuning, and both the kernel-based and energy statistics-based tests have high computational complexity. Recently, a test based on mutual information was proposed, which shows good performance over the other methods <cit.>. Copula is the probabilistic theory for representing dependence, and a two-sample test for tail copulas was proposed recently for financial applications <cit.>.
Copula Entropy (CE) is a kind of Shannon entropy defined with copula function <cit.>. It has been proven that mutual information is equivalent to negative CE. CE is a multivariate measure of statistical independence with several good properties, such as being symmetric, non-positive (0 iff independent), invariant to monotonic transformation, and particularly, equivalent to correlation coefficient in Gaussian cases. Previously, CE has been applied to multivariate normality test, in which a test statistic based on CE is proposed <cit.>.
In this paper, we propose a two-sample test with CE. The test statistic is defined as the difference between the CEs of the null hypothesis and the alternative. This is different from the previously proposed test based on MI, which only considers the null hypothesis in its test statistic <cit.>. Another merit of our test is that the proposed test statistic can be easily estimated from data with the non-parametric estimator of CE, which makes the proposed test both non-parametric and hyperparameter-free. We evaluated the proposed test and compared it with three other multivariate non-parametric two-sample tests, including the MI-based test, the kernel-based test, and the energy statistics-based test, on simulated bi-variate normal or bi-variate Gaussian copula data.
This paper is organized as follows: Section <ref> introduces the basic theory of CE; the proposed test statistic and its estimator are presented in Section <ref>; simulation experiments will be presented in Section <ref>; and experimental results will be presented in Section <ref>; some discussion are presented in Section <ref>; Section <ref> concludes the paper.
§ COPULA ENTROPY
Copula theory is a probabilistic theory on representation of multivariate dependence <cit.>. According to Sklar's theorem <cit.>, any multivariate density function can be represented as a product of its marginals and copula density function (cdf) which represents dependence structure among random variables.
With copula theory, Ma and Sun <cit.> defined a new mathematical concept, named Copula Entropy, as follows:
Let 𝐗 be random variables with marginals 𝐮 and copula density function c. The CE of 𝐗 is defined as
H_c(𝐱)=-∫_𝐮c(𝐮)log c(𝐮)d𝐮.
A non-parametric estimator of CE was also proposed in <cit.>, which is composed of two simple steps:
* estimating empirical copula density function;
* estimating the entropy of the estimated empirical copula density.
The empirical copula density in the first step can be easily derived with rank statistic. With the estimated empirical copula density, the second step is essentially a problem of entropy estimation, which can be tackled with the KSG estimation method <cit.>. In this way, a non-parametric method for estimating CE was proposed <cit.>.
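A minimal numpy/scipy sketch of this two-step estimator is given below; the empirical copula is obtained with rank statistics, and the entropy term with a Kozachenko-Leonenko k-nearest-neighbour estimate, used here as a simplified stand-in for the KSG machinery referenced above.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma
from scipy.stats import rankdata

def empirical_copula(x):
    # Step 1: rank-transform every column of the (n, d) sample to (0, 1].
    x = np.asarray(x, dtype=float)
    return np.column_stack([rankdata(col) / len(col) for col in x.T])

def knn_entropy(u, k=3):
    # Step 2: Kozachenko-Leonenko entropy estimate with the Chebyshev (max) norm.
    n, d = u.shape
    dist, _ = cKDTree(u).query(u, k=k + 1, p=np.inf)
    eps = np.maximum(dist[:, -1], 1e-12)        # distance to the k-th neighbour
    return digamma(n) - digamma(k) + d * np.mean(np.log(2.0 * eps))

def copula_entropy(x, k=3):
    # CE as defined above: the entropy of the copula density; non-positive, 0 iff independent.
    return knn_entropy(empirical_copula(x), k)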
§ TWO-SAMPLE TEST WITH CE
Given two samples 𝐗_1 ={X_11,⋯,X_1m}∼ P_1, 𝐗_2 ={X_21,⋯,X_2n}∼ P_2, the null hypothesis for two sample test is
H_0: P_1 = P_2,
and the alternative is
H_1: P_1 ≠ P_2.
where 𝐗_1,𝐗_2 ∈ R^d and P_1,P_2 are the corresponding probability distribution functions.
Let 𝐗 = (𝐗_1, 𝐗_2) be the pooled sample and let Y_0, Y_1 be labeling variables for the two hypotheses, with Y_1=(0_1,⋯,0_m, 1_1, ⋯, 1_n) marking which sample each observation comes from and Y_0=(1_1,⋯,1_m+n) assigning every observation the same label.
H_c(𝐗;Y_i) = H_c(𝐗,Y_i)-H_c(𝐗).
Then the test statistic for H_0 is defined as the difference between the CEs of the two hypotheses, as follows:
T_ce=H_c(𝐗,Y_0)-H_c(𝐗,Y_1).
It is easy to see that T_ce will be small if H_0 is true and large if H_1 is true.
The proposed test statistic can be easily estimated from data by estimating the two terms in (<ref>) with the non-parametric estimator of CE. Since the CE estimator is non-parametric, the proposed estimator of the test statistic can be applied without distributional assumptions. Another merit of the proposed estimator is that it is hyperparameter-free.
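Using the copula_entropy helper sketched in the previous section, the statistic of Eq. (<ref>) can be estimated directly from the two samples; the label columns Y_0 and Y_1 are simply appended as extra coordinates, exactly as in the construction above (a direct, unoptimized implementation).

import numpy as np

def two_sample_ce_statistic(x1, x2, k=3):
    x = np.vstack([x1, x2])                            # pooled sample X = (X_1, X_2)
    y1 = np.r_[np.zeros(len(x1)), np.ones(len(x2))]    # labels under H_1
    y0 = np.ones(len(x))                               # common label under H_0
    h0 = copula_entropy(np.column_stack([x, y0]), k)   # H_c(X, Y_0)
    h1 = copula_entropy(np.column_stack([x, y1]), k)   # H_c(X, Y_1)
    return h0 - h1                                     # T_ce as defined above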
§ SIMULATION EXPERIMENTS
We evaluate the proposed test statistic with simulated data and compare it with three multivariate non-parametric tests, including the MI-based test <cit.>, kernel-based test <cit.>, and energy statistics-based test <cit.>. Three simulation experiments are designed to compare them in different situations. The sample size is 500 in all the simulations. The non-parametric CE estimator was also used to estimate MI in the MI-based test. The Gaussian kernel with scale parameter δ = 1 was used for the kernel-based test.
In the first simulation, we generated a sample from a bi-variate normal distribution with zero mean (0,0) and correlation parameter ρ = 0.5. This sample is taken as a reference. We then generated 10 samples with means shifted from the zero mean of the reference sample, (i,i), i=0,⋯,9, and the same correlation parameter as the above normal distribution. The above four two-sample tests were conducted on the reference sample and each newly generated sample with shifted mean. Their test statistics were estimated from the simulated data.
In the second simulation, a reference sample was first generated from a bi-variate normal distribution with zero mean and correlation parameter ρ = 0. Then 10 samples were generated with the same mean and a correlation parameter that increases from 0 to 0.9 by step 0.1, which means an increasing difference from the reference sample. The test statistics of the four two-sample tests were then estimated from the reference sample with each generated sample.
In the third simulation, a reference sample was generated as in the second simulation. Then 10 samples were generated from a bi-variate Gaussian copula with a standard normal marginal distribution and an exponential marginal distribution with rate 0.5. The ρ of the Gaussian copula increases from 0.1 to 1 in steps of 0.1, which also means an increasing difference from the reference sample. The test statistics of the four tests were estimated from the reference sample paired with each sample generated from the Gaussian copula.
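As an illustration, the data of the third simulation can be generated as in the following sketch; the use of scipy's probability transforms is an assumption, and only the copula parameter ρ and the marginals are taken from the description above.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, rho, seed=0):
    """Bivariate Gaussian copula with standard-normal and exponential (rate 0.5) marginals."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(2), [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)                           # uniform marginals via the Gaussian copula
    x1 = stats.norm.ppf(u[:, 0])                    # standard normal marginal
    x2 = stats.expon.ppf(u[:, 1], scale=1.0 / 0.5)  # exponential marginal with rate 0.5
    return np.column_stack([x1, x2])
```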
§ RESULTS
The reference sample and the newly generated samples of the first simulation are shown in Figure <ref>, from which it can be seen that the generated samples are gradually shifted away from the reference sample. This means that H_0 is true at first and false at last. The estimated test statistics of the four tests are shown in Figure <ref>. The estimated statistic of the proposed test is close to 0 at first, becomes larger as the mean shifts, and remains stably high as the generated sample moves away from the reference sample. The MI-based test and the kernel-based test share similar results, while the energy statistics-based test presents a different result.
The estimated statistics of the four tests in the second simulation are shown in Figure <ref>. The estimated statistics of all four tests increase as the ρ of the normal distribution increases. Compared with the proposed test, the MI-based test presents a much less stable result.
The estimated statistics of the four tests in the third simulation are shown in Figure <ref>. The proposed test and the kernel-based test present increasing statistics that reflect the growing difference between the reference sample and the samples generated from the Gaussian copula as its covariance parameter increases, while both the MI-based test and the energy statistics-based test fail to present a reasonable result.
§ DISCUSSION
In this paper we propose a statistic for two-sample test based on CE. This is closely related to the MI-based test since MI is equivalent to negative CE. The difference between ours and the MI-based statistic is that our proposed statistic is defined as the difference between the CEs of the null hypothesis and the alternative while the MI-based statistic is naively defined as only the MI of the null hypothesis. Simulation experiments show that our test can not only present more stable estimation results than the MI-based test but can also present reasonable results in the situations with non-Gaussianity where the MI-based test fails. Simulation results also show that the estimated statistic of our test is close to zero while that of the MI-based test has non-zero bias when H_0 is true in the first two simulations.
We also compared our test with the kernel-based test and the energy statistics-based test on simulated data. The simulation results show that our test presents similar results to the kernel-based test in Gaussian cases and a different result from the kernel-based test in non-Gaussian cases. As for the energy statistics-based test, it presents a linearly increasing statistic and therefore fails to reflect the stable difference once the newly generated sample has shifted away from the reference sample in the first simulation. It also fails to present a result that reflects the increasing difference between the two samples in the third simulation. This means that our test is competitive with the kernel-based test and better than the energy statistics-based test.
§ CONCLUSIONS
In this paper we propose a two-sample test based on CE. The proposed test statistic is defined as the difference between the CEs of the null hypothesis and the alternative. The estimator of the test statistic is proposed with the non-parametric estimator of CE. Simulation experiments verify the effectiveness of the proposed test and compare it with three other multivariate non-parametric two-sample tests on three simulated bi-variate normal or bi-variate Gaussian copula data. Experimental results show that the proposed test works well on all three simulations and presents competitive or better performance than the other three tests.
|
http://arxiv.org/abs/2307.04305v1 | 20230710020443 | Automatic Piano Transcription with Hierarchical Frequency-Time Transformer | [
"Keisuke Toyama",
"Taketo Akama",
"Yukara Ikemiya",
"Yuhta Takida",
"Wei-Hsiang Liao",
"Yuki Mitsufuji"
] | cs.SD | [
"cs.SD",
"cs.LG",
"eess.AS"
] |
Automatic Piano Transcription with Hierarchical Frequency-Time Transformer
Keisuke Toyama, Taketo Akama, Yukara Ikemiya, Yuhta Takida, Wei-Hsiang Liao, Yuki Mitsufuji
===============================================================
Taking long-term spectral and temporal dependencies into account is essential for automatic piano transcription.
This is especially helpful when determining the precise onset and offset for each note in the polyphonic piano content.
In this case, we may rely on the capability of self-attention mechanism in Transformers to capture these long-term dependencies in the frequency and time axes.
In this work, we propose hFT-Transformer, which is an automatic music transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
The first hierarchy includes a convolutional block in the time axis, a Transformer encoder in the frequency axis, and a Transformer decoder that converts the dimension in the frequency axis.
The output is then fed into the second hierarchy which consists of another Transformer encoder in the time axis.
We evaluated our method with the widely used MAPS and MAESTRO v3.0.0 datasets, and it demonstrated state-of-the-art performance on all the F1-scores of the metrics among Frame, Note, Note with Offset, and Note with Offset and Velocity estimations.
§ INTRODUCTION
Automatic music transcription (AMT) is to convert music signals into symbolic representations such as piano rolls, Musical Instrument Digital Interface (MIDI), and musical scores <cit.>.
AMT is important for music information retrieval (MIR), and its results are useful for symbolic music composition, chord progression recognition, score alignment, etc.
Following the conventional methods <cit.>, we estimate the frame-level metric and note-level metrics as follows: (1) Frame: the activation of quantized pitches in each time-processing frame, (2) Note: the onset time of each note, (3) Note with Offset: the onset and offset time of each note, and (4) Note with Offset and Velocity: the onset, offset time, and the loudness of each note.
For automatic piano transcription, it is important to analyze several harmonic structures that spread in a wide range of frequencies, since piano excerpts are usually polyphonic.
Convolutional neural network (CNN)-based methods have been used to aggregate harmonic structures as acoustic features.
Most conventional methods apply multi-layer convolutional blocks to extend the receptive field in the frequency axis.
However, the blocks often include pooling or striding to downsample the features in the frequency axis.
Such a downsampling process may reduce the frequency resolution <cit.>.
It is worth mentioning that many of these methods use 2-D convolutions, which means the convolution is applied simultaneously in the frequency and time axes.
The convolution
in the time axis works as a pre-emphasis filter to model the temporal changes of the input signals.
Up to now, recurrent neural networks (RNNs), such as gated recurrent unit (GRU) <cit.> and long short-term memory (LSTM) <cit.>, are popular for analyzing the temporal sequences of acoustic features.
However, recently some of the works start to use Transformer <cit.>, which is a powerful tool for analyzing sequences, in AMT tasks.
Ou et al. <cit.> applied a Transformer encoder along the time axis and suggested that using Transformer improves velocity estimation.
Hawthorne et al. <cit.> used a Transformer encoder-decoder as a sequence-to-sequence model for estimating a sequence of note events from another sequence of input audio spectrograms.
Their method outperformed other methods using GRUs or LSTMs.
Lu et al. <cit.> proposed a method called SpecTNT to apply Transformer encoders in both frequency and time axes and reached state-of-the-art performance for various MIR tasks such as music tagging, vocal melody extraction, and chord recognition.
This suggests that such a combination of encoders helps in characterizing the broad-scale dependency in the frequency and time axes.
However, SpecTNT aggregates spectral features into one token, and the process in its temporal Transformer encoder is not independent in the frequency axis.
This inspires us to incorporate Transformer encoders in the frequency and time axes and make the spectral information available for the temporal Transformer encoder.
In addition, we usually divide the input signal into chunks since the entire sequence is often too long to be dealt with at once.
However, this raises a problem that the estimated onset and offset accuracy fluctuates depending on the relative position in the processing chunk.
In our observation, the accuracy tends to be worse at both ends of the processing chunk.
This motivates us to incorporate extra techniques at inference time to boost the performance.
In summary, we propose hFT-Transformer, an automatic piano transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
Its workflow is shown in Figure <ref>.
The first hierarchy consists of a one-dimensional (1-D) convolutional block in the time axis, a Transformer encoder in the frequency axis, and a Transformer decoder in the frequency axis.
The second hierarchy consists of another Transformer encoder in the time axis.
In particular, the Transformer decoder at the end of the first hierarchy converts the dimension in the frequency axis from the number of frequency bins to the number of pitches (88 for piano).
Regarding the issue of the location dependent accuracy fluctuation in the processing chunks, we propose a technique which halves the stride length at inference time.
It uses only the result of the central part of processing chunks, which will improve overall accuracy.
Finally, in Section <ref>, we show that our method outperforms other piano transcription methods in terms of F1 scores for all the four metrics.
An implementation of our method is available here[].
§ RELATED WORK
Neural networks, such as CNNs, RNNs, generative adversarial networks (GANs) <cit.>, and Transformers have been dominant for AMT.
Since Sigtia et al. <cit.> proposed the first method to use a CNN to tackle AMT, CNNs have been widely used for the methods of analyzing the spectral dependency of the input spectrogram <cit.>.
However, it is difficult for CNNs to directly capture the harmonic structure of the input sound in a wide range of frequencies, as convolutions are used to capture features in a local area.
Wei et al. <cit.> proposed a method of using harmonic constant-Q transform (CQT) for capturing the harmonic structure of piano sounds.
They first applied a 3-Dimensional CQT,
then applied multiple dilated convolutions with different dilation rates to the output of CQT.
Because the dilation rates are designed to capture the harmonics, the Frame and Note accuracy reached the state of the art.
However, the dilation rates are designed specifically for piano.
Thus, the method is not easy to adapt to other instruments.
For analysis of time dependency, Kong et al. <cit.> proposed a method that uses GRUs.
Hawthorne et al. <cit.>, Kwon et al. <cit.>, Cheuk et al. <cit.>, and Wei et al. <cit.> proposed methods that use bi-directional LSTMs for analysis.
Ou et al. <cit.> used a Transformer encoder to replace the GRUs in Kong et al.'s method <cit.>, and showed the effectiveness of the Transformer.
Usually, the note onset and offset are estimated in each frequency and time-processing frame grid, then paired as a note for note-level transcription by post-processing algorithms such as <cit.>.
However, compared to heuristically designed algorithms, end-to-end data-driven methods are often preferred.
For example, Kelz et al. <cit.> applied a seven-state hidden Markov model (HMM) to the sequence of attack, decay, sustain, and release to achieve note-level transcription.
Kwon et al. <cit.> proposed a method of characterizing the output of LSTM as a five-state statement (onset, offset, re-onset, activate, and inactivate).
Hawthorne et al. <cit.> proposed a method of estimating a sequence of note events, such as note pitch, velocity, and time, from another sequence of input audio spectrograms using a Transformer encoder-decoder.
This method performs well in multiple instruments with the same model <cit.>.
Yan et al. <cit.> proposed a note-wise transcription method for estimating the interval between onset and offset.
This method shows state-of-the-art performance in estimating Note with Offset and Note with Offset and Velocity.
However, the performance in estimating Frame and Note is worse than that of Wei et al.'s method <cit.>.
§ METHOD
§.§ Configuration
Our proposed method aims to transcribe N frames of the input spectrogram into N frames of the output piano rolls (frame, onset, offset, and velocity) as shown in Figure <ref>, where N is the number of frames in each processing chunk.
Each input frame is composed of a log-mel spectrogram having size (F, M+1+M), where F is the number of frequency bins, and M is the size of the forward margin and that of the backward margin.
To obtain the log-mel spectrogram, we first downmix the input waveform into one channel and resample them to 16 kHz.
Then, the resampled waveform is transformed into a mel spectrogram using a class from the library <cit.>.
For the transformation, we use hann window, setting the window size as 2048, fast-Fourier-transform size as 2048, F as 256, padding mode as constant, and hop-size as 16 ms.
The magnitude of the mel spectrogram is then compressed with a log function.
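The front-end described above can be sketched as follows. The paper does not name the spectrogram class, so the use of torchaudio here, as well as the clamping constant, is an assumption; the numerical settings (16 kHz, window/FFT size 2048, hop 16 ms = 256 samples, F = 256 mel bins, hann window, constant padding) follow the text.

```python
import torch
import torchaudio

def log_mel(wav, sr):
    """wav: (channels, samples) waveform tensor; returns a (1, 256, frames) log-mel spectrogram."""
    wav = wav.mean(dim=0, keepdim=True)                    # downmix to one channel
    wav = torchaudio.functional.resample(wav, sr, 16000)   # resample to 16 kHz
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=16000, n_fft=2048, win_length=2048,
        hop_length=256,                                    # 16 ms at 16 kHz
        n_mels=256,                                        # F = 256
        window_fn=torch.hann_window, pad_mode="constant")(wav)
    return torch.log(torch.clamp(mel, min=1e-8))           # log compression of the magnitude
```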
§.§ Model Architecture and Loss Functions
The model architecture of our proposed method is shown in Figure <ref>.
We first apply a convolutional block to the input log-mel spectrogram, the size of which is (B, N, F, M+1+M) where B is the batch size.
In the convolutional block, we apply a 1-D convolution in the M+1+M dimension.
After this process, the data are embedded with a linear module.
The embedded vector is then processed with the first Transformer encoder in the frequency axis.
The self-attention is processed to analyze the dependency between spectral features.
The positional information is designated as [0, 1, ..., F-1].
These positional values are then embedded with a trainable embedding.
These are processed in the frequency axis only, thus completely independent to the time axis (N dimension).
Next, we convert the frequency dimension from F to the number of pitches (P).
A Transformer decoder with cross-attention is used as the converter.
The Transformer decoder calculates the cross-attention between the output vectors of the first Transformer encoder and another trainable positional embedding made from [0, 1, ..., P-1].
The decoded vectors are then converted to the outputs of the first hierarchy with a linear module and a sigmoid function (hereafter, we call these outputs output_1st).
Regarding the loss calculation for the outputs, frame, onset, and offset are calculated with binary cross-entropy, and velocity is calculated with 128-category cross-entropy.
The losses can be summarized as the following equations:
L_bce^<m> =∑_n=0^N-1∑_p=0^P-1l_bce(y_n,p^<m>,ŷ_n,p^<m>),
L_cce^velocity =∑_n=0^N-1∑_p=0^P-1l_cce(y_n,p^velocity,ŷ_n,p^velocity),
L =L_bce^frame+L_bce^onset+L_bce^offset+L_cce^velocity,
where <m> is the placeholder for each output (frame, onset, and offset), l_bce and l_cce denote the loss function for binary cross-entropy and categorical cross-entropy, respectively, and y and ŷ denote the ground truth and predicted values of each output (frame, onset, offset, and velocity), respectively.
Although it is intuitive to apply the mean squared error (MSE) for velocity, we found that using the categorical cross-entropy yields much better performance than the MSE from a preliminary experiment.
Finally, the output of the converter is processed with another Transformer encoder in the time axis.
The self-attention is used to analyze the temporal dependency of features in each time-processing frame.
A third positional embedding made from [0, 1, ..., N-1] is used here.
Then, similar to the first hierarchy, the outputs of the second hierarchy are obtained through a linear module and a sigmoid function.
We hereafter call these outputs of the second hierarchy output_2nd.
The losses for the output_2nd are evaluated in the same way as those for output_1st.
These losses are summed with the coefficients α_1st and α_2nd as follows:
L_all=α_1stL_1st+α_2ndL_2nd.
Although both outputs are used for computing losses during training, only output_2nd is used in inference.
Chen et al. <cit.> reported that their method of calculating multiple losses outperformed the method that uses a single loss only, which hints that utilizing both output_1st and output_2nd in training has the potential to achieve better performance.
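To make the tensor flow of the two hierarchies concrete, the following is a heavily simplified PyTorch sketch (dimensions follow Section 4.2). The use of torch.nn Transformer modules, the single shared output head per hierarchy, and the handling of the positional embeddings are assumptions; the actual model is richer (e.g., separate heads and the high-resolution onset/offset targets).

```python
import torch
import torch.nn as nn

class HFTSketch(nn.Module):
    def __init__(self, F=256, P=88, N=128, M=32, C=4, K=5, Z=256, ff=512, heads=4, layers=3):
        super().__init__()
        self.P, self.N = P, N
        W = 2 * M + 1
        self.conv = nn.Conv1d(1, C, kernel_size=K, padding=K // 2)   # 1-D conv over the margin axis
        self.embed = nn.Linear(C * W, Z)
        enc = lambda: nn.TransformerEncoderLayer(Z, heads, dim_feedforward=ff, batch_first=True)
        dec = nn.TransformerDecoderLayer(Z, heads, dim_feedforward=ff, batch_first=True)
        self.freq_enc = nn.TransformerEncoder(enc(), num_layers=layers)   # first hierarchy (frequency axis)
        self.converter = nn.TransformerDecoder(dec, num_layers=layers)    # F -> P via cross-attention
        self.time_enc = nn.TransformerEncoder(enc(), num_layers=layers)   # second hierarchy (time axis)
        self.pos_freq = nn.Embedding(F, Z)    # trainable positional embeddings [0..F-1]
        self.pos_pitch = nn.Embedding(P, Z)   # decoder queries [0..P-1]
        self.pos_time = nn.Embedding(N, Z)    # [0..N-1]
        self.head1 = nn.Linear(Z, 3 + 128)    # frame/onset/offset logits + 128 velocity classes
        self.head2 = nn.Linear(Z, 3 + 128)

    def forward(self, x):                     # x: (B, N, F, 2M+1), fixed chunk length N
        B, N, F, W = x.shape
        h = self.conv(x.reshape(B * N * F, 1, W)).reshape(B * N * F, -1)
        h = self.embed(h).reshape(B * N, F, -1) + self.pos_freq.weight
        h = self.freq_enc(h)                                          # (B*N, F, Z)
        q = self.pos_pitch.weight.expand(B * N, -1, -1)               # (B*N, P, Z)
        h1 = self.converter(q, h)                                     # (B*N, P, Z)
        out1 = self.head1(h1).reshape(B, N, self.P, -1)               # output_1st logits
        h2 = h1.reshape(B, N, self.P, -1).transpose(1, 2).reshape(B * self.P, N, -1)
        h2 = self.time_enc(h2 + self.pos_time.weight)                 # (B*P, N, Z)
        out2 = self.head2(h2).reshape(B, self.P, N, -1).transpose(1, 2)  # output_2nd logits
        return out1, out2   # apply sigmoid (frame/onset/offset) and softmax (velocity) per Eqs. (1)-(4)
```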
§.§ Inference Stride
As mentioned in Section <ref>, chunk-based processing is required because the input length is limited due to system limitations, such as memory size and acceptable processing delay.
We found that the estimation error tends to increase at certain parts within each processing chunk.
This can be demonstrated by evaluating the error for each instance of time n within the chunks:
𝑒𝑟𝑟𝑜𝑟_n^<m>=1/IP∑_i=0^I-1∑_p=0^P-1(y_i,n,p^<m>-ŷ_i,n,p^<m>)^2,
where <m> is the placeholder for each output (frame, onset, offset, and velocity), and I is the number of processing chunks over the test set.
The result using our proposed model trained using the MAESTRO training set (described in Section <ref>) is shown in Figure <ref>.
Here, the error 𝑒𝑟𝑟𝑜𝑟_n^<m> is calculated using the MAESTRO test set.
In the figure, we observe a monotonic decrease for frame and a similar but much weaker trend for onset and offset. However, for velocity, no such trend can be observed.
This motivates us to use only the middle portion of each processing chunk as the output to reduce the error rate. We call this the half-stride strategy, since a 50% overlap is required between processing chunks, as shown in Figure <ref> (B).
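A minimal sketch of this inference scheme is given below, assuming the total number of frames T is at least N and a multiple of N/2; predict_chunk stands for a forward pass of the trained model on one chunk.

```python
import numpy as np

def infer_half_stride(predict_chunk, spec, N=128):
    """Run chunks with stride N/2 and keep only the central N/2 frames of each prediction."""
    half, quarter = N // 2, N // 4
    pieces = [predict_chunk(spec[:N])[:quarter + half]]       # keep the start of the first chunk
    start = half
    while start + N <= spec.shape[0]:
        pred = predict_chunk(spec[start:start + N])
        pieces.append(pred[quarter:quarter + half])           # central half only
        start += half
    pieces.append(predict_chunk(spec[-N:])[quarter + half:])  # keep the tail of the last chunk
    return np.concatenate(pieces, axis=0)
```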
§ EXPERIMENTS
§.§ Datasets
We use two well-known piano datasets for the evaluation.
The MAPS dataset <cit.> consists of CD-quality recordings and corresponding annotations of isolated notes, chords, and complete piano pieces.
We use the full musical pieces and the train/validation/test split as stated in <cit.>.
The number of recordings and the total duration in hours in each split are 139/71/60 and 8.3/4.4/5.5, respectively.
The MAESTRO v3.0.0 dataset <cit.> includes about 200 hours of paired audio and MIDI recordings from ten years of the International Piano-e-Competition.
We used the train/validation/test split configuration as provided.
In each split, the number of recordings and total duration in hours are 962/137/177 and 159.2/19.4/20.0, respectively.
For both datasets, the MIDI data have been collected by Yamaha Disklaviers concert-quality acoustic grand pianos integrated with a high-precision MIDI capture and playback system.
§.§ Model Configuration
Regarding our model architecture depicted in Figure <ref>, we set N as 128, M as 32, F as 256, P as 88, the CNN channels (C) as 4, size of the CNN kernel (K) as 5, and embedding vector size (Z) as 256.
For the Transformers, we set the feed-forward network vector size as 512, the number of heads as 4, and the number of layers as 3.
For training, we used the following settings: a batch size of 8, learning rate of 0.0001 with Adam optimizer<cit.>, dropout rate of 0.1, and clip norm of 1.0.
in is used for learning rate scheduling with default parameters.
We set α_1st and α_2nd as 1.0, which were derived from a preliminary experiment (see Section <ref>).
We trained our models for 50 epochs on MAPS dataset and 20 epochs for MAESTRO dataset using one NVIDIA A100 GPU.
It took roughly 140 minutes and 43.5 hours to train one epoch with our model for MAPS and MAESTRO, respectively.
The best model is determined by choosing the one with the highest F1 score in the validation stage.
In order to obtain high-resolution ground truth for onset and offset, we followed the method in Kong et al. <cit.>.
We set J, the hyper-parameter to control the sharpness of the targets, to 3.
Also, the label of velocity is set only when an onset is present.
We set the threshold as 0.5, which means if the onset is smaller than 0.5, the velocity is set as 0.
§.§ Inference
At inference time, we use output_2nd as the final output.
We set the threshold for frame as 0.5.
For note-wise events (onset, offset, and velocity), the outputs in each pitch-frame grid are converted to a set containing note-wise onset, offset, and velocity following Kong et al.'s Algorithm 1 <cit.> in five steps shown below:
Step 1. onset detection: find a local maximum in onset with a value at least 0.5. Then calculate the precise onset time using the values of the adjacent three frames <cit.>.
Step 2. velocity: If an onset is detected in Step 1, extract the velocity value at the frame. If the value is zero, then discard both onset and velocity at this frame.
Step 3. offset detection with offset: find a local maximum in offset with a value at least 0.5. Then calculate the precise offset time using the values of the adjacent three frames <cit.>.
Step 4. offset detection with frame: choose the frame that is nearest to the detected onset which has a frame value below 0.5.
Step 5. offset decision: choose the smaller value between the results of Step 3 and 4.
An example is shown in Figure <ref>.
The onset is 4.003, and the velocity is 61.
For offset, the direct estimation from offset is 4.043, and that estimated via frame is 4.064.
Thus, we choose 4.043 as offset.
Finally, we obtain a note with {onset: 4.003, offset: 4.043, velocity: 61} in the output.
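The five steps can be sketched per pitch as follows. The sub-frame refinement here uses a simple parabolic fit over the three adjacent frames and omits the analogous refinement for offsets, so it only approximates the formula of Kong et al.; the velocity array is assumed to already hold the argmax of the 128-way velocity output.

```python
import numpy as np

def decode_notes(frame, onset, offset, velocity, hop=0.016, thr=0.5):
    """frame/onset/offset/velocity: per-frame values of one pitch; returns a list of notes."""
    notes = []
    for t in range(1, len(onset) - 1):
        # Step 1: local maximum of the onset activation above the threshold
        if onset[t] < thr or onset[t] < onset[t - 1] or onset[t] < onset[t + 1]:
            continue
        denom = onset[t - 1] - 2 * onset[t] + onset[t + 1]
        shift = 0.5 * (onset[t - 1] - onset[t + 1]) / denom if denom != 0 else 0.0
        t_on = (t + shift) * hop
        # Step 2: velocity at the onset frame; discard the note if it is zero
        vel = int(velocity[t])
        if vel == 0:
            continue
        # Step 3: offset candidate from the offset output (first local max above thr after t)
        t_off_head = None
        for s in range(t + 1, len(offset) - 1):
            if offset[s] >= thr and offset[s] >= offset[s - 1] and offset[s] >= offset[s + 1]:
                t_off_head = s * hop
                break
        # Step 4: offset candidate from the frame output (first frame value below thr after t)
        t_off_frame = next((s * hop for s in range(t + 1, len(frame)) if frame[s] < thr), None)
        # Step 5: take the earlier of the two candidates
        cands = [c for c in (t_off_head, t_off_frame) if c is not None]
        t_off = min(cands) if cands else len(frame) * hop
        notes.append({"onset": t_on, "offset": t_off, "velocity": vel})
    return notes
```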
§.§ Metrics
We evaluate the performance of our proposed method with frame-level metrics (Frame) and note-level metrics (Note, Note with Offset, and Note with Offset & Velocity) with the standard precision, recall, and F1 scores.
We calculated these scores using library <cit.> with its default settings.
The scores were calculated per recording, and the mean of these per-recording scores was presented as the final metric for a given collection of pieces, as explained in Hawthorne et al. <cit.>.
§.§ Results
Tables <ref> and <ref> show the scores on the test sets of MAPS and MAESTRO datasets.
The numbers of parameters in these tables are taken from <cit.>.
For the MAPS dataset, our proposed method outperformed the other methods in F1 score for all metrics.
For the MAESTRO dataset, our proposed method outperformed the other methods in F1 score for Note, Note with Offset, and Note with Offset & Velocity.
Furthermore, our method with the half-stride strategy mentioned in Section <ref> outperformed the other methods in all metrics.
In contrast, the two state-of-the-art methods for MAESTRO, which are Semi-CRFs <cit.> and HPPNet-sp <cit.>, performed well only on a subset of the metrics.
The results suggest that the proposed two-level hierarchical frequency-time Transformer structure is promising for AMT.
§.§ Ablation Study
To investigate the effectiveness of each module in our proposed method, we trained various combinations of those modules using the MAPS training set and evaluated them using the MAPS validation set.
The variations are shown in Table <ref>.
In this study, we call our proposed method 1-F-D-T, which means it consists of the 1-D convolution block, the first Transformer encoder in the Frequency axis, the Transformer Decoder, and the second Transformer encoder in the Time axis.
Table <ref> shows evaluation results for each variation.
Second Transformer encoder in time axis.
To verify the effectiveness of the second Transformer encoder, we compared the 1-F-D-T and the model without the second Transformer encoder (1-F-D-N).
For the 1-F-D-N model, we use output_1st in both training and inference stages as the final output.
The result indicates that the second Transformer encoder improved Note with Offset performance, in which the F1 score is 84.42 for 1-F-D-T and 80.23 for 1-F-D-N.
This shows the effectiveness of the second Transformer encoder as it provides an extra pass to model the temporal dependency of acoustic features, which is presumably helpful in offset estimation.
Complexity of convolutional block.
To investigate how the complexity of the convolutional block affects the AMT performance, we compared the 1-F-D-T model and the model that replaces the 1-D convolutional block with a 2-D convolutional block (2-F-D-T).
Surprisingly, the result shows that the performance of the 2-F-D-T model is significantly worse than that of the 1-F-D-T model.
This is probably because the two modules working on the spectral dependency do not cohere with each other.
The 2-D convolutional block may over-aggregate the spectral information, resulting in an effectively lower frequency resolution. The Transformer encoder can then only evaluate the spectral dependency over an over-simplified feature space, causing performance degradation.
Converter.
We used a Transformer decoder to convert the dimension in the frequency axis from F to P.
In contrast, almost all of the existing methods used a linear module to achieve this.
We compared the performance of the 1-F-D-T model to a model with the Transformer decoder replaced by a linear converter (1-F-L-T).
The result indicates that the 1-F-D-T model outperformed the 1-F-L-T model in F1 score for all four metrics.
Especially, the difference in Note with Offset and Velocity is large (75.95 for the 1-F-D-T model and 69.34 for the 1-F-L-T model in F1 score).
This suggests that using a Transformer decoder as converter is an effective way of improving the performance, although the side effect is the increase of model size.
We also investigated how the coefficients for the loss functions, α_1st and α_2nd in Eqn (<ref>), affect the performance.
We investigated six pairs of coefficients of loss functions (α_1st, α_2nd) in Eqn (<ref>), i.e., (1.8, 0.2), (1.4, 0.6), (1.0, 1.0), (0.6, 1.4), (0.2, 1.8), and (0.0, 2.0), for the 1-F-D-T model.
Figure <ref> shows the F1 scores of frame, onset, offset, and velocity evaluated on the MAPS validation set in each epoch.
These results indicate that the (1.0, 1.0) pair yields the best score.
It also shows that the training converges faster when α_1st is larger than α_2nd.
Importantly, if we omit the output_1st, which is the case when training with the pair (0.0, 2.0), the training loss did not decrease much.
Therefore, the F1 score stays around 0% and thus cannot be seen in Figure <ref>.
This suggests that it is crucial to use both losses, output_1st and output_2nd in our proposed method.
§ CONCLUSION
In this work, we proposed hFT-Transformer, an automatic piano transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
The first hierarchy consists of a 1-D convolutional block in the time axis, a Transformer encoder and a Transformer decoder in the frequency axis, and the second hierarchy consists of a Transformer encoder in the time axis.
The experiment result based on two well-known piano datasets, MAPS and MAESTRO, revealed that our two-level hierarchical architecture works effectively and outperformed other state-of-the-art methods in F1 score for frame-level and note-level transcription metrics.
For future work, we would like to extend our method to other instruments and multi-instrument settings.
§ ACKNOWLEDGMENTS
We would like to thank Giorgio Fabbro and Stefan Uhlich for their valuable comments while preparing this manuscript.
We are grateful to Kin Wai Cheuk for his dedicated support in preparing our github repository.
|
http://arxiv.org/abs/2307.06196v1 | 20230712143624 | A Volume-Renormalized Mass for Asymptotically Hyperbolic Manifolds | [
"Mattias Dahl",
"Klaus Kroencke",
"Stephen McCormick"
] | math.DG | [
"math.DG",
"gr-qc",
"hep-th",
"53C21, 53C25, 53E20"
] |
We define a geometric quantity for asymptotically hyperbolic manifolds, which we call the volume-renormalized mass. It is essentially a linear combination of a renormalization of the volume and the standard ADM mass integral.
We show that the volume-renormalized mass is well-defined and diffeomorphism invariant under weaker fall-off conditions than required to ensure the renormalized volume and ADM mass integral are well-defined separately. We prove several positivity results for this mass, and we use it to define a renormalized Einstein–Hilbert action and an expander entropy in the context of Ricci flow on asymptotically hyperbolic manifolds. Furthermore, we show that the expander entropy is monotonically nondecreasing under the Ricci flow, critical points are Poincaré–Einstein metrics, and local maximizers of the entropy are local minimizers of the volume-renormalized mass.
A Volume-Renormalized Mass for Asymptotically Hyperbolic Manifolds
Mattias Dahl, Klaus Kroencke, Stephen McCormick
================================================================================================
§ INTRODUCTION
We introduce a new mass quantity for asymptotically hyperbolic (AH) manifolds that are asymptotically Poincaré–Einstein (APE). There are different notions of “asymptotically hyperbolic” throughout the literature; however, here we mean that the manifolds are conformally compact with sectional curvature converging to -1 towards the conformal boundary. Recall that an AH Einstein manifold is called Poincaré–Einstein (PE). With this in mind, we say an AH manifold is APE if Ric+(n-1)g=O(e^-δ r), for some δ>n-1/2, where n is the dimension of the AH manifold.
The mass we define here is not directly related to the asymptotically hyperbolic mass in the context of mathematical general relativity, which describes the mass of an isolated system with a negative cosmological constant, see <cit.>. Rather, the quantity studied here serves as a tool for studying scalar curvature problems and Ricci flow on AH manifolds. In a sense, it plays a similar role as the ADM mass plays in the study of geometric problems on asymptotically Euclidean manifolds. Indeed, we show that this quantity does behave like a mass should in many regards, including exhibiting some positivity properties. We also use this mass to define a renormalized Einstein–Hilbert action, and an expander entropy in the context of Ricci flow on AH manifolds.
The mass definition essentially follows from the observation that, to leading order, the ADM expression for the mass of an asymptotically Euclidean manifold coincides with (a multiple of) the renormalized volume of the manifold. In particular, a linear combination of these quantities is well-defined even in cases where neither is well-defined independently.
Given two AH manifolds (M^n,g) and (M̂^n,ĝ) with diffeomorphic conformal infinities,
the volume-renormalized mass (of g with respect to ĝ), which we introduce here, is essentially defined as
𝔪_VR,ĝ(g) =∫_∂ M (div_ĝ(φ_*g)-d tr_ĝ(φ_*g))(ν)dA
+2(n-1)(∫_M dV_g-∫_M̂ dV_ĝ),
where φ is a diffeomorphism between neighborhoods of the conformal infinities such that φ_*g-ĝ decays suitably, and the integral over ∂ M should be understood as an appropriate limit.
When the asymptotic fall-off is so fast that the boundary integral vanishes, the volume-renormalized mass is simply proportional to the renormalized volume. In this sense, we can also view the quantity as a generalization of the renormalized volume. With this in mind, we remark that a connection between local minimizers of the renormalized volume and Ricci flow has recently been observed by Hu, Ji, and Shi <cit.>. Furthermore, positivity of the renormalized volume has been proven for metrics on ℝ^3 asymptotic to the standard hyperbolic metric by Brendle and Chodosh <cit.>, which can be viewed as a positive mass theorem for the volume-renormalized mass under strong decay conditions.
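The remark at the beginning of this paragraph can be made quantitative with an elementary estimate (a sketch, assuming in addition that first derivatives of φ_*g-ĝ obey the same O(e^{-δ r}) bound, as is the case in the weighted Hölder spaces used below):

```latex
% If \varphi_*g-\hat g=O(e^{-\delta r}) with \delta>n-1, the ADM-type integrand is
% O(e^{-\delta R}) on the coordinate sphere of radius R, whose area grows like e^{(n-1)R}, so
\Bigl|\int_{\partial B_R}\bigl(\operatorname{div}_{\hat g}(\varphi_*g)
  -d\operatorname{tr}_{\hat g}(\varphi_*g)\bigr)(\nu)\,dA\Bigr|
  \le C\,e^{-\delta R}e^{(n-1)R}\xrightarrow[R\to\infty]{}0,
\qquad\text{hence}\qquad
\mathfrak{m}_{\mathrm{VR},\hat g}(g)=2(n-1)\lim_{R\to\infty}
  \Bigl(\int_{B_R}dV_g-\int_{\hat B_R}dV_{\hat g}\Bigr).
```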
Let us now summarize the main results of this paper.
Let (M^n,g) and (M̂^n,ĝ) be APE manifolds with isometric conformal boundaries that both satisfy scal+n(n-1)∈ L^1.
Then, 𝔪_VR,ĝ(g) is well defined and finite, provided that φ_*g-ĝ=O(e^-δ r) for some δ>n-1/2.
For a more detailed statement, see Theorem <ref>. The theorem follows from the observation that the volume-renormalized mass contributes to a renormalized version of the Einstein–Hilbert action for AH manifolds.
Under the assumptions of the theorem, the existence of a diffeomorphism φ with φ_*g-ĝ=O(e^-δ r) just follows from the celebrated Fefferman–Graham expansion for AH Einstein metrics at the conformal boundary, see Proposition <ref> below.
A priori, the definition of the volume-renormalized mass depends on the choice of φ. From a physical perspective however, a mass should be a coordinate-invariant object and therefore not depend on the choice of diffeomorphism φ. We are indeed able to show that this is the case for the volume-renormalized mass, provided that an additional condition holds:
Let (M^n,g) and (M̂^n,ĝ) be APE manifolds with isometric conformal boundaries that both satisfy scal+n(n-1)∈ L^1.
If the conformal boundaries are proper, 𝔪_VR,ĝ(g) does not depend on the choice of φ.
See also Theorem <ref> for a more precise statement.
In this context, we call a conformal class proper, if it is the conformal boundary of a PE manifold (M,g) such that every isometry of the conformal boundary extends to an isometry of (M,g).
It is well known that the conformal class of the round sphere is proper, as hyperbolic space does the desired job.
It has been shown recently (see <cit.>) that every conformal class of a smooth metric on a compact manifold is the conformal class of an (in general incomplete) PE manifold. It seems reasonable to believe that every such conformal class is actually proper, but we don't have a proof of this conjecture at the moment.
We are also going to show that the mass satisfies an additivity property, see Proposition <ref>. As a consequence, changing reference manifold (M^n,) causes the functional g↦𝔪_VR,(g) only to change by a constant. Summarizing, we have for every proper conformal boundary a natural mass functional which is diffeomorphism invariant and well-defined up to a constant.
As another main result, we prove a positive mass theorem for AH metrics on ℝ^3:
Let g be a complete APE metric on ℝ^3 such that g-g_hyp=O(e^-δ r) for some δ>1. Assume furthermore that scal_g+6 is nonnegative and integrable. Then 𝔪_VR,g_hyp(g) is nonnegative and vanishes if and only if g is isometric to g_hyp.
The proof of this theorem uses the aforementioned positivity result for the renormalized volume by Brendle and Chodosh <cit.>, combined with a delicate density argument that also utilizes the solution of the Yamabe problem in the AH setting.
Finally, we use the volume-renormalized mass to define an analogue of the expander entropy due to <cit.> for AH manifolds: We study a functional g↦μ_AH,(g) (where is an AH reference metric) which is monotone under the Ricci flow
∂_tg=-2_g-2(n-1)g
and whose critical points are PE. This part of the article is inspired by recent work of Deruelle and Ozuch <cit.> who use the ADM mass to define a version of Perelman's λ-functional for asymptotically locally Euclidean (ALE) manifolds which is monotone under the Ricci flow in its standard form on ALE manifolds. However, their functional is a priori only well-defined in a small neighborhood of a Ricci-flat manifold and seems in general to fail to be well-defined for every ALE metric.
In contrast, our version of the expander entropy is well-defined for every APE manifold.
Our final main result is a local positive mass theorem which is as follows.
Let (M,ĝ) be a complete PE manifold. Then the following two assertions are equivalent:
(i) ĝ is a local maximiser of μ_AH,ĝ,
(ii) ĝ is a local minimum of 𝔪_VR,ĝ among all metrics with scal+n(n-1) being nonnegative and integrable.
Furthermore, we have:
(iii) If (M,ĝ) is linearly stable and integrable, then (i) and (ii) hold.
(iv) If (i) and (ii) hold, then (M,ĝ) is scalar curvature rigid under a volume constraint.
This theorem is later divided into three separate statements, see Theorem <ref>, Theorem <ref> and Corollary <ref> in the text below.
There exist analogues of (parts of) this theorem for ALE Ricci-flat manifolds which involve the ADM mass and ALE variants of Perelman's λ-functional, see <cit.> and <cit.>. In fact, <cit.> establishes an analogue of the equivalence (i)⇔ (ii) in the ALE setting which was conjectured by Ilmanen earlier, see <cit.>. In this sense, Theorem <ref> solves an AH version of Ilmanen's conjecture.
Let us now summarize the structure of the paper. In Section <ref>, we explain some notation and make some of the definitions of this introduction more precise.
In Section <ref>, we prove Theorem <ref> as well as Theorem <ref> and discuss the renormalized Einstein–Hilbert action and its variation.
In Section <ref> we prove a 2-dimensional positive mass theorem (Theorem <ref>) analogous to the 2-dimensional positive mass theorem for the ADM mass of asymptotically conical surface.
We also establish a conformal positive mass theorem (Theorem <ref>) which is a major ingredient in proving Theorem <ref> whose proof concludes the section.
In Section <ref>, we define the expander entropy μ_AH, precisely and compute its first and second variation.
Then finally in Section <ref>, we establish Theorem <ref>.
§.§ Acknowledgements
Thanks to Eric Bahuaud, Piotr Chruściel, Marc Herzlich, Jan Metzger and Eric Woolgar for interesting and inspiring discussions.
§ NOTATION AND MAIN DEFINITIONS
Throughout the article, we use n to denote the dimension of the manifold on which we work, which is assumed to be at least 3 everywhere unless otherwise specified. It should also be remarked that we use the convention that the Laplacian is given by Δ=-div∘ d=-∘∇^2.
Let N be a compact manifold with compact boundary ∂ N. Let ρ : N → [0,∞) be a smooth boundary defining function, which means that ρ^-1(0) = ∂ N and dρ|_∂ N≠ 0. Let M = N ∖∂ N. We say that a Riemannian metric g on M is conformally compact of class C^k,α, if there is a C^k,α-Riemannian metric b on N
so that g = ρ^-2 b. In this case,
the sectional curvatures of g tend to -|dρ|^2_h at ∂ N. If |dρ|^2_h = 1 so that all sectional curvatures tend to -1 at ∂ N we say that (M,g) is asymptotically hyperbolic, or simply AH. The Riemannian manifold (N,b) is called the conformal background of (M,g). With σ:=b|_∂ N, we call ( N,[σ]) the conformal boundary of (M,g).
We also call a manifold (M,g) conformally compact if it is the complement of a compact set of a manifold that is conformally compact in the above sense.
Throughout, we will often make use of a radial function r defined by ρ =e^-r rather than working directly with the boundary defining function ρ. We will work in weighted Hölder spaces C^k,α_δ(M)=e^-δ rC^k,α(M), where C^k,α(M) denotes the standard Hölder spaces and δ∈ℝ prescribes asymptotic decay. Namely, u∈ C^k,α_δ(M) satisfies u=O(e^-δ r). We remark that our convention for δ is that used in <cit.>, which we refer to for several analytical results, and has the opposite sign to that of <cit.>, which is common in the mathematical general relativity literature. The weighted Hölder space C^k,α_δ(M) is equipped with the norm
u_k,α,δ = e^δ ru_C^k,α(M),
where ·_C^k,α(M) denotes the usual Hölder norm on M. To define weighted Hölder spaces of sections of bundles over a noncompact manifold, some care must be taken. Here we follow Lee <cit.> and refer the reader to Section 3 therein for details.
Fix a reference asymptotically hyperbolic Riemannian manifold (M, ), with conformal background N.
We define the space of Riemannian metrics on M asymptotic to as
ℛ^k,α_δ(M,)
={ g | g-∈ C^k,α_δ(S^2T^*M),g>0 },
where S^2T^*M is the bundle of symmetric bilinear forms on M.
Let (M,g) be an AH manifold with conformal background N. We say that (M,g) is asymptotic to (M ,) of order δ>0 with respect to φ, if we have
bounded and closed sets K⊂ M, K⊂M and a C^k+1,α-diffeomorphism φ: N ∖ K→N∖K
of manifolds with boundary such that
φ_*g ∈ℛ^k,α_δ(M∖K, ).
With , K,K and φ as in Definition <ref>, we also define
ℛ^k,α_δ(M,φ,)={ g | g∈ C^k,α(S^2T^*K), φ_*g ∈ℛ^k,α_δ(M∖K, ),g>0 }
Given an AH manifold (M,g) asymptotic to (M ,) with diffeomorphism φ, we now choose the boundary defining functions on N and N so that ρ = ρ∘φ
and r = r∘φ on N∖ K. Define the sets
B_R={x∈M|r(x)<R }⊂ M, B_R={x∈M|r(x)=R }⊂M,
and
B_R={x∈M|r(x)<R}⊂M, B_R={x∈M|r(x)=R}⊂M.
Note that φ( B_R)=B_R for R sufficiently large.
Now we define the quantities
𝔪^φ_ADM,ĝ(g,R)
= ∫_∂B̂_R(div_ĝ(φ_*g)-d tr_ĝ(φ_*g))(ν_ĝ) dA_ĝ,
RV^φ_ĝ(g,R)
= ∫_B_R dV_g-∫_B̂_R dV_ĝ,
where ν_ĝ is the outward unit normal to ∂B̂_R in (M̂,ĝ).
Let (M̂,ĝ) be asymptotically hyperbolic.
We define the volume renormalized mass 𝔪^φ_VR,ĝ(g) of g with respect to ĝ and φ as
𝔪^φ_VR,ĝ(g)=
lim_R →∞( 𝔪^φ_ADM,ĝ(g,R)+2(n-1)RV^φ_ĝ(g,R) ).
Theorem <ref> in the following section demonstrates that this quantity is well-defined under the assumptions that
φ_*g ∈ℛ^k,α_δ(M∖K, ) for some δ>n-1/2, where is APE (in the sense of Definition <ref> below)
and the scalar curvature satisfies _+n(n-1)∈ L^1(M). For the moment, we include the superscript φ to denote the dependence on the choice of diffeomorphism, however we demonstrate in the following section that the volume-renormalized mass is in fact invariant under suitably regular diffeomorphisms and a natural assumption for the conformal boundary. Later, we will omit the reference to φ for this reason.
From the work of Fefferman and Graham <cit.>, it is well-known that if an AH manifold (M,g) is PE, it has a very particular expansion at the conformal boundary. More precisely, given a boundary defining function ρ and the corresponding conformal background b, we can write it as
g=ρ^-2b=ρ^-2(dρ^2+σ_ρ),
where σ_ρ is a family of metrics on ∂ N which, if n is even
admits the asymptotic expansion
σ_ρ=σ_0+ρ^2σ_2+ρ^3σ_3+… +ρ^n-2σ_n-2+ρ^n-1σ_n-1+O(ρ^n)
and if n is odd, admits the asymptotic expansion
σ_ρ =σ_0+ρ^2σ_2+ρ^3σ_3+… +ρ^n-3σ_n-3
+ρ^n-1(σ_n-1+log(ρ)σ̃_n-1)+O(ρ^nlog(ρ)).
Here, σ_0=b|_∂ N and for 2≤ i≤ n-2, the tensors σ_i are uniquely determined by σ_0. In the odd case, the tensor σ̃_n-1 is also determined by σ_0. Together with the first undetermined term σ_n-1, the metric σ_0 determines all the other terms of the asymptotic expansion.
We say an AH manifold (M,g) of class C^k,α, k≥ 2 is asymptotically Poincaré–Einstein (APE) of order δ>0, if |Ric_g+(n-1)g|_g∈ C^k-2,α_δ(M).
Let (M,g) and (M,g) be C^k,α-asymptotically hyperbolic manifolds with isometric conformal boundaries. Fix a weight δ∈ (0,n-1) satisfying δ≤ k+α.
Then, if both manifolds are APE of order δ, (M,g) is asymptotic to (M,g) of order δ.
This follows almost directly from the Fefferman–Graham expansion discussed above.
Fix a boundary defining function ρ and write g as in (<ref>)
The family σ_ρ is now a C^k,α-family of metrics on ∂ N with σ_ρ|_ρ=0=σ_0.
By lowering the regularity, we may assume that δ=k+α. Using the APE condition, the arguments of Bahuaud, Mazzeo and Woolgar <cit.>, then directly show that
σ_ρ=σ_0+ρ^2σ_2+… +ρ^kσ_k+O(ρ^k+α),
where the tensor fields σ_2, …, σ_k are uniquely determined by σ_0.
Now fix a boundary defining function ρ on M such that σ_0:=ρ^2|_∂N is isometric to σ_0. Let φ:N∖ K→N∖K a diffeomorphism such that ρ∘φ=ρ and (φ|_∂ N)^*σ_0=σ_0. Repeating the above arguments, we get
φ^*=ρ^-2(dρ^2+σ_ρ)
with
σ_ρ=σ_0+ρ^2σ_2+… +ρ^kσ_k+O(ρ^k+α),
so that φ^*-g∈ O(ρ^k+α), or equivalently, φ_*g-∈ O(ρ^k+α), as desired.
We are particularly interested in the case where δ>n-1/2 and k+α≥δ, due to Proposition <ref> below.
From now on, we say an AH manifold (M,g) is asymptotically Poincaré–Einstein (APE) (without reference to order or regularity) if it is
of order δ>0 for some δ>n-1/2 and
of class C^k,α, where k≥ 2 and k+α≥δ.
A similar, but stronger notion of APE manifolds for a different purpose is given in <cit.>, where they use the weight δ=n.
We denote the Banach manifold of all APE metrics on M by
ℛ^k,α_δ(M).
By Proposition <ref>, it is for a given reference APE manifold (M,) equal to the union
⋃_φℛ^k,α_δ(M,φ,)
over all diffeomorphisms φ which are as in Definition <ref>.
§ WELL-DEFINEDNESS AND COORDINATE INVARIANCE OF THE MASS
§.§ A renormalized Einstein–Hilbert action
We utilize an AH version of the Einstein–Hilbert action to establish well-definedness of the volume-renormalized mass as follows.
Let (M̂,ĝ) be an APE manifold with
scal_ĝ+n(n-1)∈ L^1. Then for (M,g) asymptotic to (M̂,ĝ) of order δ>n-1/2, the limit
S^φ_ĝ(g) = lim_R →∞(
∫_B_R( scal_g + n(n-1) ) dV_g
-𝔪^φ_ADM,ĝ(g,R)
- 2(n-1)RV^φ_ĝ(g,R)
)
is well-defined and finite, where φ is the diffeomorphism from Definition <ref>. In particular, 𝔪^φ_VR,ĝ(g) is well-defined and finite if scal_g + n(n-1)∈ L^1(M).
We call the functional g↦ S^φ_(g) the
renormalized Einstein–Hilbert action.
Fix some large R_0 so that K⊂ B_R_0 and for R>R_0 we write A_R=B_R∖ B_R_0 and A_R=ϕ(A_R). This allows us to work with φ_* g on A_R.
We now note that ^φ_ ADM,g(g,R) can be expressed via the divergence theorem in terms of the linearization of scalar curvature, as
^φ_ ADM,g(g,R)
=
∫_B_R(div_(div_(φ_*g))+Δ_(_(φ_*g))) _
=
∫_A_R(D_[h]+⟨ h,_ ⟩) _+C,
where h=φ_*g- and C is the finite contribution from the integral over B_R_0. We will use C throughout the proof to denote such a finite term independent of R, but the exact quantity may vary line to line.
By using a Taylor expansion of the scalar curvature we can write this as
^φ_ ADM,g(g,R)
= ∫_A_R_φ_*g-_
+ ⟨ h,_⟩ + o(e^-(n-1)r) _+C,
where we have used the fact |h|_^2=o(e^-(n-1)r), since δ>n-1/2. Similarly, we Taylor expand the volume form
_ϕ_*g
= _ + 1/2_(h) _
+ o(e^-(n-1)r)_
so that we can write
RV^φ_g(g,R)
= ∫_A_R1/2_(h) _
+ o(e^-(n-1)r) _g + C.
Now we bring everything together to obtain
S^φ_(g)
=
lim_R →∞(
∫_A_R( _ -_φ_*g) -⟨ h,_ +(n-1)⟩ _
+∫_A_R(_φ_*g+n(n-1)) _φ_*g) + C
=
lim_R →∞(
∫_A_R( _ +n(n-1) ) -⟨ h,_+(n-1)⟩ _
+1/2∫_A_R(_φ_*g+n(n-1) )_(h) _) + C,
where we have again used the Taylor expansion of the volume form. Since h, _g+n(n-1), _+(n-1) are of order o(e^-(n-1)r/2), and _g+n(n-1) ∈ L^1, we are done.
Due to Definition <ref> and Proposition <ref>, we immediately obtain:
Let (M,g) and (M,g) be APE manifolds with isometric conformal boundaries and assume that _+n(n-1)∈ L^1. Then, S^φ_(g) is well defined and finite, where φ is a diffeomorphism in the sense of Definition <ref>. If furthermore _g+n(n-1)∈ L^1, then ^φ_ VR,g(g) is well-defined and finite.
We will from now on frequently use Corollary <ref> without giving an explicit reference to it. Furthermore, for the sake of convenience, we will say that an APE manifold has integrable normalized scalar curvature if +n(n-1)∈ L^1.
Although we define the volume renormalized mass with the help of large coordinate balls and their boundaries, this is simply for convenience. The proof of Theorem <ref> demonstrates that the volume-renormalized mass is independent of the choice of exhaustion of M. In particular, it is independent of the pair of boundary defining functions on M and M, as long as these are φ-compatible.
Note that under the conditions of Theorem <ref>,
^φ_ VR,g(g)=±∞ if ±(_g + n(n-1))≥0 and _g + n(n-1)∉ L^1(M). In particular, this is independent of the chosen diffeomorphism φ.
The asymptotically hyperbolic mass <cit.> requires faster falloff than in Theorem <ref> in order to be defined. More precisely,
the metric needs to be asymptotic to the reference metric at a rate of O(e^-δ r) with δ>n/2 and that the scalar curvature satisfies e^r(+n(n-1))∈ L^1.
The following proposition demonstrates that the volume-renormalized mass exhibits a kind of additivity property.
Let (M,g),(M,g),(M,) be APE manifolds
with isometric conformal boundaries and integrable normalized scalar curvature. Then we have
𝔪^φ^-1∘φ_VR,g(g)
=
𝔪^φ_VR,(g)
-𝔪^φ_VR,(g).
where φ:N∖ K→N∖K and φ:N∖K→N∖K, respectively, are the associated diffeomorphisms in the sense of Definition <ref>.
One can quickly see that the relative renormalized volumes satisfy this property so we need only to focus on the ADM boundary terms at infinity. That is, we need to look at the difference ^φ_ ADM,g(g,R)-^φ_ ADM,g(g,R) in the limit R→∞, which is given by
∫_B_R(div_(φ_*g)-d_(φ_*g)-div_(φ_* g )+d_(φ_*g))(ν_)_g
=
∫_B_R(div_(φ_*g-φ_* g)-d_(φ_*g-φ_*g))(ν_)_g
=
∫_B_R(div_φ^*((φ^-1∘φ)_*g-g)-d_φ^*((φ^-1∘φ)_*g-g))(ν_φ^*)_φ^*g.
We consider the divergence term separately first, as both terms can be treated almost identically. Using our decay conditions and the fact that the difference in Christoffel symbols for (φ^-1∘φ)_*g and g is o(r^-(n-1)/2), we have
div_φ^*((φ^-1∘φ)_*g-g) =div_g((φ^-1∘φ)_*g-g)+o(r^-(n-1))
=div_g((φ^-1∘φ)_*g)+o(r^-(n-1)).
By a similar argument we have
d_φ^*((φ^-1∘φ)_*g-g)=d_g((φ^-1∘φ)_*g)+o(r^-(n-1)),
and then using the asymptotic behaviour of the volume form _φ^*g and unit normal ν_φ^*, we can write the boundary integral on B_R as
∫_B_R(div_g((φ^-1∘φ)_*g)-d_g((φ^-1∘φ)_*g))(ν_g)_g+o(1)
=
^φ^-1∘φ_ ADM,g(g,R)+o(1).
The result follows.
An additivity property of this nature is common for mass-type quantities such as what we study here. For a recent detailed treatment the reader is referred to <cit.>.
Along the lines of Proposition <ref>, one also proves
Let (M,g),(M,g),(M,) be APE manifolds
with isometric conformal boundaries and assume that (M,g) and (M,) have integrable normalized scalar curvature.
Then the renormalized Einstein–Hilbert action satisfies the additivity property
S^φ^-1∘φ_g(g)
= S^φ_(g)
+𝔪^φ_VR,(g),
where φ:N∖ K→N∖K and φ:N∖K→N∖K, respectively, are the associated diffeomorphisms in the sense of Definition <ref>.
* Given two APE manifolds (M,) and (M, g) with
isometric conformal boundaries and integrable normalized scalar curvature, Proposition <ref> says that 𝔪^φ_VR,(g)=-𝔪^φ^-1_VR,g(). That is, for any positivity statement to hold true there must be a special choice of reference metric. It is natural to consider reference metrics that are Einstein, which by a local existence result of Gursky and Székelyhidi <cit.>, can always be found at least in a neighborhood of the conformal boundary. However, due to the renormalized volume term, a reference metric that is only defined in a neighborhood of the conformal boundary will not be optimal. An interesting related question is whether for an arbitrary fixed APE reference manifold (M,) the infimum of _ VR, is realised, or even finite, over the set of complete APE manifolds (M,g) with _g+n(n+1) being nonnegative and integrable whose conformal boundary is isometric to the one of (M,)
* Proposition <ref> indicates that the specification of the reference metric , with respect to which the volume-renormalized mass is defined, simply determines the ground state for the mass. That is, changing the reference metric simply changes the mass by a constant. In particular, the variational properties of the mass are independent of the choice of reference metric.
Under the hypotheses of Theorem <ref> that guarantee S^φ_ĝ(g) is well-defined, its first variation is given by
D_gS^φ_ĝ[h]=
-∫_M ⟨Ric_g-1/2 scal_g· g-1/2(n-1)(n-2)g,h⟩_g dV_g.
In particular, S_ĝ^φ is an analytic functional on the space ℛ^k,α_δ(M,φ,ĝ) for any δ>n-1/2.
This is a straightforward computation. We begin by computing the contribution from the scalar curvature, to obtain
∫_B_RD_g[h]_g=∫_B_Rdiv_g(div_g(h))+Δ_g( h)-⟨ h, _ g ⟩_g_g.
Then using the divergence theorem and the fact that g∈ℛ_δ(M, ) for δ>-(n-1)/2, we have for sufficiently large R,
∫_B_Rdiv_g(div_g(h))+Δ_g(_g h)_g
=
∫_ B_R( div_g(h) - d_g(h) )(ν)_g
=
∫_ B_R( div_ϕ^*(h) - d_ϕ^*(h) )(ν)_ϕ^*+o(1)
=
∫_B_R( div_(φ_*h) - d_(φ_*h) )(ν)_+o(1),
which is exactly the first variation of ^φ_ ADM,g(g,R) up to the o(1) terms. One can quickly check that the remaining terms combine to yield (<ref>).
(i) If n=2, the functional g↦ S^φ_ĝ(g) is constant on ℛ^k,α_δ(M,φ,ĝ) for δ>1/2.
(ii) If n≥ 3, g is a critical point of S^φ_ĝ if and only if Ric_g=-(n-1)g.
Part (ii) of Corollary <ref> states that the action S^φ_ is an appropriate version of the Einstein–Hilbert action for AH manifolds. We briefly digress to compare this action to the approach that the AdS/CFT literature uses to renormalize the Einstein–Hilbert action.
We first note that it is well-known that in the case of AH manifolds one should add a boundary term at infinity to the Einstein–Hilbert action in order to ensure that it gives the correct equations of motions. Specifically, the (unrenormalized) Einstein–Hilbert action is given by <cit.>
I(g)=∫_M _g+(n-1)(n-2) _g+2lim_R→∞∫_ B_R H dA_g.
Although this action does generate the correct equations of motion – that is, formally the critical points of the action are PE – the action is not finite in general. A standard technique to obtain something finite from the action is to restrict to PE metrics (that is, “on shell”) and simply subtract all of the divergent terms in the Fefferman–Graham expansion, which are entirely determined by the conformal boundary (see, for example, <cit.>). In the physics literature this is often referred to as holographic renormalization, referring to the AdS/CFT correspondence.
An older approach to this renormalization, which is more closely related to the renormalized Einstein–Hilbert action presented here, is to subtract the value of the action evaluated on a fixed reference manifold (see, for example, <cit.>). That is, one can obtain a finite value by essentially subtracting I() from the action, where is a fixed reference PE metric and cancelling out the dominant terms via an appropriate limiting process. Since we obtain the same equations of motion, one should expect that S(g) differs from this renormalized action by at most a constant.
To make this comparison, note that the integral of the difference of the total mean curvature with respect to each metric is essentially the Brown–York quasi-local mass. Then one can check that the difference S_(g)-(I(g)-I()) is the difference between the Brown–York term and ADM mass expression coming from S(g). Now recall that in the asymptotically Euclidean case, this difference is identically zero. That is, the Brown–York mass converges to the ADM mass in an appropriate limit. It would be therefore interesting to check if this remains the case here, so that S_(g) agrees with (I(g)-I()) (regularized via a suitable limiting process), or at most differs by a term independent of g as expected. However, this lies outside of the scope of the current work.
§.§ Diffeomorphism invariance
In this subsection, we demonstrate that the volume-renormalized mass is indeed a diffeomorphism-invariant quantity.
A first indication is already given by the following lemma, which requires slightly more regularity than we have required previously. For the moment, we assume that the regularity of the involved manifolds, C^k,α, satisfies k≥3 in addition to k+α≥δ.
Let (M,g) and (M,) be APE manifolds with isometric conformal boundaries and assume that (M,) has integrable normalized scalar curvature. Then for all X∈ C^k,α_δ(TM), we have
D_gS^φ_(ℒ_Xg)=0,
where φ:M∖ K→M∖K is a diffeomorphism in the sense of Definition <ref>.
Since ℒ_X g∈ C^k-1,α_δ, Proposition <ref> gives us
D_gS_^φ(ℒ_X g)
=
-∫_M ⟨_ g-1/2_g· g-1/2(n-1)(n-2)g,
ℒ_X g⟩_g _g.
Integrating by parts we get a boundary term at infinity which vanishes due to the decay conditions. This leaves us with
D_gS_^φ(ℒ_X g)
=
-2∫_M div( _ g-1/2_g· g)(X) _g,
which is identically zero by the contracted Bianchi identity.
An immediate consequence is the following result.
Let (M,g), (M,) and φ be as in Lemma <ref>. Additionally let X∈ C^k,α_δ(TM) and ψ_t be the group of diffeomorphisms generated by X. Then,
S^φ_ĝ((ψ_t)_*g)=S^φ_(g)
for all t∈. If in addition _g+n(n-1)∈ L^1, we obtain
^φ_ VR,((ψ_t)_*g)=^φ_ VR,(g)
for all t∈.
This corollary asserts that the mass does not change if we modify the C^k,α-diffeomorphism φ: N ∖ K→N∖K slightly by a diffeomorphism on N, generated by a vector field with sufficiently fast falloff. This is does not give a completely satisfying answer to the question of diffeomorphism invariance. For example, it excludes Lorentz boosts on hyperbolic space, which are generated by vector fields that do not decay towards the conformal boundary.
From now on, we may again allow C^k,α, with k≥2 and k+α≥δ.
The following lemma is entirely straightforward, however we state it for completeness.
Let (M,g) and (M,) be APE manifolds with integrable normalized scalar curvature and isometric conformal boundaries
and let ψ:M→ M be a diffeomorphism. Then
_ VR,^φ(g)=_^φ∘ψ^-1(ψ_*g),
where φ is the usual defining diffeomorphism as in Definition <ref>. Similarly, if χ:M→M is a diffeomorphism,
_ VR,^φ(g)=_χ_ *^χ∘φ(g).
The renormalized volume clearly is unchanged by a diffeomorphism and the ADM term is unchanged because (φ∘ψ^-1)_*(ψ_* g)=φ_* g. The other argument is analogous.
In the following, we are going to prove a much stronger type of diffeomorphism invariance under a natural condition.
Let (M,g) be a (possibly incomplete) PE manifold with conformal boundary (∂N,[σ]). We call (M,g) proper if every conformal isometry ψ_∂∈Iso(∂N,[σ]) extends to a C^k+1,α-diffeomorphism ψ:N→N
that restricts to an isometry of (M,g). We call a conformal class (∂N,[σ]) proper if it is the conformal boundary of a proper PE manifold.
The key technical step for this diffeomorphism invariance is provided by the following lemma.
Let (M,g) be a proper PE manifold. Then,
for every pair of open neighborhoods U,V of ∂N in N and
every C^k+1,α-diffeomorphism φ:U→ V with φ_*g∈ℛ^k,α_δ(V,), we have that
m^φ_ VR,()=0.
Because φ restricts to a C^k+1,α-diffeomorphism on the boundary and φ_*g-g∈ C^k,α_δ, we get that φ|_∂N is a conformal isometry on the boundary. Because (M,g) is proper,
there exists a C^k+1,α diffeomorphism ψ on N with ψ|_∂N=φ|_∂N which restricts to an isometry of (M,). The diffeomorphism χ=φ^-1∘ψ:ψ(V)→ U satisfies
χ_*g-g∈ C^k,α_δ and
χ|_∂N=id_∂N. Furthermore,
by Lemma <ref> and since ψ is an isometry we have
m^φ_ VR,()
=m^φ∘ψ^-1_ VR,(ψ_*)=m^χ^-1_ VR,().
Let us extend ψ to a C^k+1,α-diffeomorphism on M, again denoted by ψ. By definition of the mass, this does not change its value and by Lemma <ref>, we have
m^χ^-1_ VR,()=m^Id_ VR,(χ^*).
It therefore remains to show that m^Id_ VR,(χ^*)=0.
By definition of the mass and diffeomorphism invariance of the volume, changing the diffeomorphism χ inside a large ball B_R does not change the mass. By deforming χ on a bounded subset, we may therefore assume that χ=Id inside such a ball. Choosing R sufficiently large will make the difference χ-id arbitrarily small (measured with respect to the conformal background metric).
Provided that R is chosen large enough, we therefore find a C^k+1,α-vector field X on N such that the flow χ^X_t generated by X satisfies χ^X_1=χ. Note that X vanishes on ∂N and B_R.
For convenience, we abbreviate χ_t=χ^X_t. The family t↦_t=χ_t^* then connects and χ^*.
By Taylor expansion in time,
χ^*g-g
=∫_0^1d/dtg_tdt
=∫_0^1d/dt (χ_t^*g)dt
=∫_0^1χ_t^*(ℒ_Xg)dt.
Our goal is now to determine the weighted regularity of d/dtg_t.
For this purpose, let ρ be a boundary defining function for ∂N and denote by y the coordinates on ∂N. By Taylor expansion at ρ=0 and because X vanishes on the boundary, we get that
X=aρ·∂_ρ+ ρ· Y(ρ)+O(ρ^2),
where Y(ρ) is a ρ-dependent family of vector fields on ∂N and a is a function on the boundary. With respect to the conformal background metric b, we have
|∇^(l)_X|_=ρ^l-1|∇^(l)_X|_b.
To compare covariant derivatives of and h, we note that
Γ()_ij^l-Γ(b)_ij^l
=
-2ρ^-1(∂_iρ·δ_j^l+∂_jρ·δ_i^l-∂_mρ·b^lmb_ij)
=
∇^(1)_hρ*ρ^-1.
An induction argument then shows that
∇^(l)_X-∇^(l)_bX
=
∑_l_0+… +l_r+r=l(∇^(l_1+1)_hρ/ρ*… *
∇^(l_r+1)_hρ/ρ)*∇^(l_0)_hX.
From this, we find
|∇^(l)_X|_=ρ^l-1|∇^(l)_X|_b≤ C∑_m=0^lρ^m-1|∇^(m)_bX|_b,
which implies
X_C^k+1,α()≤∇_b^(1) X_C^k,α(b)+ρ^-1X_C^0(b).
Since X is C^k+1,α-regular and vanishes on the boundary, we have |X|_b=O(ρ) so that X_C^k+1,α()<∞. Therefore, the family of metrics t↦_t=χ_t^* is a smooth family in ℛ^k,α_0(M, ). In particular, the C^k,α-norms of the metrics _t, t∈ [0,1] are uniformly equivalent and d/dt_t=χ_t^*(ℒ_Xg)∈ C^k,α(χ_t^*)=C^k,α().
In the following we are improving the weight of this regularity.
By (<ref>), we get constants a_-<a_+, depending on the bounds of the function a such that
e^a_-tρ(x)≤ρ(χ_t(x))≤ e^a_+tρ(x).
for all x∈ P and t∈.
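Let us briefly indicate how these bounds follow from the expansion of X: since dρ(X)=aρ+O(ρ^2), we have
d/dtlogρ(χ_t(x))=(dρ(X)/ρ)(χ_t(x))=a(χ_t(x))+O(ρ(χ_t(x))),
which is bounded from above and below near the boundary by constants a_+ and a_-, so integrating in t gives the two-sided exponential estimate.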
We therefore find uniform constants C_1,C_2>0 such that
C_1ρ≤χ_t^*ρ≤ C_2ρ,
for all t∈ [0,1].
Thus for any weight δ∈ and t∈[0,1], the weighted spaces C^k,α_δ generated by the metrics _t are also uniformly equivalent and we have that h∈ C^k,α_δ(S^2T^*M,) (or equivalently, χ_t^*h∈ C^k,α_δ(S^2T^*M,χ_t^*)) if and only if χ_t^*h∈ C^k,α_δ(S^2T^*M,).
Because χ^*g-g∈ C^k,α_δ, the expansion (<ref>) therefore necessarily implies that d/dt_t∈ C^k,α_δ(S^2T^*M,) for t∈ [0,1].
Thus, _t is a smooth family of isometric PE metrics in C^k,α_δ(S^2_+T^*M,) connecting and χ^*.
The function t↦ S^Id_g(χ_t^*g) is constant, as it is evaluated along a family of critical metrics. In addition,
S^Id_g(χ_t^*g)=-m^Id_ VR,g(χ_t^*g) because all χ_t^*g have constant scalar curvature -n(n-1).
Therefore,
^Id_ VR,(χ^*)=^Id_ VR,()=0,
which was to be proven.
Let (M,g) and (M,) be APE manifolds with integrable normalized scalar curvature and isometric conformal boundaries. Assume that the conformal boundaries are proper. Then, the definition of ^φ_VR,(g) is independent of the choice of C^k+1,α-diffeomorphism φ.
Let φ_1:N∖ K_1→N∖K_1 and φ_2:N∖ K_2→N∖K_2 be two C^k+1,α-diffeomorphisms with the property that
(φ_1)_*g∈ℛ^k,α_δ(M∖K_1,),
(φ_2)_*g∈ℛ^k,α_δ(M∖K_2,).
Our goal is to show that
^φ_1_VR,(g)=^φ_2_VR,(g).
Let (M,g) be a proper PE manifold with the same conformal boundary as (M,g). Let furthermore
χ:N∖K→N∖K be a C^k+1,α-diffeomorphism so that
χ_*∈ℛ^k,α_δ(M∖K,g).
Assume without loss of generality that K⊂K_1∩K_2 so that the diffeomorphisms ψ_i=χ∘φ_i:N∖ K_i→N∖K_i, i=1,2 are defined.
We then get
(ψ_1)_*g-g ∈ C^k,α_δ(S^2T^*(M∖K),g)
(ψ_2)_*g-g ∈ C^k,α_δ(S^2T^*(M∖K),g)
and Proposition <ref> implies that
^ψ_i_VR,g(g)
=^φ_i_VR,(g) + ^χ_VR,g()
for i=1,2. Thus, it suffices to show
^ψ_1_VR,g(g)=^ψ_2_VR,g(g).
For this purpose, observe that we can choose closed and bounded subsets
K_1 and K_2 of N such that we get a diffeomorphism
ψ_2∘ (ψ_1)^-1:N∖K_1
→N∖K_2.
Note that (<ref>) implies
C^k,α_δ(S^2T^*(M∖K_2),(ψ_2)_*g)=C^k,α_δ(S^2T^*(M∖K_2),g)
By (<ref>) and diffeomorphism invariance, we thus get
(ψ_2)_*g - (ψ_2∘ψ_1^-1)_* g ∈
C^k,α_δ(S^2T^*(M∖K_2),(ψ_2)_*g)
=C^k,α_δ(S^2T^*(M∖K_2),g).
Together with (<ref>)
and the triangle inequality, we obtain
(ψ_2∘ψ_1^-1)_* g-g∈
C^k,α_δ(S^2T^*(M∖K_2),g).
We can also extend ψ_2∘ψ_1^-1 to a C^k+1,α-diffeomorphism
θ:M→M. We then have ψ_2=θ∘ψ_1 and
θ_* g-g∈
C^k,α_δ(S^2_+T^*(M∖K_2),g).
Proposition <ref> together with Lemma <ref> implies that
^ψ_2_VR,g(g)
=
^ψ_2_VR,θ* g(g)
- ^Id_VR,θ_*g( g)
=
^ψ_2_VR,θ* g(g)
+^Id_VR,g(θ_* g)
=
^θ∘ψ_1_VR,θ_* g(g)
=
^ψ_1_VR,g(g)
where we used Lemma <ref> for the last equality.
Note that the key arguments are indeed performed in the proof of Lemma <ref>. Interestingly, the proof does not require a choice of preferred coordinate system, which may be contrasted to the proofs of diffeomorphism invariance of the ADM mass, see <cit.>.
Given two PE metrics (M,g) and (M,) with isometric conformal boundaries, the Fefferman–Graham expansion ((<ref>), (<ref>)) implies that we find a diffeomorphism φ between neighborhoods of the conformal infinities such that φ_*g-=O(e^-(n-1)r). Therefore not only is ^φ_VR,(g) finite, but the renormalized volume and the ADM boundary term are each finite individually. Moreover, the ADM boundary term sees exactly the first undetermined term σ_n-1 in the Fefferman–Graham expansion. While, if (M,g) is complete, the functional g↦^φ_VR,(g)
is constant along a moduli space of Einstein manifolds by Corollary <ref>, both the renormalized volume and the ADM boundary term may vary.
We refer to _VR,(g) as a kind of mass, and may imagine the renormalized volume appearing simply as a tool to render the ADM mass finite. However, it could instead be viewed as using the ADM term to generalize the notion of renormalized volume, which in this context is an interesting quantity both from a mathematical and a physical perspective. For example, from the perspective of the AdS/CFT correspondence it is sometimes regarded as a measure of holographic complexity and conjectured to be positive under physically motivated assumptions <cit.>. In 3 dimensions, positivity of the renormalized volume of an AH manifold with standard hyperbolic space as the reference was established by Brendle and Chodosh <cit.>, assuming sufficient asymptotic decay to ensure it is finite (see also <cit.>). In fact, Brendle and Chodosh proved a stronger statement than positivity, namely they proved an analogue of the Riemannian Penrose inequality for the renormalized volume. Both from the perspective of mass and of renormalized volume, it seems natural to expect the volume-renormalized mass to satisfy a positive mass theorem of some sort, at least when the reference manifold is Einstein. Indeed we establish some special cases of this in the following section.
§ SPECIAL CASES OF THE POSITIVE MASS THEOREM
In this section we consider the volume-renormalized mass 𝔪_VR,(g) in two special cases. First we establish a mass formula for AH surfaces, then we prove a conformal mass theorem. The latter is one of the key tools to establish a positive mass theorem for rotationally-symmetric AH metrics on ^n and for general AH metrics on ^3.
We will from now on implicitly assume that the conformal boundaries of our AH manifolds are proper, so that Theorem <ref> holds.
We may thus from now on drop the dependence of the mass and the renormalized Einstein–Hilbert action on the diffeomorphism in the notation and simply write S_(g) instead of S_^φ(g) and _VR,(g) instead of _VR,^φ(g).
§.§ Two-dimensional positive mass theorem
In Corollary <ref>, we have seen that g↦ S_(g) is constant in dimension two. We will now determine the value of this constant to deduce a positive mass theorem for surfaces.
As a reference surface, we use the hyperbolic plane with an angle defect. That is, (M,g)=(ℝ^2,g_hyp,ω), where the metric g_hyp,ω is given by
g_hyp,ω
= dr^2+sinh^2(r)(ω/2π)^2dθ^2,
θ∈ [0,2π] mod 2π.
in polar coordinates.
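A direct check shows that g_hyp,ω is indeed hyperbolic away from the origin with a conical singularity of total angle ω there: for a metric of the form dr^2+f(r)^2dθ^2 the Gauss curvature is K=-f''(r)/f(r), so f(r)=(ω/2π)sinh(r) gives K≡ -1 (hence _g_hyp,ω=-2), while the circumference of the geodesic circle of radius r equals 2π f(r)=ωsinh(r)∼ω r as r→ 0, so the cone angle at the origin is ω and the angle defect is 2π-ω.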
Consider an APE surface (M^2,g) which is asymptotic to (M,g) in the sense of Definition <ref>. Then we have
S_g(g) = 4π(χ(M)-2) + 2(2π-ω),
where M is the one-point compactification of M. Under the assumption that _g+2∈ L^1 we conclude the identity
_VR,g(g)+2(2π-ω)
=
∫_M (_g+2)_g
+ 4π(2-χ(M)).
If in addition, _g≥ -2, we find that
_VR,g(g)+2(2π-ω) ≥ 0
and equality holds if and only if (M^2,g) is isometric to (M,g).
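Before turning to the proof, we note the simplest instance of this statement: if ω=2π, so that the reference is the standard hyperbolic plane, and M=ℝ^2, then the one-point compactification M is S^2, χ(M)=2, and the identity above reduces to
_VR,g(g)=∫_M (_g+2)_g,
which is manifestly nonnegative whenever _g≥ -2 and vanishes only if _g≡ -2.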
By linear interpolation we deform g to a metric g such that φ_* g = g on M∖B_R_0-ϵ for some radius R_0>0. From Corollary <ref> we have
S_g(g)
=
S_g(g)
=lim_R→∞(∫_B_R(_g+2)_g-2R_g(g,R))
=
lim_R→∞(∫_B_R_g_g+2∫_B_R_g)
=
∫_B_R_0_g _g
-∫_B_R_0_g _g.
Now replace the closed set M∖B_R_0 by its one-point compactification at infinity, that is, a closed disk D, and choose a metric g on D∪ (B_R_0∖B_R_0-ϵ) which agrees with g on B_R_0∖B_R_0-ϵ. Then g extends to a metric g_1 on D∪B_R_0≅ S^2, and to a metric g_1 on D∪_φB_R_0≅M. Note that g_1 has a conical singularity with the same angle defect as for g. The Gauss–Bonnet theorem for compact conical surfaces (see, for example, <cit.>) yields
∫_S^2_g_1_g_1
= 4πχ(S^2)+2(ω-2π).
Using the fact that g_1 and g_1 agree on D, we obtain
4π(χ(M)-2)+2(2π-ω)
=
4πχ(M)-(4πχ(S^2)+2(ω-2π))
=
∫_M_g_1_g_1
-
∫_S^2_g_1_g_1
=
∫_D_g_1_g_1
+∫_B_R_g_1_g_1
-∫_D_g_1_g_1
-∫_B_R_g_1_g_1
=
∫_B_R_g_g
-∫_B_R_g_g ,
and combining this with (<ref>) yields (<ref>). The identity (<ref>) is immediate. The inequality (<ref>) follows easily from the assumption _g≥ -2 and the well-known identity χ(M)≤χ(S^2)=2. In case of equality in (<ref>), _g≡ -2, so g is a metric of constant curvature -1, and M is diffeomorphic to S^2, so M is diffeomorphic to ^2. Since g is asymptotic to ĝ and both are of constant curvature, they must agree up to isometry in a neighborhood of infinity. Since both metrics are of constant curvature and they are defined on diffeomorphic manifolds, they must be isometric.
Recall that an asymptotically Euclidean manifold with metric asymptotic to the Euclidean one at a rate of r^-τ requires that first derivatives of the metric fall off at a rate of r^-τ-1. In two dimensions, this means the ADM mass is always zero and the scalar curvature is always integrable. However, if one considers an asymptotically conical surface (M,g) instead, there is an analogue of the ADM mass sometimes called the cone mass, which is essentially the deficit angle of the cone. Specifically, if g is asymptotic to
g_c =
dr^2+r^2(ω/2π)^2dθ^2, θ∈ [0,2π] mod 2π,
then the mass is defined by 𝔪_c(g)=2(2π-ω). The analogue of the positive mass theorem is that the equality
2(2π-ω) = ∫_M_g _g+4π(2-χ(M))
holds, where M is the one-point compactification of M. The reader is directed to <cit.>, <cit.>, <cit.> for more complete details. This can be directly compared to (<ref>), recalling that the ADM mass boundary integral is zero in the asymptotically flat case.
Interestingly, Chruściel and Herzlich <cit.> discussed the possibility of a positive mass theorem for the two-dimensional AH setting with the same reference metrics (M,)
in the context of the standard mass of an AH manifold, but they did not investigate this issue in further detail. To the best of our knowledge, an asymptotically hyperbolic two-dimensional positive mass theorem analogous to the asymptotically Euclidean one has not otherwise been considered in the literature. However, Theorem <ref> appears to be an appropriate analogue in the asymptotically hyperbolic setting.
§.§ A conformal positive mass theorem
By the work of several authors <cit.>, the Yamabe problem is well-understood in the setting of AH manifolds, and these papers discuss the problem in different boundary regularity settings. The Hölder case to which we will refer to has been discussed by Allen, Isenberg, Lee, and Stavrov in <cit.>.
In the following, we will work on a fixed manifold M^n.
Fix an APE reference metric with constant scalar curvature _=-n(n-1).
Consider the set of metrics
𝒞
={g∈ℛ^k,α_δ(M,)|_g=-n(n-1)},
where δ>n-1/2.
For each g∈ℛ^k,α_δ(M,), there exists a unique function w∈ C^k,α_δ such that g=e^2wg∈𝒞. Moreover, the set 𝒞 is an analytic manifold and
the map
Φ: C^k,α_δ(M)×𝒞 →ℛ^k,α_δ(M,) ,
(w,g) ↦ e^2wg,
is a diffeomorphism of Banach manifolds.
The first assertion follows from the resolution of the Yamabe problem in the AH setting as formulated in <cit.>, up to the precise decay of the conformal factor which we need for the formulation of this proposition.
According to <cit.>, there is a function w ∈ C^k,α_1 so that the metric g=e^2wg has constant scalar curvature -n(n-1).
This function satisfies the equation
-e^2w(n-1)n=e^2w_g=_g+2(n-1)Δ_gw-(n-1)(n-2)|dw|^2_g.
In the following we are going to use that equation to show that w∈ C^k,α_δ. We rewrite the above equation as
2(n-1)(Δ w +nw) =(n-1)(n-2)|dw|^2_g-(_g+n(n-1))
-n(n-1)F(w),
where F(x)=e^2x-1-2x.
By assumption, g-ĝ∈ C^k,α_δ, so that we have _g+n(n-1)∈ C^k-2,α_δ. From w ∈ C^k,α_1 and because F(x)=O(x^2) as x→ 0, we get
(n-1)(n-2)|dw|^2_g-(_g+n(n-1))
-n(n-1)F(w)∈ C^k,α_min{2,δ}.
Now by <cit.>, we know that the operator
Δ +n:C^k,α_η→ C^k-2,α_η
is an isomorphism for all η∈ (-1,n). Thus we obtain w∈ C^k,α_min{2,δ}.
This implies
(n-1)(n-2)|dw|^2_g-(_g+n(n-1))-n(n-1)F(w)∈ C^k-2,α_min{4,δ},
and therefore, w ∈ C^k,α_min{4,δ}. Repeating this procedure a finite number of times yields C^k,α_δ, as desired.
To show that 𝒞 is a manifold, consider the analytic map
Ψ:ℛ^k,α_δ(M,) → C^k-2,α_δ(S^2T^*M,),
Ψ:g ↦_g-n(n-1).
The differential of this map is given by
D_gΨ(h)=D_g(h)=Δ_g( h)+div_g(div_g h)-⟨_g,h⟩_g.
In particular, if g∈𝒞 and f∈ C^k,α_δ,
D_gΨ(f· g)=(n-1)(Δ_g f+nf).
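Indeed, for h=f· g the trace of h equals nf, div_g h=df, and the pairing of the Ricci tensor with f· g equals f times the scalar curvature, that is, -n(n-1)f for g∈𝒞. With the sign convention implicit in the isomorphism statements below (so that div_g(df)=-Δ_g f), the formula for D_gΨ above therefore gives
D_gΨ(f· g)=nΔ_g f-Δ_g f+n(n-1)f=(n-1)(Δ_g f+nf).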
Now, the operator
Δ_g+n: C^k,α_δ(M)→ C^k-2,α_δ(M)
is an isomorphism due to <cit.>.
Thus, D_gΨ is surjective and hence, 𝒞 is an analytic manifold with D_gΨ=T_g𝒞 by the implicit function theorem for Banach manifolds.
It remains to show that the above map Φ is a diffeomorphism. We know already that Φ is smooth and bijective. We are done, if we can show that Φ is a local diffeomorphism in a neighborhood of each tuple
(w,g)∈ C^k,α_δ(M)×𝒞.
At first we have
C^k,α_δ(S^2T^*M)=C^k,α_δ(M)⊕ T_g𝒞.
for each g∈𝒞: For h∈ C^k,α_δ(S^2T^*M), there exists a unique function w∈ C^k,α_δ(M,) solving the equation
D_gΨ(h)=D_gΨ(w· g)=(n-1)(Δ_g w+nw)
and we obtain (<ref>) by writing h=w· g+(h-w· g). The differential
D_(0,g)Φ: C^k,α_δ(M)⊕ T_g𝒞→ C^k,α_δ(S^2T^*M)
directly corresponds to the splitting (<ref>) and is therefore an isomorphism. Thus, Φ is a local diffeomorphism around each tuple (0,g), g∈𝒞. For a general tuple (w_0,g)∈ C^k,α_δ(M)×𝒞, we write Φ as the composition of maps
(w,g)↦ (w-w_0,g)↦Φ(w-w_0,g)=e^2we^-2w_0g↦ e^2wg=Φ(w,g).
We see that Φ is a local diffeomorphism around (w_0,g) because Φ is a local diffeomorphism around (0,g) and the maps w↦ w-w_0 and g↦ e^2w_0g are obviously diffeomorphisms. This finishes the proof.
The functional _VR,g:g↦_VR,g(g) is an analytic functional on the analytic manifold 𝒞. Furthermore, g∈𝒞 is a critical point of _VR,g if and only if _g=-(n-1)g.
We know that g↦ S_ĝ(g) is an analytic functional on ℛ^k,α_δ(M,) which by its definition agrees with -m_VR,g:g↦ -m_VR,g(g) on 𝒞 and the first assertion follows.
For the second assertion, recall that
_L^2S_(g)=_ g-1/2_g· g-1/2(n-1)(n-2)g.
In particular, if _g=-n(n-1), _L^2S_(g) is trace-free.
Thus, any g∈𝒞 is a critical point of S with respect to conformal variations. By (<ref>), we therefore have _ g=-(n-1)g for g∈𝒞 if and only if it is a critical point of S_|_𝒞=-_VR,g|_𝒞. This proves the assertion.
We next turn to the case where g is conformal to , with _=-n(n-1). In this case both metrics are on the same manifold M and the diffeomorphism φ is simply taken to be the identity.
Let (M^n,) be a complete APE manifold with _=-n(n-1). Let w∈ C^k,α_δ(M) for some δ>n-1/2 and k+α≥δ, and
consider the conformal metric g=e^2w on M.
Then if _g≥ -n(n-1) and _g+n(n-1)∈ L^1, we have 𝔪_VR,(g) ≥ 0.
If furthermore 𝔪_VR,(g) = 0, then g =.
We may assume that the diffeomorphism φ for defining the mass satisfies φ = Id_M so that
B_R = B_R and ∂B_R=∂ B_R.
It will be convenient to substitute ϕ=e^1/2(n-2)w to work with the conformal metric in the form g=ϕ^4/(n-2). Note that ϕ-1∈ C^k,α_δ.
In terms of the conformal factor we have
𝔪_ ADM,g(g,R)
= ∫_∂ B_R(div_(g)-d_(g))(ν)_g
= ∫_∂ B_R (1-n) d(ϕ^4/(n-2))(ν)_g
= -4(n-1)/n-2∫_∂ B_Rϕ^(6-n)/(n-2) dϕ(ν)_g
and
RV_g(g,R)
= ∫_B_R_g - ∫_B_R_g
=
∫_B_R( ϕ^2n/(n-2) - 1 ) _g
so
𝔪_VR,(g)
=
lim_R →∞( m_ ADM,g(g,R)+2(n-1) RV_g(g,R) )
=
lim_R →∞(
-4(n-1)/n-2∫_∂ B_R dϕ(ν)_g
-4(n-1)/n-2∫_∂ B_R(ϕ^(6-n)/(n-2) - 1) dϕ(ν)_g
+ 2(n-1) ∫_B_R( ϕ^2n/(n-2) - 1 ) _g) .
With the decay of ϕ, the first and last terms in this expression may be infinite. However, we will see below that they combine to yield a finite quantity with the correct sign. Furthermore, the integrand of the middle term satisfies (ϕ^(6-n)/(n-2) - 1) dϕ(ν)∈ O(e^-2δ r) and because 2δ>n-1, the middle term vanishes in the limit.
To handle the first and last terms in the mass expression we will need to establish that ϕ≥ 1.
In order to do this, recall that the scalar curvatures of g and are related by
e^2w_g=_+2(n-1)Δ_w-(n-2)(n-1)|d w|_^2,
from which we have
2Δ_w
=1/(n-1)( e^2w_g-_+(n-1)(n-2)|d w|_^2 ).
Using the fact that _g ≥_=-n(n-1) we find that
2Δ_w ≥ -n(e^2w-1) + (n-2)|d w|_^2.
For the sake of contradiction assume that w<0 somewhere on M. Since we know that w→0 at infinity, there must be a minimum at some x_0 where w<0 and therefore e^2w-1<0 at x_0. In particular, this implies Δ_w>0 at x_0, which by the maximum principle contradicts the fact that x_0 was a minimum.
We therefore conclude that w≥0 and ϕ≥ 1.
Applying the divergence theorem to (<ref>) and using the fact that the middle integral vanishes in the limit, we see that
𝔪_VR,(g) =lim_R →∞
2(n-1) ∫_B_R( ϕ^2n/(n-2)-1+2/n-2Δ_ϕ) _g
=2(n-1) ∫_M( ϕ^2n/(n-2)-1+2/n-2Δ_ϕ) _g
In terms of ϕ, the scalar curvatures of g and are related by
4(n-1)/n-2Δ_ϕ = _gϕ^(n+2)/(n-2)-_ϕ
= n(n-1) ( ϕ -ϕ^(n+2)/(n-2))
+
(_g+(n-1)n)ϕ^(n+2)/(n-2)
from which we find that
𝔪_VR,(g) =2(n-1)∫_M( ϕ^2n/(n-2)-1+2/n-2Δ_ϕ) _g
=∫_M[2(n-1)F(ϕ)+(_g+n(n-1))ϕ^(n+2)/(n-2)]_,
where the smooth function F:(0,∞)→ is defined by
F(x) = x^2n/(n-2)-1+n/2x-n/2x^(n+2)/(n-2).
Recall that ϕ≥ 1, so we are done with the proof if we can show F(x)≥ 0 for x≥ 1 with F(x)=0 only if x=1.
In fact, a direct computation shows that F(1)=F'(1)=0 and
F”(x)
= 2n(n+2)/(n-2)^2x^(6-n)/(n-2)( x-1 ),
hence F”(1)=0 and F”(x)>0 for x>1
and the desired statement follows.
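For concreteness, in the case n=3 one has F(x)=x^6-1+3/2x-3/2x^5, so that F(1)=0, F'(x)=6x^5+3/2-15/2x^4 with F'(1)=6+3/2-15/2=0, and F''(x)=30x^4-30x^3=30x^3(x-1), in agreement with the general formulas above.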
From the identity
𝔪_VR,(g) =∫_M[2(n-1)F(ϕ)+(_g+n(n-1))ϕ^(n+2)/(n-2)]_,
we see the following directly: Because F(x)=O((x-1)^2) as x→ 1, we obtain F(ϕ)∈ C^k,α_2δ⊂ L^1 for ϕ-1∈ C^k,α_δ, since δ>n-1/2. Thus the mass is finite if and only if (_g+n(n-1))ϕ^(n+2)/(n-2)∈ L^1.
Furthermore, since ϕ is bounded, (_g+n(n-1))ϕ^(n+2)/(n-2)∈ L^1 if and only if _g+n(n-1)∈ L^1.
From Theorem <ref>, we also get a positive mass theorem for rotationally symmetric AH metrics on ^n. More precisely, consider a metric of the form
g=dr^2+ψ^2(r)sinh^2(r)g_sph,
where g_sph is the standard round metric on the sphere,
with ψ-1∈ C^k,α_δ([0,∞)) satisfying the matching conditions ψ(0)=1, ψ'(0)=0 at the origin. By solving the differential equation
ψ(r(s))sinh(r(s))=r'(s)sinh(s)
for a function r:[0,∞)→ [0,∞) in the variable s, we can rewrite the above metric in the new radial variable s as
φ^*g=(r'(s))^2[ds^2+sinh^2(s)g_sph],
where φ:(s,x)↦ (r(s),x), so that g is up to diffeormorphism conformal to the hyperbolic metric g_hyp=ds^2+sinh(s)^2g_sph. Then, if _g+(n-1)n is nonnegative, we get
𝔪_ VR,g_hyp(g)= 𝔪_ VR,g_hyp(φ^* g)≥0
by Theorem <ref> and Theorem <ref>, with equality if and only if g is isometric to g_hyp.
An analogue of Theorem <ref> is well known for the ADM mass of asymptotically Euclidean manifolds (see, for example, <cit.>). Namely, if the reference metric is scalar-flat and g=ϕ^4/(n-2), then
4(n-1)/n-2Δ_ϕ= _gϕ^(n+2)/(n-2)
and along the lines of the proof of Theorem <ref>, one has
𝔪_ ADM,g(g)=
lim_R →∞𝔪_ ADM,g(g,R) = -4(n-1)/n-2lim_R →∞∫_∂ B_R dϕ(ν)_g
=4(n-1)/n-2∫_MΔ_ϕ_g
=∫_M_g·ϕ^(n+2)/(n-2)_g,
provided that ϕ-1 decays sufficiently fast and _g∈ L^1. If _g≥0, we immediately get 𝔪_ ADM,g(g)≥ 0 and equality implies that Δ_ϕ=0, so that ϕ≡1 by the maximum principle and hence g=.
§.§ Positive mass theorem for AH metrics on ℝ^3
Now we turn our interest to the specific case of asymptotically hyperbolic metrics on ℝ^3. In this case, we are able to prove a positive mass theorem for the volume-renormalized mass by combining a density argument with the work of Brendle and Chodosh <cit.>, where they prove positivity of the renormalized volume under sufficiently fast decay assumptions to ensure that it is finite, in which case it agrees with the volume-renormalized mass.
Let (^3,g) be an APE manifold with g∈ℛ^k,α_δ(^3,g_hyp) for some δ>1, satisfying _g≥ -6 and _g+6∈ L^1. Then, _ VR,g_hyp(g)≥ 0 and equality holds if and only if g is isometric to g_hyp.
Let 𝒞 be as in (<ref>), with M=^3 and =g_hyp.
By Proposition <ref>, there is a function w∈ C^k,α_δ(^3,g_hyp) such that g=e^2wg∈𝒞.
We can now apply Theorem <ref> which implies _ VR,g(g)≥ 0 or equivalently _ VR,g_hyp(g)≥ m_ VR,g_hyp(g), with equality if and only if g=g.
It thus remains to show that _ VR,g_hyp(g)≥ 0, with equality if and only if g is isometric to g_hyp.
We know that g-g_hyp∈ C^k,α_δ(S^2T^*^3,g_hyp).
Because C^∞_c is dense in C^k,α_δ, we can find a sequence {g_i}_i∈ of AH metrics on ^3 such that g_i-g_hyp is compactly supported for each i and g_i→g with respect to the C^k,α_δ-norm as i→∞.
By Proposition <ref>, we get a sequence
of metrics g_i∈𝒞 conformal to g_i, where the sequence of conformal factors w_i∈ C^k,α_δ(^3), defined by g_i=e^2w_ig_i, converges to 0 in C^k,α_δ. We have the equation
4(Δ_g_iw_i +3w_i)=2|dw_i|^2_g_i-(_g_i+6)-6F(w_i),
where F(x)=e^2x-1-2x.
Recall that _g_i+6 is compactly supported. We now repeat the above argument to improve the decay rate of the functions w_i. At first, 2|dw_i|^2_g_i-(_g_i+6)-6F(w_i)∈ C^k-2,α_2δ, so that by the isomorphism properties of Δ+3 on the weighted spaces, w_i∈ C^k,α_β for every β<min{2δ,3}. Repeating this step finitely many times, we see that 2|dw_i|^2_g_i-(_g_i+6)-6F(w_i)∈ C^k-2,α_β and w_i∈ C^k,α_β for every β< 3. Therefore, w_i∈ O(e^-β r) and g_i-g_hyp∈ C^k,α_β for every β< 3. For this reason, the ADM part of the volume-renormalized mass vanishes and we have
_ VR,g_hyp(g_i)
=2(n-1)RV_g_hyp(g_i).
Furthermore, we know by the work of Brendle and Chodosh <cit.> (see also <cit.>) that RV_g_hyp(g_i)≥0. Since g_i is a sequence on 𝒞 converging to g in C^k,α_δ, we get from Corollary <ref> that
2(n-1)RV_g_hyp(g_i)=_ VR,g_hyp(g_i)→_ VR,g_hyp(g).
Consequently, _ VR,g_hyp(g)≥ 0. To finish the proof, it remains to consider the equality case: Suppose that _ VR,g_hyp(g)= 0. Then, g is a critical point of
𝒞∋ g↦ S_g_hyp(g)= - _ VR,g_hyp(g).
It follows, by Corollary <ref>, g is an Einstein metric on ^3 with _g=-2g. Because n=3, g actually has constant curvature and must therefore be isometric to g_hyp.
§ THE EXPANDER ENTROPY FOR AH MANIFOLDS
In this section we will define and study the expander entropy for AH manifolds, which turns out to be a monotone quantity for the Ricci flow in this setting. Throughout this section, we let (M,g) and (M,) be APE manifolds with isometric conformal boundaries. Assume that (M,g) is complete and that (M,) has integrable normalized scalar curvature.
Also in this section, when quantities such as norms are used without reference to a particular metric, it should be understood that the unaccented metric g is used.
§.§ Definition of the entropy
For
f∈ C^∞_c(M), let
𝒲_AH,(g,f)
=lim_R →∞(∫_B_R(
( |∇ f|^2+_g-2(n-1)f )e^-f) _g
+ (n-2)(n-1)∫_ B_Re^-f_g
+
2(n-1)∫_B̂_R_ĝ- m_ ADM,g(g,R))
=lim_R →∞(∫_B_R(
( |∇ f|^2+_g+n(n-1) )e^-f
-2(n-1) ( (f+1)e^-f - 1 )
) _g
- m_ ADM,g(g,R)
- 2(n-1)RV_g(g,R)).
𝒲_AH,(g,f) is well-defined for any f∈ C^∞_c(M).
Substitute e^-f=ω^2 and define
𝒲_AH,(g,ω)=𝒲_AH,(g,f).
Then we get
𝒲_AH,(g,ω)
=
lim_R →∞( ∫_B_R( 4|∇ω|^2 +(_g+n(n-1))ω^2
+2(n-1)[(log(ω^2)-1)ω^2+1]
) _g - m_ ADM,g(g,R) - 2(n-1)RV_g(g,R) ).
Substitute further u= ω - 1 and let
𝒲_AH,(g,u)=𝒲_AH,(g,ω).
In addition, set G(x)=2(n-1) ( (log((x+1)^2)-1)(x+1)^2+1 ). Note that G is nonnegative and G(0)=0. Then,
𝒲_AH,(g,u) =lim_R →∞(∫_ B_R( 4|∇ u|^2+(_g +n(n-1))(u+1)^2 + G(u) )_g
-( m_ ADM,g(g,R)+2(n-1)RV_g(g,R) ))
=
∫_M ( 4|∇ u|^2+(_g +n(n-1))u^2
+2(_g +n(n-1))u+G(u) ) _g
+ S_(g),
where S_(g) is well-defined and finite by Theorem <ref>. Since f∈ C^∞_c(M) we have ω-1∈ C^∞_c(M) so u∈ C^∞_c(M) and G(u)∈ C^∞_c(M), and the integral is finite, completing the proof.
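For the reader's convenience we record the elementary identities behind the substitutions used in this proof: with e^-f=ω^2 one has
|∇ f|^2e^-f=4|∇ω|^2,  -2(n-1)( (f+1)e^-f-1 )=2(n-1)( (log(ω^2)-1)ω^2+1 ),
and after setting u=ω-1 the last expression is precisely G(u).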
The expander entropy of (M,g) relative to (M,) is defined as
μ_AH,(g)
=inf_ f∈ C_c^∞(M)𝒲_AH,(g,f)
=inf_ u∈ C_c^∞(M)𝒲_AH,(g,u).
In the compact setting (see, for example, <cit.>), the expander entropy is given by
μ(g)=sup_σ>0inf_f∈ C^∞{𝒲(g,f,σ)|∫_Me^-f_g=1},
where σ>0 serves as an additional parameter capturing rescalings of the metrics. For the specific choice σ=2/n-1, the Lagrangian is (up to scaling and adding a constant) given by
𝒲(g,f) =∫_M(
(|∇ f|^2+_g-2(n-1)f)e^-f) _g
+ (n-2)(n-1)∫_Me^-f_g,
which are the first two lines on the right hand side in the definition of 𝒲_AH,(g,f). The remaining terms there turn out to be the right normalization for the AH setting.
The proof of Lemma <ref> implies that
μ_AH,(g) =S_(g)+inf_u∈ C^∞_c(M)∫_M ( 4|∇ u|^2+(_g +n(n-1))u^2
+2(_g +n(n-1))u+G(u) ) _g ,
where G(x)=2(n-1) ( (log((x+1)^2)-1)(x+1)^2+1 ).
We obtain an additivity of the functional μ_AH,: Suppose that (M,g̃) is another APE manifold with integrable normalized scalar curvature whose conformal boundary is isometric to the one of (M,g). Then the additivity of the renormalized Einstein–Hilbert action in Corollary <ref> immediately gives
μ_AH,g̃(g)=μ_AH,(g)+m_VR,(g̃).
Thus, variational properties of the functional g↦μ_AH,(g) are independent of the choice of reference metric.
§.§ Finding a minimizer
We are going to show that the infimum in the definition μ_AH,(g) is always achieved by a unique C^k,α-function f_g and that f_g and hence also μ_AH,(g) depend analytically on the metric g.
For sufficiently small ϵ>0 there are positive constants C_ϵ,C, depending on g and , such that we have the estimate
Cu_H^1^2 - C
≤𝒲_AH,(g,u)- S_(g)
≤
C_ϵ (1+u_H^1^ϵ) u_H^1^2 + C
for all u∈ C^∞_c(M). In particular,
μ_AH,(g)
= inf_ u∈ H^1(M)𝒲_AH,(g,u)
and μ_AH,(g)>-∞.
Recall that G(x)=2(n-1) ( (log((x+1)^2)-1)(x+1)^2+1 ).
This function satisfies G(x) ≥ 0 for all x∈ and G(x)=0 if and only if x∈{0,-2}. Taylor expansion at x=0 shows that G(x)=4(n-1)x^2+O(|x|^3) as x→ 0, whereas we clearly see that G(x)=O(log(|x|)x^2) as |x|→∞. Summarizing these estimates, we get for any given ϵ>0 a constant C_ϵ>0 such that
G(x)≤ 4(n-1)x^2+C_ϵ|x|^2+ϵ.
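These properties of G follow from its explicit derivatives: for all x≠ -1,
G'(x)=4(n-1)(x+1)log((x+1)^2),  G''(x)=4(n-1)(log((x+1)^2)+2),
so that G''(0)=8(n-1), which gives the quadratic behaviour 4(n-1)x^2 near x=0, while the logarithmic growth of G'' is responsible for the additional C_ϵ|x|^2+ϵ term in the displayed bound.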
Therefore, we have
𝒲_AH,(g,u) - S_(g)
=
∫_M ( 4|∇ u|^2+(_g +n(n-1))u^2 +2(_g +n(n-1))u+G(u) ) _g
≤∫_M
( C(|∇ u|^2+u^2)+(_g +n(n-1))^2 + C_ϵ u^2+ϵ) _g
≤
C_ϵ (1+u_H^1^ϵ) u_H^1^2
+_g +n(n-1)_L^2^2
making use of the embedding H^1⊂ L^2+ϵ for sufficiently small ϵ>0 and
the fact that
_g+n(n-1)∈ C^k-2,α_δ⊂ L^2 for δ>n-1/2, which gives us the upper bound.
Now let us find the lower bound. From <cit.> we know that there is a gap around zero in the essential spectrum of the scalar Laplacian Δ. Since there is also no zero eigenvalue, we get a lower bound Δ≥Λ > 0. Therefore, by the Peter–Paul inequality,
𝒲_AH,(g,u) - S_(g)
=
∫_M ( 4|∇ u|^2+(_g +n(n-1))u^2 + 2(_g +n(n-1))u+G(u) ) _g
≥∫_M ( 2|∇ u|^2 + 2 Λ u^2+(_g +n(n-1))u^2 + G(u) ) _g
-∫_M ( ϵ u^2+1/ϵ(_g +n(n-1))^2 ) _g
=∫_M ( 2|∇ u|^2 + (2Λ + _g +n(n-1) - ϵ) u^2 + G(u)) _g
-1/ϵ_g +n(n-1)_L^2^2
for any ϵ>0. Recall that _g +n(n-1) converges to 0 at infinity. Therefore the set
K={x∈ M | 2 Λ + _g +n(n-1) - ϵ≤ 0}
is compact, provided that we chose ϵ>0 such that ϵ < 2Λ. Since G is a nonnegative function and since the scalar curvature is bounded from below, there exists a constant C>0 such that
𝒲_AH,(g,u) - S_(g)
≥ 2
∫_M|∇ u|^2_g+∫_K (G(u)-C u^2)_g
-1/ϵ_g +n(n-1)_L^2^2
Since the function G(x) grows faster than quadratic,
we have G(x) - C x^2 ≥ -C_1 for some constant C_1 > 0. Thus,
𝒲_AH,(g,u) - S_(g)
≥ 2
∫_M|∇ u|^2_g - C_1 (K,g)
-1/ϵ_g +n(n-1)_L^2^2
≥∫_M
( |∇ u|^2 + Λ u^2 ) _g
- C_1 (K,g)
-1/ϵ_g +n(n-1)_L^2^2
≥
C_2 u_H^1^2
- C_1 (K,g)
-1/ϵ_g +n(n-1)_L^2^2.
This proves the desired bound.
The Euler–Lagrange equation of the functional 𝒲_AH,(g,f) is
2Δ f+|∇ f|^2--n(n-1)+2(n-1)f=0
For v∈ C^∞_c(M) we get
d/dt𝒲_AH,(g,f+tv)|_t=0
=
d/dt∫_M
( (|∇ f|^2+_g +n(n-1)) e^-f -2(n-1)( (f+1)e^-f-1) )
|_t=0
=
∫_M
(
2⟨∇ f,∇ v⟩ -v ( |∇ f|^2 + + n(n-1) )
)
e^-f
-2(n-1) ∫_M (ve^-f-(f+1)ve^-f)
=
∫_M ( 2Δ f+|∇ f|^2- -n(n-1)+2(n-1)f ) ve^-f,
where we used integration by parts in the last equality.
Next, we are going to discuss existence and uniqueness of solutions of the equation (<ref>) with suitable conditions at infinity.
We begin by proving uniqueness of solutions.
There exists at most one solution f of (<ref>) such that f∈ C^k,α(M) and f→ 0 at infinity.
Moreover, for every bounded domain Ω with smooth boundary, there is at most one solution of (<ref>) such that
f∈ C^k,α(Ω)∩ C^0(Ω)
and f|_∂Ω≡ 0.
Suppose f_1,f_2 are solutions of (<ref>). The difference f_0=f_1-f_2 satisfies
2Δ f_0+⟨∇(f_1+f_2), ∇ f_0 ⟩ + 2(n-1)f_0=0.
The function f_0 converges to 0 at infinity, or is zero on the boundary of Ω, so it either vanishes identically or it attains a positive maximum or a negative minimum in the interior.
If p is an interior point where f_0 attains a positive maximum, we have f_0(p)>0,
∇ f_0(p)=0 and Δ f_0(p)≥0 which contradicts (<ref>). The argument for the minimum is analogous. Therefore, f_0=0 in both situations, which proves uniqueness.
For a bounded domain Ω⊂ M, we define
𝒲_Ω(g,u) =∫_Ω( 4|∇ u|^2+(_g +n(n-1))u^2) _g
+2∫_Ω( (_g +n(n-1))u+G(u) ) _g.
with associated localized entropy
μ_Ω(g)=inf_u∈ C^∞_c(Ω)𝒲_Ω(g,u).
Note that the definition of 𝒲_Ω(g,u) is set up in such a way that
𝒲_Ω(g,u)=
𝒲_AH,(g,u)- S_(g)
for all u∈ C^∞_c(Ω)⊂ C^∞_c(M).
Let Ω⊂ M be a bounded domain with smooth boundary. Then there exists a function u that realizes the infimum of 𝒲_Ω(g,·). The function f such that
e^-f=(u+1)^2
is then the unique solution of (<ref>) such that f|_∂Ω≡ 0.
From (<ref>) and Lemma <ref>, we get
C(u_H^1^2-1)
≤𝒲_Ω(g,u) ≤
C [(1+u_H^1^ϵ) u_H^1^2+1].
Thus,
μ_Ω(g)=inf_u∈ H^1_0(Ω)𝒲_Ω(g,u)>-∞.
Let u_i be a minimizing sequence for μ_Ω(g). The sequence is obviously bounded in H^1, hence there exists a subsequence, again denoted by u_i which converges weakly in H^1 and strongly in L^p for some fixed p<2n/n-2 to a function u∈ H^1_0(Ω). The functional u↦𝒲_Ω(g,u) is lower semicontinuous in H^1_0(Ω). Therefore,
μ_Ω(g)≤𝒲_Ω(g,u) ≤lim inf_i→∞𝒲_Ω(g,u_i)≤μ_Ω(g).
Consequently, u∈ H^1_0(Ω) is the desired minimizer. By variational calculus, the function u is a (weak) solution of (<ref>). Standard arguments involving elliptic regularity and Sobolev embedding yield u∈ C^k,α and uniqueness holds due to Lemma <ref>.
Let Ω⊂ M be a bounded domain with smooth boundary and f∈ C^k,α(Ω)∩ C^0(Ω) be a solution of (<ref>) with f|_∂Ω≡ 0.
Then
1/2(n-1)inf_M(_g +n(n-1))
≤ f ≤1/2(n-1)sup_M(_g +n(n-1)).
Let x∈Ω be the point where f attains its maximum. It follows that ∇ f(x)=0 and Δ f(x)≥0. From (<ref>) we get
0 =
2Δ f(x) + |∇ f|^2-(x)-n(n-1)+2(n-1)f(x)
≥
2(n-1)f(x)-(x)-n(n-1)
which implies the upper bound. The argument for the lower bound is analogous.
We can now prove existence on the whole manifold.
There are unique bounded functions u_g,f_g∈ C^k,α(M) with e^-f_g=(u_g+1)^2, such that
μ_AH,(g)=𝒲_AH,(g,f_g)=
𝒲_AH,(g,u_g).
In other words, there exists unique functions realizing the infimum in the definition of the entropy.
Let Ω_i be a sequence of bounded domains with smooth boundaries such that Ω_i⊂Ω_i+1 and ∪_i=1^∞Ω_i=M. Let the sequence f_i be the solutions of (<ref>) such that f_i∈ C^k,α(Ω_i)∩ C^0(Ω_i) and f_i|_∂Ω_i≡ 0. By Lemma <ref>, we have uniform bounds
1/2(n-1)inf_M(_g +n(n-1))
≤ f_i ≤1/2(n-1)sup_M(_g +n(n-1))
Thus, there exists a subsequence, which we also denote by f_i, which converges locally uniformly in C^k,α to a function f_g defined on the whole manifold, which again solves (<ref>). Note that f necessarily satisfies the same bounds.
It remains to show that f_g is the minimizer of 𝒲_AH,(g,·) or, equivalently, that the associated function u_g is the minimizer of 𝒲_AH,(g,·). By domain monotonicity we have
μ_Ω_i(g)≥μ_Ω_i+1(g)
and since ∪_i=1^∞Ω_i=M we get from (<ref>) that
lim_i→∞μ_Ω_i(g) + S_(g) = μ_AH,(g).
In particular, we have an upper bound on μ_Ω_i(g).
Let u_i be the minimizer of 𝒲_Ω_i(g,·).
Then,
μ_AH,(g) - S_(g)
=lim_i→∞μ_Ω_i(g)
=lim_i→∞𝒲_Ω(g,f_i)
=lim_i→∞𝒲_Ω(g,u_i)
=
lim_i→∞∫_M ( 4|∇ u_i|^2+(_g +n(n-1))u_i^2
+2(_g +n(n-1))u_i+G(u_i)) _g.
As in the proof of Lemma <ref>, we can estimate
∫_M
| 4|∇ u_i|^2+(_g +n(n-1))u_i^2 + 2(_g +n(n-1))u_i+G(u_i) |
_g
≤∫_M ( 4|∇ u_i|^2+|_g +n(n-1)|u_i^2
+2|_g +n(n-1)| · |u_i|+|G(u_i)| )_g
≤ C ( 1+(u_i_H^1^2)^ϵ/2)
u_i_H^1^2+C
≤ C (1 + C(𝒲_Ω_i(g,u_i)+1)^ϵ/2)
(𝒲_Ω_i(g,u_i)+1)
≤
C( 1+C(μ_Ω_i(g)+1)^ϵ/2) (μ_Ω_i(g)+1)
≤ C.
Note that from the third to fourth line, we used the lower bound in Lemma <ref>. Note also that the involved constants are independent of Ω_i. Thus, since u_i→ u_g locally uniformly in all derivatives, we can apply the dominated convergence theorem and conclude
μ_AH,(g)
=
lim_i→∞∫_M
(4|∇ u_i|^2+(_g +n(n-1))u_i^2
+2(_g +n(n-1))u_i+G(u_i) )
_g+ S_(g)
=
∫_M
(4|∇ u_g|^2+(_g +n(n-1))u_g^2
+2(_g +n(n-1))u_g+G(u_g) ) _g
+ S_(g)
=𝒲_AH,(g,u_g)=𝒲_AH,(g,f_g),
which is the desired result.
The next result we are going to prove concerns the asymptotics of the minimizing function f_g. For this purpose, we need a preparatory lemma.
For each constant C>0 there exist a positive subsolution f_+ and a negative supersolution f_- of the equation
2Δ f+|∇ f|^2--n(n-1)+2(n-1)f=0,
both defined on the complement M∖ B_R of a large ball B_R, such that ± f_±|_∂ B_R> C and f_±=O(e^-2μ r) for some μ>0.
Note that due to the sign convention for the Laplacian, f is a subsolution (resp. supersolution) of (<ref>) if
2Δ f+|∇ f|^2--n(n-1)+2(n-1)f≥ 0 (resp. ≤ 0).
Given a metric σ_0 in the conformal boundary of (M,g), we can choose (according to <cit.>) a boundary defining function ρ such that on a neighborhood U⊂ M near ∂ N, g is of the form
g=ρ^-2(dρ^2+σ_ρ),
where ρ↦σ_ρ is a C^k,α family of Riemannian metrics on ∂ N.
By writing ρ=e^-r, we obtain
g=dr^2+e^2rσ_r.
Note that ∂_rσ_r=-e^-r∂_ρσ_ρ, so that |∂_rσ_r|_σ=O(e^-r) as r→∞. The Laplacian of g expands as
Δ_gf=-∂^2_rrf-(n-1)∂_rf-1/2_σ∂_rσ·∂_r f+e^-2rΔ_σf.
Suppose without loss of generality that U={x∈ M| r(x)>R_0} for some radius R_0 and let f:U→ be a function only depending on r. Define two functions on M∖ B_R_0 by a=_σ(∂_r σ) and b=_g+n(n-1).
Then, f is a subsolution (resp. supersolution) of (<ref>) if and only if
-2∂^2_rrf-2(n-1)∂_rf-a·∂_rf+(∂_rf)^2-b+2(n-1)f≥ 0 (resp.≤ 0)
With the ansatz f_+(r)=λ e^-μ r, f_+ is a subsolution if and only if
λ [-2μ^2+2μ(n-1)+2(n-1)]e^-μ r≥ b-λμ ae^-μ r+(λμ)^2e^-2μ r.
We know that there exist constants A,B≥0 such that |a|≤ A e^-r and |b|≤ B e^-δ r. Choose R≥ R_0.
Choose λ>max{C,[2(n-1)]^-1Be^-δ R}, where C>0 is the constant in the statement of the lemma. Now choose μ∈ (0,δ] so small that
λ e^-2μ R > C,
λ [-2μ^2+2μ(n-1)+2(n-1)]e^-μ R ≥ Be^-δ R+Aλμ e^-(μ+1)R
+μ^2λ^2e^-2μ R
Because μ<δ, we get for all r≥ R that
λ [-2μ^2+2μ(n-1)+2(n-1)]e^-μ r ≥ B e^-δ r+Aλμ e^-(μ+1) r+(λμ)^2e^-2μ r
≥ b-λμ ae^-μ r+(λμ)^2e^-2μ r
for all r≥ R. Thus for these choices of μ and λ, f_+ is a positive subsolution of (<ref>) on M∖ B_R with f_+(R)> C.
Similarly, with the ansatz f_-(r)=-λ e^-μ r, f_- is a supersolution if and only if
-λ [-2μ^2+2μ(n-1)+2(n-1)]e^-μ r≤ b+Aλμ e^-μ r+(λμ)^2e^-2μ r.
Let λ, μ and R be as before. Then,
-λ [-2μ^2+2μ(n-1)+2(n-1)]e^-μ r ≤ -B e^-δ r-ACμ e^-(μ+1) r
≤ b-λμ ae^-μ r+(λμ)^2e^-2μ r
for all r≥ R and f_- is a negative supersolution of (<ref>) on M∖ B_R with f_-(R)< -C.
Let f be the minimizer given by Theorem <ref>. Then, f∈ O(e^-2μ r) for some μ>0.
Let {Ω_i}_i∈ be an exhaustion of M by compact subsets with smooth boundary and let f_i be the solutions of the associated Dirichlet problem to (<ref>) provided by Proposition <ref>. By Lemma <ref>, |f_i|≤ C for some constant C>0.
Let f_+ be the positive subsolution and f_- the negative supersolution of (<ref>) of Lemma <ref> on M∖ B_R with ± f_±|_∂ B_R> C. We are done with the proof if we are able to show f_-≤ f≤ f_+. Because the f_i subconverge to f locally uniformly, it suffices to show f_-≤ f_i≤ f_+ for all i with B_R⊂Ω_i and ∂ B_R∩∂Ω_i=∅,
so that ∂ B_R is the inner boundary and ∂Ω_i is the outer boundary of the set Ω_i∖ B_R on which all functions f_i and f_± are defined.
We consider the inequality f_i≤ f_+, the other one is completely analogous.
Assume that there exists an i for which the inequality f_i≤ f_+ fails to hold. We know that f_i=0<f_+ on ∂Ω_i and f_i≤ C<f_+ on ∂ B_R, so that the inequality fails in the interior. Certainly, we have f_++C>C≥ f_i on Ω_i∖ B_R. Therefore, there exists a smallest C_0>0 for which the inequality f_++C_0≥ f_i holds, and a point x in the interior of the compact set Ω_i∖ B_R such that f_+(x)+C_0=f_i(x). At this point, we necessarily have ∇ f_+(x)=∇ f_i(x) and Δ f_i(x)≥Δ f_+(x). However, because f_i is a solution and f_+ a subsolution, we get
0 =2Δ f_i(x)+|∇ f_i(x)|^2-(x)-(n-1)n+2(n-1)f_i(x)
≥ 2Δ f_+(x)+|∇ f_+(x)|^2-(x)-(n-1)n+2(n-1)(f_+(x)+C_0)
≥ 2(n-1)C_0,
which contradicts C_0>0.
Recall that under the assumptions we made at the beginning of this section, (M,g) is asymptotic to (M,) of order δ>n-1/2. Without loss of generality, we may assume for the remainder of this subsection that δ<n-1/2+1/2√((n+3)(n-1)).
Let f be the minimizer given by Theorem <ref>.
Then, f∈ C^k,α_δ(M).
By substituting u+1=e^-f/2, equation (<ref>) transforms to
4Δ u=-(_g +(n-1)n)u--(n-1)n-H(u),
where
H(u)=2(n-1)log((u+1)^2)(u+1).
Note that for every ϵ>0, there is a constant C_ϵ such that
| H(u) |
≤ C_ϵ(|u| + |u|^1+ϵ).
By Lemma <ref>, f=O(e^-μ r) for some μ∈ (0,δ). Thus, we also get u=O(e^-μ r), so that u∈ C^0_μ (and hence also H(u)∈ C^0_μ).
Choose η∈ (0,μ) and sufficiently large p∈ (n,∞) such that η+n-1/p<μ. Then, u∈ L^p_η. Because _g +(n-1)n∈ O (e^-δ r) with δ>η and H(u)=O(e^-μ r), we get that
(_g +(n-1)n)u--(n-1)n-H(u)∈ L^p_η,
and u∈ W^2,p_η⊂ C^1,α_η by elliptic regularity applied to (<ref>) and Sobolev embedding (see <cit.>). Hence, we also have f∈ C^1,α_η. Now we can use the equation on f to successively improve the decay of f.
Let us write equation (<ref>) as
2Δ f+2(n-1)f=-|∇ f|^2++n(n-1)
We have that |∇ f|^2∈ C^0,α_2η and +n(n-1)∈ C^k-2,α_δ. By <cit.>, Δ +(n-1):C^l,α_β→ C^l-2,α_β is an isomorphism for all 2≤ l≤ k and
n-1/2-1/2√((n+3)(n-1))< β< n-1/2+1/2√((n+3)(n-1)) .
Thus we get f∈ C^k,α_η_1 for 0<η_1≤min{2η,δ}, since we assume δ<n-1/2+1/2√((n+3)(n-1)). Using the above arguments again, we obtain f∈ C^k,α_η_2 for 0<η_2≤min{2η_1,δ}.
Now we can repeat this procedure again, and after a finite number of iterations, we obtain the desired result.
We now prove that the entropy depends analytically on the Riemannian metric.
The map
ℛ^k,α_δ(M) ∋
g↦ f_g ∈ C^k-2,α_δ(M),
associating to each metric g the unique minimizer in the definition of μ_AH,(g) is analytic. In particular, ℛ^k,α_δ(M) ∋ g↦μ_AH,(g) is analytic.
We consider the map Φ: ℛ^k,α_δ(M)× C^k,α_δ(M) → C^k-2,α_δ(M), given by
Φ(g,f) = 2Δ f+|∇ f|^2--n(n-1)+2(n-1)f.
The differential of Φ in the second argument is given by
D_(g,f)Φ(0,v)=2Δ v+2⟨∇ f,∇ v⟩+2(n-1)v.
By the implicit function theorem, we are done with the proof, if we are able to prove that the map
P_g,f = D_(g,f)Φ(0,.): C^k,α_δ(M) → C^k-2,α_δ(M)
is an isomorphism for each f∈ C^k,α_δ. In fact, an integration by parts argument with respect to the weighted measure e^-f quickly shows that P_g,f has trivial kernel.
It remains to show that it is surjective. For this purpose,
one shows by a quick computation that
P_g,f = e^f/2∘ Q_g,f∘ e^-f/2,
where
Q_g,f(v) = 2Δ v +2(n-1)v+(Δ f+1/2|∇ f|^2)v.
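Indeed, with the sign convention for Δ used throughout (so that Δ is nonnegative), a direct computation gives
e^f/2Δ(e^-f/2v)=Δ v+⟨∇ f,∇ v⟩-(1/4|∇ f|^2+1/2Δ f)v,
from which e^f/2Q_g,f(e^-f/2v)=2Δ v+2⟨∇ f,∇ v⟩+2(n-1)v=P_g,f(v) follows.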
By <cit.> and the assumptions on δ,
2Δ +2(n-1): C^k,α_δ(M) → C^k-2,α_δ(M)
is Fredholm of index zero. Because Δ f+1/2|∇ f|^2∈ O(e^-δ r) as r→∞, the operator Q_g,f has the same indicial roots as 2Δ +2(n-1). Thus by <cit.>,
Q_g,f: C^k,α_δ(M) → C^k-2,α_δ(M) is also a Fredholm operator of index zero. Because multiplication with e^± f is an isomorphism, P_g,f is also Fredholm of index zero. Since its kernel is zero, P_g,f is an isomorphism, as desired.
§.§ First and second variation
We have
D_gμ_AH,[h]=-∫_M ⟨+∇^2f_g+(n-1)g,h⟩ e^-f_g ,
where f_g is the minimizing function in the definition of μ_AH,(g).
If h∈ C^k,α_δ, then by Proposition <ref>, v=d/dt|_t=0f_g+th∈ C^k,α_δ(M). In order to avoid cumbersome notation, let us write f=f_g in the rest of the proof.
An approximation argument shows that the computation in the proof of
Proposition <ref>
is also valid for v∈ C^k,α_δ. Thus, because f is the minimizer,
D_g,f𝒲_AH,[0,v]=0,
and the chain rule implies
D_gμ_AH,[h]
= D_g,f𝒲_AH,[h,v]
= D_g,f𝒲_AH,[h,0]+D_g,f𝒲_AH,[0,v]
= D_g,f𝒲_AH,[h,0].
Using this and the first variation of the scalar curvature, we compute
D_gμ_AH,[h]
=d/dt𝒲_AH,(g+th,f) |_t=0
=d/dt∫_M
(
(|∇ f|^2+ +n(n-1) )e^-f - 2(n-1)((f+1)e^-f - 1 ) ) |_t=0
-d/dt_VR,(g) |_t=0
=∫_M
( -⟨ h,∇ f⊗∇ f⟩
+ Δ h + div(div h) - ⟨,h⟩) e^-f_g
+1/2∫_M
( ( |∇ f|^2+ +n(n-1) ) e^-f - 2(n-1)((f+1)e^-f) ) h
-
lim_η→∞∫_∂ B_R⟨div h-∇ h,ν⟩.
By integration by parts over a large ball, we have
∫_B_RΔ( h)e^-f = ∫_B_R hΔ(e^-f) - ∫_∂ B_R⟨∇ h,ν⟩ e^-f
- ∫_∂ B_R h⟨∇ f,ν⟩ e^-f
=-∫_B_R h(Δ f+|∇ f|^2) e^-f - ∫_∂ B_R⟨∇ h,ν⟩ e^-f
- ∫_∂ B_R h⟨∇ f,ν⟩ e^-f
and
∫_B_Rdiv(div h)e^-f =∫_B_R⟨ h,∇^2(e^-f)⟩
+ ∫_∂ B_R⟨div h+h(∇ f,·),ν⟩ e^-f
=∫_B_R⟨ h,∇ f⊗∇ f-∇^2f⟩ e^-f
+∫_∂ B_R⟨div h+h(∇ f,·),ν⟩ e^-f.
Because h∈ C^k,α_δ and f∈ C^k,α_δ for some δ>n-1/2, we have
| ∫_∂ B_R h⟨∇ f,ν⟩ e^-f|
+
| ∫_∂ B_R⟨ h(∇ f,·),ν⟩ e^-f|
→ 0
as well as
| ∫_∂ B_R⟨∇ h,ν⟩(1- e^-f)|
+
| ∫_∂ B_R⟨div h,ν⟩ (1 - e^-f) |
→ 0
as R→∞.
Therefore, all the boundary terms vanish and after summing up, we obtain
D_gμ_AH,[h]
=
-∫_M h(Δ f+|∇ f|^2) e^-f-∫_M ⟨∇^2f+,h⟩ e^-f
+1/2∫_M {[|∇ f|^2+ +n(n-1)]e^-f
-2(n-1)[(f+1)e^-f]} h
=
-∫_M ⟨∇^2f+,h⟩ e^-f
-(n-1)∫_M h e^-f
+1/2∫_M [-2Δ f-|∇ f|^2+ +n(n-1)-2(n-1)f]e^-f h.
Using the Euler–Lagrange equation (<ref>), the last integral on the right hand side vanishes and we obtain the first variation formula as stated.
It follows that critical points of the expander entropy are precisely the PE metrics. In what follows we will continue to use f to denote the minimizing function in the definition of μ_AH,(g).
A metric g∈ℛ_δ^k,α is a critical point of μ_AH, if and only if it is PE.
A critical point of μ_AH, satisfies by definition
+∇^2 f=-(n-1)g
or equivalently
-1/2· g=-∇^2f-1/2Δ f· g-(n-1)g+n/2(n-1)g
By the second Bianchi identity, the left-hand side is divergence free, so that
0 =div∇^2f+1/2div(Δ f· g)=
-Δ∇ f+1/2∇(Δ f)
=-Δ∇ f+1/2(Δ+)(∇ f)
=-(Δ+∇^2f+n-1/2)(∇ f)
=-(Δ+∇_∇ f+n-1/2)(∇ f).
Since ∇ f∈ H^1, taking the scalar product of the equation with ∇ f itself and integrating over M with respect to the weighted measure e^-f yields ∇ f=0. Thus, ∇^2f=0 which implies that =-(n-1)g, as desired.
We have
D_gμ_AH,[ℒ_Xg]=0
for any X∈ C^k-1,α_δ(TM).
Let f=f_g be the minimizer in the definition of μ_AH, and let u=e^-f/2-1. By the proof of Lemma <ref>,
μ_AH,(g) =𝒲_AH,(g,u)
=
∫_M ( 4|∇ u|^2+(_g +n(n-1))u^2
+2(_g +n(n-1))u+F(u) ) _g
+ S_(g),
where F(x)=2(n-1) ( (log((x+1)^2)-1)(x+1)^2+1 ).
By density, it suffices to show the lemma for X∈ C^k+1,α_δ(TM). Let φ_t be the diffeomorphisms generated by X and g_t=φ_t^*g. By diffeomorphism invariance of the Euler–Lagrange equation (<ref>), f_g_t=f_g∘φ_t, so that also u_g_t=u_g∘φ_t. Furthermore, Corollary <ref>
implies that S_(g_t) is constant along t. Using this together with u_g_t=u_g∘φ_t in (<ref>) implies that also μ_AH,(g) is constant in t. Differentiating at t=0 implies the desired result.
Until now, we have mentioned Ricci flow on AH manifolds but have yet to discuss it in detail. In the AH setting, it is natural to normalize the Ricci flow such that it is given by
d/dtg_t=-2_g_t-2(n-1)g_t.
Well-posedness of (<ref>) in the AH setting has been established by Bahuaud <cit.>.
While Bahuaud only considers
AH metrics with smooth compactifications, it seems that these results can be extended to AH metrics of class C^k,α_δ along the lines of <cit.>. Thus if g_0∈ℛ^k,α_δ(M,), we expect to have a unique solution g_t of (<ref>) starting at g_0 such that g_t∈ℛ^k,α_δ(M,).
Let g_t be a family of metrics in ℛ^k,α_δ(M,) that solves (<ref>). Then, the function t↦μ_AH,(g_t) is monotonically increasing. It is strictly monotonically increasing unless g_t≡ g is a (constant) family of PE metrics.
By Lemma <ref>, D_gμ_AH,[∇^2 f]=0 for f∈ C^k,α_δ. Now if {g_t}_t∈ I is a solution of the Ricci flow, we therefore get from Proposition <ref> that
d/dtμ_AH,(g_t)=2∫_M | _g_t+∇^2f_g_t+(n-1)g_t|_g_t^2 e^-f_g_t_g_t≥0.
Therefore, μ_AH, is monotonically increasing along the asymptotically hyperbolic Ricci flow. The equality case follows from Corollary <ref>.
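For the reader's convenience, we spell out the computation behind the above monotonicity formula: along the flow one has d/dtg_t=-2(_g_t+(n-1)g_t), which differs from -2(_g_t+∇^2f_g_t+(n-1)g_t) only by the Lie derivative term ℒ_∇ f_g_tg_t=2∇^2f_g_t, which does not contribute by Lemma <ref>; inserting this into the first variation formula of Proposition <ref> yields the displayed identity. In particular, the stationary points of the normalized flow are precisely the PE metrics, in accordance with Corollary <ref>.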
Next, we compute the second variation of the expander entropy at an Einstein metric.
In the following we will make use of the Lichnerowicz Laplacian Δ_L and the Einstein operator Δ_E, which act on symmetric tensor fields h by
Δ_L h_ij =Δ h_ij+_ikh_j^k+_jkh_i^k-2h^klR_iklj,
and
Δ_Eh_ij =Δ h_ij-2h^klR_iklj,
where we use abstract index notation here for clarity, and R_iklj is the Riemann tensor. Note that if the underlying manifold is Einstein with =λ g then we simply have Δ_Lh=(Δ_E+2λ)h.
For the next proposition, assume without loss of generality that we have n-1/2<δ<n-1/2+1/2√((n+3)(n-1)).
Let (M,g) be a complete PE manifold. Then the second variation of μ_AH, at g is given by
D^2_gμ_AH,[h,h]=
-1/2∫_M⟨Δ_E h,h⟩, if div h=0,
0, if h=ℒ_Xg for X∈ C^k+1,α_δ(TM).
Moreover, D^2_gμ_AH, is as a bilinear map diagonal with respect to the L^2-orthogonal decomposition
C^k,α_δ(S^2T^*M)=
ker(div)
⊕{ℒ_Xg| X∈ C^k+1,α_δ(TM)}.
We first recall the first variation of the Ricci tensor (see, for example, <cit.>) is given by
D_g[h]=1/2( Δ_L h+ℒ_divhg-∇^2 h ).
From this we compute for h∈ C^k,α_δ(S^2T^*M) with div h=0,
d^2/dt^2|_t=0μ_AH,(g+th) =-d/dt|_t=0∫_M⟨+∇^2f)+(n-1)g,h⟩ e^-f
=-∫_M⟨ D_g(h)+∇^2f'+(n-1)h,h⟩ e^-f
=-∫_M⟨1/2[Δ_Lh+ℒ_divhg-∇^2 h]+∇^2f'+(n-1)h,h⟩ e^-f
=-1/2∫_M⟨Δ_Lh+∇^2(2f'- h)+2(n-1)h,h⟩,
where f'=d/dt|_t=0(f_g+th), and here we also used that f≡ 0. Now,
let us compute f'. In order to do this, we differentiate the Euler–Lagrange equation
2Δ f+ |∇ f|^2--n(n-1)+2(n-1)f=0
which yields, because f is constant and div h=0,
0 =2Δ f'-D_g(h)+2(n-1)f'
=2(Δ+(n-1))(f')-[Δ h+δ(δ h)-⟨,h⟩]
=2(Δ+n-1)(f')-(Δ+n-1) h.
Recall that the operator Δ +n-1: C^k,α_δ(M)→ C^k-2,α_δ(M) is an isomorphism. This implies 2f'= h, so the second variation reduces to -1/2∫_M⟨Δ_Lh+2(n-1)h,h⟩. Since (M,g) is PE, we have Δ_L=Δ_E-2(n-1), and this yields
D^2_gμ_AH,[h,h]=-1/2∫_M⟨Δ_E h,h⟩,
which proves the variational formula for the case div h=0.
For the other case, let h=ℒ_Xg for some X∈ C^k+1,α_δ(TM) and k∈ C^k,α_δ(S^2T^*M) be arbitrary. Let φ_t be the diffeomorphism generated by X and consider the 2-parameter family of metrics g_t,s=φ_t^*(g+s· k). Then by Lemma <ref>, μ_AH,(g_t,s) only depends on s, so that
D^2_gμ_AH,[h,k]=D^2_gμ_AH,[k,h]=d^2/dsdt|_s,t=0μ_AH,(g_t,s)=0.
Together with the fact that Δ_E,g preserves the splitting (<ref>) (see <cit.>), the second variational formula as well as the orthogonality statement follow.
§ A LOCAL POSITIVE MASS THEOREM
From this point onward, we will take the reference metric ĝ to be a complete PE metric on the manifold M=M. We consider the functional g↦μ_AH,(g) with respect to the reference metric and on the space ℛ^k,α_δ(M,), with δ>n-1/2.
§.§ Local maxima of the entropy
In order to establish a criterion for local maximality of μ_AH, at , we need to take into account that our second variation has an infinite-dimensional kernel. However due to diffeomorphism invariance, a subspace of the kernel of finite codimension consists of Lie derivatives of the metric. We use this fact by first restricting our attention to the affine subspace
𝒮_={g∈ℛ^k,α_δ(M,)|div_g=0}.
Let Diff^k+1,α_δ(M) be the diffeomorphisms generated by the vector fields X∈ C^k+1,α_δ(TM).
There exists a C^k,α_δ-neighborhood 𝒰 of in the space of metrics such that any g∈𝒰 can be uniquely written as g=φ^*g for some g∈𝒰∩𝒮_ and a diffeomorphism φ∈Diff^k+1,α_δ(M) that is C^k+1,α_δ-close to the identity.
Consider the smooth map
Φ:𝒮_×Diff^k+1,α_δ(M) → C^k,α_δ(S^2_+M),
(g,φ) ↦φ^*g.
Its differential at (,id) exactly corresponds to the decomposition (<ref>) (with respect to g).
Therefore the assertion is an immediate consequence of the inverse function theorem.
The PE manifold (M,g) is called linearly stable if Δ_E≥0 in the L^2-sense. It is called integrable, if there exists a C^k,α_δ-neighborhood 𝒰 of g in the space of metrics such that
ℰ={g∈𝒰∩𝒮_g|_g=-(n-1)g}
is a smooth manifold with
T_ℰ=ker_L^2(Δ_E).
In order to prove a statement of local maximality for the entropy, we also need to control some error terms.
There exists a H^k-neighborhood of g in the space of metrics and a constant C>0 such that
| d^3/dt^3μ_AH,(g+th)|_t=0|≤ C h_C^k,α_δ h_H^1^2
for all g∈𝒰.
An analogous estimate was established in <cit.> for the expander entropy in the compact setting.
A large part of the estimate follows from tedious but standard computations. The nontrivial part is to establish estimates on first and second variations of the minimizing function f_g.
By differentiating (<ref>) twice, one sees that the defining equations for v=d/dt|_t=0f_g+th and w=d^2/dt^2|_t=0f_g+th are of the form
P_g,f_g(v) =(*) P_g,f_g(w)=(**)
for some right hand sides (*),(**). Here, P_g,f_g is the operator defined in (<ref>). In the proof of Proposition <ref>, we show that P_g,f_g:C^k,α_δ→ C^k-2,α_δ is an isomorphism. Because of that, we can carry out all necessary estimates exactly as in <cit.> to which we refer the reader for further details. The only difference is that one needs to replace the unweighted Hölder spaces by weighted ones and to use the trivial inclusion C^k,α_δ⊂ C^k,α_0 at the right places.
Let
h∈ T_𝒮_={h∈ C^k,α_δ(S^2T^*M)|div_h=0}.
For another PE metric g∈ℛ^k,α_δ(M,), we consider the g-dependent vector field X_g satisfying
h_g=h-ℒ_X_gg∈ker(div_g), c.f. (<ref>). Then,
ℒ_X_gg_H^1≤ C g-g_C^k,α_δh_H^1.
An analogous statement was shown in <cit.> for the compact setting and the strategy is the same here, with the only difference that the statement there is formulated for the metrically equivalent 1-form and the operators used in the proof map from and to 1-forms instead of vector fields.
With respect to the (g-dependent) decomposition
H^l(TM)={∇_g f| f∈ H^l+1(M)}⊕{X∈ H^l(TM) |div_gX=0}
for l∈{0,1,2},
the operator P:X↦ -div_gℒ_Xg decomposes as
P=2Δ_g⊕ (Δ_g+(n-1))
and both operators on the right hand side are isomorphisms from H^2 to L^2. Thus,
ℒ_X_gg_H^1 ≤ C X_g_H^2≤ C PX_g_L^2≤ Cdiv_gh_L^2
≤ C(div_g-div_)h_L^2≤ C g-g_C^1h_H^1,
where the last inequality follows from a Taylor expansion argument, see also <cit.> for more details. The estimate of the lemma follows from the trivial inclusion C^k,α_δ⊂ C^1.
Now we are ready to prove the main result of this subsection.
Let the PE manifold (M,) be linearly stable and integrable. Then it is a local maximum of μ_AH,.
Recall that by Lemma <ref>, any metric g close to g is isometric to a metric g∈𝒮_ close to . Therefore it suffices to prove
μ_AH,(g)≤μ_AH,(g) for all g∈𝒮_g∩𝒰
for a sufficiently small C^k,α_δ-neighborhood 𝒰 of .
In order to prove this, let ℰ be as in Definition <ref> and let N be the orthogonal complement of _L^2(Δ_E) in
T_S_={h∈ C^k,α_δ(S^2T^*M)|div_h=0}.
Since g is integrable, ℰ is a manifold and
T_S_=T_ℰ⊕ N
Thus we find by the implicit function theorem applied to the map
Ψ: ℰ× N→{h∈ C^k,α_δ(S^2T^*M)|div_ h=0}, (g,h)↦ g+h
a C^k,α_δ-neighborhood 𝒰 such that any g∈𝒮_g∩𝒰 can be uniquely written as g=g+h with g∈ℰ and h∈ N.
Let h_g be as in Lemma <ref>. Now Taylor expansion yields
μ_AH,(g)
= μ_AH,(g)+d/dt|_t=0μ_AH,(g+th)
+1/2d^2/dt^2|_t=0μ_AH,(g+th)
+1/2∫_0^1 (1-t)^2d^3/dt^3μ_AH,(g+th)
=μ_AH,()
-1/4∫_M ⟨Δ_E,gh_g,h_g⟩_g_g+1/2∫_0^1 (1-t)^2d^3/dt^3μ_AH,(g+th)
In the second equation, we used the fact that μ_AH is constant on the manifold ℰ of its critical points. Let us now look more carefully at the term coming from the second variation.
By <cit.>, see also <cit.>, we know that Δ_E,g preserves the splitting (<ref>) according to which we have h=h_g+ℒ_X_gg. Therefore,
∫_M ⟨Δ_E,gh_g,h_g⟩_g_g=
∫_M ⟨Δ_E,gh,h⟩_g_g
-∫_M ⟨Δ_E,gℒ_X_gg,ℒ_X_gg⟩_g_g.
By Lemma <ref>,
∫_M ⟨Δ_E,gℒ_X_gg,ℒ_X_gg⟩_g_g≤ Cℒ_X_gg_H^1≤ Cg-g_C^k,α_δh_H^1
and a Taylor expansion argument (see for example <cit.>) implies
∫_M ⟨Δ_E,gh,h⟩_g_g≥∫_M ⟨Δ_E,h,h⟩__- Cg-g_C^k,α_δh_H^1.
By <cit.> we have that Δ_E,≥1/4(n-1)^2 on N. Therefore, with a suitable choice of ϵ>0, we get
∫_M ⟨Δ_E,h,h⟩__ =(1-ϵ)∫_M ⟨Δ_E,h,h⟩__+ϵ∫_M ⟨Δ_E,h,h⟩__
≥1-ϵ/4(n-1)^2h_L^2^2+ϵ∇ h_L^2^2-ϵR_g_L^∞h_L^2^2
=ϵ∇ h_L^2^2
+[1-ϵ/4(n-1)^2-ϵR_g_L^∞]h_L^2^2
≥ Ch_H^1^2
for some constant C>0. Summarizing these estimates, we have thus shown
∫_M ⟨Δ_E,gh_g,h_g⟩_g_g≥ C(1-g-g_C^k,α_δ) h_H^1^2
which implies, together with Lemma <ref>,
μ_AH,(g)
=
μ_AH,()
-1/4∫_M ⟨Δ_E,gh_g,h_g⟩_g_g
+1/2∫_0^1 (1-t)^2d^3/dt^3μ_AH(g+th) dt
≤μ_AH()-C(1-g-g_C^k,α_δ) h_H^1^2+h_H^kh_H^1^2.
Thus we get the desired statement, provided that the chosen C^k,α_δ-neighborhood 𝒰 is small enough.
§.§ Positivity of mass
Similarly as in Subsection <ref>, we denote
𝒞={g∈ℛ^k,α_δ(M,)|_g=-n(n-1)}.
Observe that if g∈𝒞, the Euler–Lagrange equation (<ref>) implies that f_g≡0, so that we have the identity μ_AH,(g)=-m_VR,(g).
Let (M,g) be an asymptotically hyperbolic manifold of constant scalar curvature -n(n-1). Then g is a critical point of μ_AH, with respect to conformal variations. Moreover, the second variation in conformal directions is given by
D_g^2μ_AH,(v· g,v· g)=-∫_M Pv· v,
where the operator P is given by
(n-1)(Δ+n)(1-1/2Δ(Δ+(n-1))^-1).
If _g=-n(n-1), f_g≡ 0 so that the L^2-gradient of μ_AH,(g) is trace-free. This proves the first assertion. For the second variation formula, we get as in the proof of Theorem <ref> that
d^2/dt^2|_t=0μ_AH,(g+th)
=-∫_M⟨1/2[Δ_Lh+ℒ_divhg-∇^2 h]+∇^2f'+(n-1)h,h⟩ e^-f.
Continuing the computation for h=v· g yields
d^2/dt^2|_t=0μ_AH,((1+tv) g) =
-1/2∫_M⟨ (Δ v)· g+(2-n)∇^2v+2∇^2f'+2(n-1)v· g,v· g⟩
=-1/2∫_M (nΔ v+(n-2)Δ v-2Δ f'+2n(n-1)v)v
=-∫_M [(n-1)(Δ v+nv)v-Δ f'· v]
For computing f' we differentiate the Euler–Lagrange equation
2Δ f+ |∇ f|^2--n(n-1)+2(n-1)f=0
which yields, because f_g≡ 0 is constant and h=v· g,
0 =2Δ f'-'+2(n-1)f'
=2(Δ+(n-1))(f')-[Δ h+div(div h)-⟨,h⟩]
=2(Δ+n-1)(f')-(n-1)(Δ+n) v.
Inserting this equation in the above yields the desired formula.
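Explicitly, the last equation gives f'=(n-1)/2(Δ+(n-1))^-1(Δ+n)v and hence Δ f'=(n-1)/2Δ(Δ+(n-1))^-1(Δ+n)v; substituting this into the preceding expression for the second variation yields
d^2/dt^2|_t=0μ_AH,((1+tv)g)=-∫_M (n-1)(Δ+n)(1-1/2Δ(Δ+(n-1))^-1)v· v,
where we used that all operators involved are functions of Δ and therefore commute.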
Observe that
-∫_M Pv· v≤ -n-1/2∫_M (Δ v+nv)v
=-n-1/2∫_M (|∇ v|^2+n v^2)
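This inequality can be seen spectrally: since 1-1/2Δ(Δ+(n-1))^-1=(1/2Δ+(n-1))(Δ+(n-1))^-1 and (1/2λ+n-1)/(λ+n-1)≥1/2 for every λ≥ 0, the operator P satisfies P≥n-1/2(Δ+n) in the sense of quadratic forms, which gives the displayed estimate after integrating by parts.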
Let (M,) be a complete PE manifold. Then the following are equivalent:
(i) is a local maximiser of μ_AH,
(ii) For all metrics g sufficiently close to with _g+n(n-1) being nonnegative and integrable, we have m_VR,(g)≥ 0. Moreover, equality holds if and only if _g=-(n-1)g.
(iii) is a local minimizer of m_VR, on 𝒞
(iv) is a local maximiser of μ_AH, on 𝒞
For proving (i)⇒ (ii), let g be as in (ii) and let ω_g be the minimizing function in the definition of μ_AH,(g) through the functional 𝒲_AH,(g,ω). Then, because _g+n(n-1)≥ 0 and ω_g=e^-f_g/2≥0, we have
0=μ_AH,() ≥μ_AH,(g)
=
𝒲_AH,(g,ω_g)
=∫_M [4|∇ω_g|^2+(_g +n(n-1))ω_g^2]
+2(n-1)∫_M [(log(ω_g^2)-1)ω_g^2+1]-m_VR,g(g)
≥ -m_VR,(g).
Thus we have m_VR,(g)≥ 0. Moreover, we get that m_VR,(g)=0 immediately implies μ_AH,(g)=0. Therefore, g is another local maximum of μ_AH,. In particular, it is a critical point and _g=-(n-1)g follows from Corollary <ref>.
The implication (ii)⇒ (iii) is trivial. The implication (iii)⇒ (iv) follows immediately from the fact that μ_AH,(g)=-m_VR,(g) for g∈𝒞.
For proving (iv)⇒ (i), we show that every metric ∈𝒞 is a local maximum of μ_AH, in its conformal class. In fact, by Taylor expansion along the curve _t=+tv, t∈ [0,1] and using Proposition <ref> and Lemma <ref>, we obtain
μ_AH,((1+v))-μ_AH,() =
d/dt|_t=0μ_AH,(_t)+1/2d^2/dt^2|_t=0μ_AH,(_t)
+1/2∫_0^1(1-t)^2d^3/dt^3μ_AH,(_t)
≤ -n-1/4v_H^1^2+Cv_C^k,α_τv_H^1^2
≤ -(n-1/4-C v_C^k,α_τ)v_H^1^2
and the right hand side is nonpositive, provided that v is sufficiently small. Now, let g be an arbitrary metric sufficiently close to .
By Proposition <ref>, there exists a unique metric ∈𝒞∩ [g] close to g. By the assumption in (iv) and the above computation,
μ_AH,(g)≤μ_AH,()≤μ_AH,(),
which proves (i).
The above proof shows that the equivalent assertions in Theorem <ref> imply the bound
m_VR,g(g)≥inf_ω-1∈ C^∞_c(M)∫_M {4|∇ω|^2+(_g +n(n-1))ω^2+F(ω)
}_g
for all g close to with _g +n(n-1) being integrable and nonnegative. Here, F denotes the nonnegative function F(x)=2(n-1)[(log(x^2)-1)x^2+1] which vanishes exactly for x=± 1.
Let (M,) be a complete PE manifold which satisfies the equivalent assertions of Theorem <ref>. Then (M,) is scalar curvature rigid under a volume constraint in the following sense:
Any metric g on M sufficiently close to with _g≥_, for which there exists a compact set K⊂ M with
g-|_M∖ K≡ 0, (K,g)=(K,)
is isometric to .
By (<ref>), m_VR,(g)=0. By Theorem <ref> (ii), (M,g) is a complete PE manifold. Because g and are both Einstein and agree outside a compact set, they have to be isometric, see <cit.>.
Hyperbolic space is well-known to satisfy the spectral inequality Δ_E≥1/4(n-1)^2, so it is linearly stable and integrable. By Theorem <ref>, it is a local maximum of the entropy and by Theorem <ref>, it is a local minimizer of the mass.
|
http://arxiv.org/abs/2307.07459v1 | 20230714163833 | The environments of hyper-compact H II regions. I. G345.0061+01.794 B | [
"Toktarkhan Komesh",
"Guido Garay",
"Aruzhan Omar",
"Robert Estalella",
"Zhandos Assembay",
"Dalei Li",
"Andrés Guzmán",
"Jarken Esimbek",
"Jiasheng Huang",
"Yuxin He",
"Nazgul Alimgazinova",
"Meiramgul Kyzgarina",
"Nurman Zhumabay",
"Arailym Manapbayeva"
] | astro-ph.GA | [
"astro-ph.GA"
] |
We report high angular resolution observations, made with the Atacama Large Millimeter Array in band 6, of high excitation molecular lines of CH_3CN and SO_2 and of the H29α radio recombination line towards the G345.0061+01.794 B HC H II region, in order to investigate the physical and kinematical characteristics of its surroundings. Emission was detected in all observed components of the J=14→13 rotational ladder of CH_3CN and in the 30_4,26-30_3,27 and 32_4,28-32_3,29 lines of SO_2. The peak of the velocity integrated molecular emission is located ∼0.4 arcsec northwest of the peak of the continuum emission.
The first-order moment images and channel maps show a velocity gradient, of 1.1 km s^-1 arcsec^-1, across the source, and a distinctive spot of blueshifted emission towards the peak of the zero-order moment. We derived that the rotational temperature decreases from 230 Kelvin at the peak position to 137 Kelvin at its edge, indicating that our molecular observations are probing a hot molecular core that is internally excited. The emission in the H29α line arises from a region of 0.65 arcsec in size, whose peak is coincident with that of the dust continuum, has a center velocity of -18.1±0.9 km s^-1 and a width (FWHM) of 33.7±2.3 km s^-1.
We modeled the kinematical characteristics of the "central blue spot" feature as due to infalling motions, deriving a central mass of 126.0±8.7 M_⊙.
Our observations indicate that this HC H II region is surrounded by a compact structure of hot molecular gas, which is rotating and infalling toward a central mass, that is most likely confining the ionized region.
ISM: molecules
—ISM: clouds
—ISM: cores
— stars: formation
—stars: massive
—ISM: kinematics and dynamics
§ INTRODUCTION
The formation of high-mass stars begins inside dense and massive molecular cores where high-mass protostellar objects accrete at rates between 10^-5 and 10^-3 M_⊙ yr^-1 <cit.>. These objects finish their Kelvin–Helmholtz (K-H) contraction very rapidly and reach the main sequence <cit.>. At this point, the star radiates extreme ultraviolet (UV) photons that ionize its surroundings, producing very small regions of ionized gas, observationally characterized by sizes ≤ 0.03 pc, densities n_e> 10^6 cm^-3, and emission measures > 10^8 pc cm^-6 <cit.>. These hyper-compact (HC) H II regions are thought to signpost an early stage of the evolutionary path of a High-Mass Young Stellar Object (HMYSO).
Theoretical calculations show that almost half of the mass of O-type stars is accreted after the K-H contraction and the onset of ionizing radiation <cit.>. How high-mass stars keep accreting despite the onset of the ionizing radiation is not well established. Theoretical works have shown that under steady spherical accretion, radiation pressure inhibits the growth of the protostars. An effective way to circumvent the radiation and ionized gas pressure is accretion from a disk, allowing the accreting material to reach the young high-mass stars much more easily by flowing inward, mainly through the plane perpendicular to the angular momentum vector of the system <cit.>. Accretion through a disk may only choke the ionized region near the disk plane, allowing for H II region development in the polar regions <cit.>. In this scenario, an HC H II region should consist of an ionized biconical cavity confined by a rotating and contracting hot molecular core.
How does accretion proceed after the onset of the ionizing radiation? How does the envelope material avoid being ionized and blown away by its own pressure? To answer these questions we undertook ALMA Band 6 observations towards a set of luminous embedded HMYSOs associated with HC H II regions in order to simultaneously observe molecular emission in highly excited transitions of CH_3CN and SO_2 and emission from the ionized gas in the H29α hydrogen recombination line. These two molecules have been used to trace velocity gradients, indicative of rotation, towards hot molecular cores around luminous young high-mass stars in several cases <cit.>.
Our molecular observations are intended to assess whether or not HC H II regions are associated with rotating hot molecular cores on scales of 3000 AU, as well as to detect inflow motions from the surrounding gas. Our goal is to find evidence of disk accretion and to settle the question as to whether or not accretion onto the HMYSO is maintained after stellar contraction and UV photon injection.
In this work, we present the observations towards the HC H II region G345.0061+01.794 B <cit.> associated with IRAS 16533-4009. The distance to the source is 1.7 kpc <cit.>.
The Spitzer-GLIMPSE survey shows that it is associated with a bright compact MIR source prominent in the 4.5 μm band.
The paper is organized as follows: in §2 we describe the observations performed with the Atacama Large Millimeter Array (ALMA); in §3 we present the observational results; in §4 we discuss the analysis of the data, including the physical relationship between the hot molecular core and HC H II region; and in §5 we present a summary of the main points addressed in this paper.
§ OBSERVATIONS
We observed, using ALMA in Band 6 (256.3-259.6 GHz), dust continuum and molecular line emission towards the HC H II region G345.01 B. The observations were carried out, as part of ALMA Cycle 3, on 21 May 2016, using the 12-m array. The ALMA field of view at this wavelength is ∼22 arcsec, defined as the FWHM of the primary beam. The phase center of the array was (RA, Dec) (J2000) = (16^h56^m47.59^s, -40^∘14^'25.8^'').
We observed 4 spectral windows in dual polarization mode. The first window was centered at the frequency of 256.302035 GHz, and has a bandwidth of 1875.00 MHz and a resolution of 1.129 MHz. This setup was chosen to map the H29α radio recombination line (RRL) emission from the HC H II region.
The second and third windows were centered, respectively, at the frequencies of 259.599448 and 258.388716 GHz, each with 234.38 MHz bandwidths and 488.281 kHz (∼0.564 km s^-1) channels. These two setups were chosen to observe the emission from the purported hot core in two high excitation temperature lines of SO_2. The fourth window, centered at the frequency of 257.325000 GHz, has a bandwidth of 468.75 MHz and a resolution of 488.281 kHz. This setup was chosen to observe the emission of CH_3CN, a good temperature probe of both the large scale diffuse gas and small scale dense gas, in the J=14-13 ladder. <cit.> point out that, based on the analysis of IRAS 16547-4247, to detect emission from the inner regions of the rotating core it is important to use molecular transitions with high upper energy levels (> 300 Kelvin). The selected transitions, 30_4,26-30_3,27 and 32_4,28-32_3,29, have upper level temperatures of 471 and 531 Kelvin, respectively, and those of the CH_3CN lines in the 14-13 ladder range between 92 and 670 Kelvin.
J1427-4206 was used as bandpass calibrator, J1717-3342 was used as phase calibrator, and J1617-5848 as flux calibrators.
Table <ref> lists the parameters of each spectral window, the synthesized beams and the rms noise achieved. The integration time on source was 35 minutes.
Calibration and reduction of these data were done using the Common Astronomy Software Applications (CASA) package <cit.>.
§ RESULTS
§.§ Molecular emission
§.§.§ CH_3CN
Figure <ref> presents the spectrum of the emission in the J=14→13 rotational transition of CH_3CN integrated over a region of 0.5 arcsec in size, centered on the G345.01 B HC H II region. This rotational transition consists of 14 K components (K=0, 1, ...13; K being the projection of the total angular momentum of the molecule about the principal rotation axis of the molecule) of which ten lie within the observed spectral window (red dotted lines).
Their line frequencies, upper state energy levels and line strengths are given in Table <ref>.
Figure <ref> displays images of the zero-order moment (upper panels) and first-order moment (lower panels) of the emission in the K=2, 3, 4, 6, 7 and 8 components of the 14-13 ladder of CH_3CN. Moments of the K=0, 1, 5, 9 components are not shown since they are blended with each other or with other molecular lines (see Fig. <ref>).
Superimposed are contours of the continuum emission. The peak of the velocity integrated intensity emission is located ∼0.4 arcsec northwest of the peak of the continuum emission. The first-order moment images show a velocity gradient from roughly east to west with average velocities
preferentially blueshifted in the West side and redshifted in the East side, and a spot of blueshifted emission towards the peak of the zero-order moment. The blue spot feature is present in all K components shown in Figure <ref>, confirming that its detection is a robust result.
Fig. <ref> presents channel maps of the emission in the K=3 component, which clearly exhibits the shift in velocity, from
blueshifted velocities in the West to redshifted velocities in the East.
In addition to the blue spot feature, the moment 1 images show a change in velocity across the source roughly along an east-west direction, with blueshifted velocities in the west and redshifted in the east. This is illustrated in Fig. <ref> which plots position-velocity diagrams of the emission in the K=0, 1, 2, 3 transitions
along a line with P.A.=255° passing through (RA, Dec) (J2000) = (16^h56^m47.59^s, -40^∘14^'26.0^'').
There is a clear change in velocity across the source of 4.3 km s^-1 over a region of 3.8 arcsec (equivalent to 135 km s^-1 pc^-1 at the distance of 1.7 kpc). If this velocity gradient is due to gravitationally bound rotation, it implies a dynamical mass within a 0.016 pc radius of 66 M_⊙, roughly half the mass of the central object as derived in Sec. 4.2. We conclude that the hot molecular gas is bound and rotating, and infalling towards the central object.
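The numbers above can be reproduced with a few lines. The following sketch (ours, for illustration; not code from this work) assumes the simple estimator M_dyn = v^2 R/G, with the full 4.3 km s^-1 velocity change taken as the rotation velocity at half the 3.8 arcsec extent:

G, pc, Msun = 6.674e-11, 3.086e16, 1.989e30   # SI units
d_pc   = 1.7e3                        # distance [pc]
extent = 3.8 * d_pc / 206265.0        # 3.8 arcsec in pc (~0.031 pc)
dv     = 4.3e3                        # velocity change [m/s]
R      = 0.5 * extent * pc            # ~0.016 pc, in metres
print("gradient ~ %.0f km/s per pc" % (dv / 1e3 / extent))   # ~137, cf. 135 quoted above
print("M_dyn ~ %.0f Msun" % (dv**2 * R / (G * Msun)))        # ~67, close to the 66 Msun quoted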
§.§.§ SO_2
In addition to the high excitation lines of SO_2 observed on purpose in this work, the spectral window of the RRL encompasses four other transitions of SO_2, all of which have low excitation temperatures. The transitions and their parameters are listed in Table <ref>; col.(2) gives the frequency, col.(3) the energy of the upper state, col.(4) the Einstein A coefficient, and col.(5) the statistical weight of the upper state.
Figure <ref> shows images of the velocity integrated intensity (upper panels) and intensity-weighted velocity (moment 1; lower panels) in all six observed SO_2 lines, in order of increasing excitation temperature.
The peak position of the integrated intensity in SO_2 is similar to that in the CH_3CN lines.
The blue spot signature is also present in the moment one maps.
§.§ Ionized gas: H29α RRL emission
Since at Band 6 frequencies the continuum is likely to be dominated by dust emission, hydrogen recombination lines become the most direct way to trace the ionized gas. Figure <ref> (left panel) shows an image of the velocity integrated H29α emission along with the dust continuum contours. The velocity range of integration is from -44 to 8 km s^-1. The position of the peak in the velocity integrated line emission, of 18.7 Jy beam^-1 km s^-1, is coincident with that of the dust continuum.
A Gaussian fit to the observed H29α brightness distribution indicates that the HC H II region has a deconvolved angular size (FWHM) θ_s=√(0.75 arcsec×0.56 arcsec)≈0.65 arcsec, corresponding to the geometrical mean of the deconvolved major and minor axes. At the distance of 1.7 kpc this implies a diameter of 0.0054 pc.
Figure <ref> (right panel) shows a spectrum of the H29α RRL emission integrated over the source. A Gaussian fit to the line profile gives a linewidth of 33.7±2.3 km s^-1 and a line center velocity of -18.1±0.9 km s^-1.
For optically thin ionized gas and local thermodynamic equilibrium (LTE) conditions, the electron temperature T^*_e can be derived from the expression <cit.>:
T^*_e=[(6985/α(ν,T_e))(Δ V_H29α/km s^-1)^-1(S_ff/S_H29α) (ν/GHz)^1.1
×(1+N(He^+)/N(H^+))^-1]^0.87,
where S_ff is the free-free continuum flux density, S_H29α is the H29α peak flux density,
α(ν,T_e) ∼ 1 is a slowly varying function <cit.>, and N(He^+)/N(H^+) is the He^+ to H^+ abundance ratio. The free-free flux density, S_ff, cannot be derived from the continuum emission at 256 GHz because of the contribution of dust emission at this frequency. We estimate it using the parameters of the UC H II region (EM and size) derived by <cit.>, from a fit to the observed radio continuum spectra at lower frequencies, obtaining a value of S_ff=327 mJy.
Using the observed values of S_H29α=883 mJy and Δ V_H29α=33.7±2.3 km s^-1, and adopting a value of 0.096 for the He^+ to H^+ abundance ratio (<cit.>),
we get an electron temperature T^*_e=8100±485 Kelvin.
Further parameters of the region of ionized gas can be computed using the equations presented in <cit.>. Assuming that the H II region is spherical and homogeneous, and using the values of the continuum flux density at 256 GHz (327 mJy), the angular size (0.65”), electron temperature (8100 Kelvin) and distance (1.7 kpc), we determined an electron density of 2.4×10^5 cm^-3, an emission measure of 4.7×10^8 pc cm^-6, a mass of ionized gas of 1.5×10^-3 M_⊙, and that the number of ionizing photons required to excite the H II region is 1.4×10^47 s^-1.
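As a sanity check, the electron temperature and the physical size quoted above follow directly from the expression and values given in the text; a short sketch (ours, not the authors' reduction scripts), taking α(ν,T_e)=1:

S_ff, S_line = 0.327, 0.883           # free-free and H29a peak flux densities [Jy]
dV, nu, y    = 33.7, 256.302, 0.096   # FWHM [km/s], frequency [GHz], N(He+)/N(H+)
Te = (6985.0 / dV * (S_ff / S_line) * nu**1.1 / (1.0 + y))**0.87
print("T_e* ~ %.0f K" % Te)                                   # ~8100 K
print("diameter ~ %.4f pc" % (0.65 * 1.7e3 / 206265.0))       # ~0.0054 pc for 0.65 arcsec at 1.7 kpc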
§ DISCUSSION
§.§ Rotational Temperature of CH_3CN
We used the population-diagram method <cit.> to obtain the rotation temperature (T_rot) assuming LTE and low optical depths. The column density in the (J, K) state, N_JK, is given by
(N_JK/cm^-2)= 1.67× 10^14 (g_JK/S(I, K)) (J/(J^2-K^2)) (ν_0/GHz)^-1 (μ/debye)^-2 (∫ T_B dv/K km s^-1)
where g_JK is the statistical weight of the state,
S(I, K) is the spin weight degeneracy factor (see <cit.>),
ν_0 is the frequency of the (J , K) → (J-1, K) transition, μ = 3.91 debye, and T_B is the brightness temperature.
N_JK and the total column density of , N_CH_3CN, are related, through the Boltzmann equation, by the expression
ln(N_JK/g_JK)=ln[N_CH_3CN/Q_int(T_rot)] -E_JK/kT_rot
where T_rot is the rotational temperature, E_JK is the energy level of the (J, K) state, and Q_int is the partition function.
If more than two transitions are observed, the rotational temperature can be derived from a least-squares linear fit of the ln(N_JK/g_JK) versus E_JK/k data.
The total column density can be also derived once the partition function is known, which for is given by
<cit.>:
Q_int(T_rot)=3.89T_rot^1.5/(1-e^-524.8/T_rot)^2
where T_rot is in Kelvin.
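Schematically, the fit reduces to a straight line in the ln(N_JK/g_JK) versus E_JK/k plane. The sketch below is illustrative only: the energies and intensities are placeholders (spanning the 92-670 Kelvin range quoted in §2), not the measured values:

import numpy as np

E_u      = np.array([92., 150., 250., 400., 670.])               # E_JK/k [K], placeholders
N_over_g = np.array([2.0e12, 1.6e12, 1.1e12, 0.55e12, 0.2e12])   # N_JK/g_JK [cm^-2], placeholders
slope, intercept = np.polyfit(E_u, np.log(N_over_g), 1)
T_rot = -1.0 / slope                                              # rotational temperature [K]
Q = 3.89 * T_rot**1.5 / (1.0 - np.exp(-524.8 / T_rot))**2         # CH3CN partition function above
N_tot = np.exp(intercept) * Q                                     # total CH3CN column density
print("T_rot ~ %.0f K, N(CH3CN) ~ %.2e cm^-2" % (T_rot, N_tot))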
Figure <ref> displays rotational diagrams of the CH_3CN emission at the peak position (blue spot) and from five half-rings (since the emission is not radially symmetric) at different radial distances from the peak. The rings are centered at the peak position and have widths of 0.15 arcsec. The inner ring (R1) has an inner radius of 0.33 arcsec. The rotational temperature decreases outwards with distance from the blue spot (exciting star), being 230 Kelvin at the peak position and 137 Kelvin at the edge of the molecular (CH_3CN) structure (∼0.01 pc). Figure <ref> plots the computed rotational temperature versus the projected distance from the blue spot. A power law fit to the observed dependence gives T_rot = 126 r^-0.44.
This result suggests that the molecular gas is heated via collisional excitation with hot dust, which in turn is heated by the absorption of radiation emitted by the central star <cit.>. Using expression (11) in <cit.>, we infer that the power-law index of dust emissivity at far infrared wavelengths, β, is 0.55 and that the luminosity of the central object is 1.1×10^4 L_⊙. This luminosity is in good agreement with the luminosity of the star ionizing the HC H II region as determined by <cit.> from radio continuum observations.
§.§ Rotational Temperature of SO_2
From the emission in the four low excitation lines of SO_2 we estimate the rotational temperature of the envelope using the standard rotational diagram analysis <cit.>.
Figure <ref> plots ln(γ_u W/g_u) versus E_u, where W is the velocity integrated intensity,
γ_u is equal to 8π k ν^2/(h c^3 A_ul),
k is the Boltzmann constant, ν is the transition
frequency, h is the Plank constant, c is the speed of light, and A_ul is the Einstein A-coefficient.
A linear fit to the observed trend implies a temperature of 38.4 Kelvin.
§.§ Infall motions
The blue spot feature mentioned above (see §<ref>) is a clear signature of infall <cit.>.
The central region of the first-order map appears blueshifted because the blueshifted emission, coming from gas close to the central stellar object and behind it, is stronger than the redshifted emission from gas farther away and in front of the stellar object.
This asymmetry is produced when the optical depth is high enough so that at a given line-of-sight velocity the gas facing the observer hides the emission from the gas behind it <cit.>.
At larger distances from the center, the integrated intensity decreases, the blue and redshifted intensities become similar,
and the intensity-weighted mean velocity approaches the systemic velocity of the cloud. Therefore, the first-order moment
of an infalling envelope is characterized by a compact spot of blueshifted emission toward the position of the zeroth-order
moment peak.
In order to determine the infall velocity, central mass and infall radius we use the hallmark model of <cit.>.
The value of the first-order moment as a function of the angular distance was obtained for the unblended K components of the 14-13 transition of CH_3CN by averaging the first-order moment in concentric rings of width 0.05 arcsec centered on the average position of the peak of the blue spot, α(J2000)=16^h56^m47^s.54, δ(J2000)=-40°14^'25^''.879. The first-order moment profiles of the different components are presented in Figure <ref>. They seem to fall into two separate groups, with the μ values for the K=2, 3, 4 components being higher than those of the K=6, 7, 8 components, especially near the peak position. However, the K=7 component follows the K=2, 3, 4 components beyond 0.5 arcsec from the peak.
The difference is likely due to the higher K lines probing the hotter gas close to the HC H II region.
The best fit is obtained for an infall radius much larger than the beam size, an ambient gas velocity of -12.46±0.16 km s^-1, and a central mass of 126.0±8.7 M_⊙.
As we discussed above, the position of the central blue spot is located on the SW side of the HC H II region.
Regarding the first-order moment map, <cit.> explores how the central-blue-spot hallmark of an infalling core is modified by the presence of rotation.
He finds that rotation makes the central-blue-spot even bluer and moves it off the center toward the half of the core where rotation tends to shift
velocities to the blue.
The clear detection of the "central blue spot" signature in the G345.0061+01.794 B HC H II region
indicates that infall motions play a fundamental role in the gas kinematics of this source.
§ SUMMARY
We carried out high angular resolution observations, using ALMA, of emission in highly excited molecular lines of CH_3CN and SO_2 and in the H29α radio recombination line towards the G345.0061+01.794 B HC H II region. The main results and conclusions are summarized as follows:
* Emission was detected in all ten observed K components of the J=14→13 rotational ladder of CH_3CN and in the 30_4,26-30_3,27 and 32_4,28-32_3,29 lines of SO_2. The peak of the velocity integrated molecular line intensity is located slightly NW (about 0.4 arcsec) of the peak of the continuum emission.
* The first-order moment images of the molecular emission show a central spot of blueshifted emission, with respect to the systemic velocity of the cloud, located at the peak of the zero-order moment, seen in all K components of CH_3CN and in the SO_2 lines.
* Rotational diagrams of the emission in the methyl cyanide lines show that the rotational temperature has a peak value of 230 Kelvin at the position of the blue spot and decreases outwards, reaching a value of 137 Kelvin at the edge (∼1 arcsec) of the molecular structure, indicating that our observations are probing a hot molecular core that is internally excited.
In addition, from the emission in the four low excitation lines of SO_2 we estimate a rotational temperature for the envelope of 38.4 Kelvin.
* The first-order moment images and channel maps of the molecular emission also show a velocity gradient from roughly east to west with average velocities preferentially blueshifted in the West side and redshifted in the East side. The change in velocity amounts to 4.3 km s^-1 over a region of 3.8 arcsec (equivalent to 135 km s^-1 pc^-1 at the distance of 1.7 kpc).
* Emission was detected in the H29α line, having a line center velocity of -18.1±0.9 km s^-1 and a linewidth (FWHM) of 33.7±2.3 km s^-1. The position of the peak in the velocity integrated emission is coincident with that of the dust continuum. The radio recombination line observations indicate that the ionized gas emission arises from a region having a radius of 0.0027 pc, a mass of ionized gas of 1.5×10^-3 M_⊙, an electron temperature of 8100±485 Kelvin, an emission measure of 4.7×10^8 pc cm^-6, and an electron density of 2.4×10^5 cm^-3.
* We modeled the kinematical characteristics of the "central blue spot" feature as due to infalling motions, deriving a central mass of 126.0±8.7 M_⊙.
We conclude that this HC H II region is surrounded by a compact structure of hot molecular gas, which is rotating and infalling toward a central mass of 126.0±8.7 M_⊙, that is most likely confining the region of ionized gas.
§ ACKNOWLEDGEMENTS
This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant Nos. AP13067768 and AP14870504) and sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. GG acknowledges support from ANID BASAL project FB210003.
RE acknowledges partial financial support from the grants PID2020-117710GB-I00 and CEX2019-000918-M funded by MCIN/ AEI /10.13039/501100011033.
JE acknowledges support from the National Key R&D Program of China under grant No.2022YFA1603103 and the Regional Collaborative Innovation Project of Xinjiang Uyghur Autonomous Region grant 2022E01050.
DL acknowledges support from National Natural Science Foundation of China (NSFC) through grant No. 12173075 and support from Youth Innovation Promotion Association CAS.
YH acknowledges support from the CAS "Light of West China" Program under grant No. 2020-XBQNXZ-017 and the Xinjiang Key Laboratory of Radio Astrophysics under grant No. 2023D04033.
§ DATA AVAILABILITY
The data underlying this article are available in the article.
|
http://arxiv.org/abs/2307.04558v1 | 20230710134704 | On an uncertainty result by Donoho and Stark | [
"Oriol Baeza-Guasch"
] | math.FA | [
"math.FA",
"math.CA"
] |
On an uncertainty result by Donoho and Stark
Oriol Baeza Guasch
Universitat Politècnica de Catalunya
Abstract. In <cit.>, Donoho and Stark study a manifestation of the uncertainty principle in signal recovery. They conjecture that, for a function with support of bounded size T, the maximum concentration of its Fourier transform in the low frequencies [-W/2, W/2] is achieved when the support of the function is an interval. In <cit.>, they are able to prove a positive result under the extra assumption that WT≤ 0.8, using an inequality with symmetric rearrangements. In our work, we present a more elementary proof of their result, while also relaxing the required bound to WT ≤ 1.
Finally, we also study a discrete version of the problem, by considering complex polynomials and their concentration on subsets of the unit circle, and we prove an analogous problem. Lastly, this result is used to improve an inequality by Montgomery, appearing in <cit.>.
§ INTRODUCTION
To state the original conjecture by Donoho and Stark, we must introduce the following operators, defined for any f∈ L^2(ℝ). First, the time-limiting operator for a given measurable subset
(P_ f)(t) = f(t) t∈,
0 otherwise
and second the frequency-limiting operator for a given measurable subset
(P_ f)(t) = ∫_ e^2π i w tf̂(w) dw
where f̂ is the Fourier transform of f, with the convention
f̂ (w) =∫_ℝ f(t) e^-2π i w t dt
Then, their conjecture can be stated as follows.
The supremum sup ||P_ P_||, where is an interval and ranges over measurable subsets with fixed measure, is attained when is also an interval.
In <cit.>, they are able to prove a positive result under the extra assumption given by the bound WT ≤ 0.8, where these quantities refer to the sizes of the subsets: W = || and T = ||.
However, we will rather work with a symmetric formulation of the statement, where we consider sup ||P_ P_ || instead. This will be more convenient for our work, given the interpretation that will be presented later, while being equivalent to the original conjecture.
The norms ||P_ P_ || = ||P_ P_ || are equal. In particular, both formulations of the conjecture are equivalent.
The proof is immediate after the observation that each of the operators P_ and P_ is self-adjoint, so the adjoint operator of P_ P_ is precisely P_ P_. As the norm of an operator on a Hilbert space is the same as that of its adjoint, the conclusion follows.
Next, we will give an interpretation of this symmetric formulation that motivates the result that we will rather prove. For that, we introduce the concentration operator, for a measurable set ⊆ℝ and f∈ L^2(ℝ) we define
c_ (f) = ∫_ |f(t)|^2 dt / ∫_ℝ |f(t)|^2 dt
Now, because ||P_|| =1 it follows straightforwardly that
||P_ P_|| = sup_f∈ L^2 : f = P_ f||P_ f||/|| f|| = sup_f∈ L^2: f = P_ f∫_ |f(t)|^2 dt ∫_ℝ |f(t)|^2 dt = sup_f∈ L^2: f = P_ f c_(f)
which we might interpret as calculating the concentration for a function f in the measurable set , and restricting our attention to functions whose Fourier transform has support in .
Therefore, altogether the main result that we will prove in this work is
Let W and T be real numbers such that WT ≤ 1. Then, for all measurable subsets of the real numbers with size || = T, and functions f whose Fourier transform has support f̂ = [-W/2,W/2], the following inequality is true
∫_ |f(t)|^2 dt ≤∫_-T/2^T/2| g(t) |^2 dt
where g is the function given by the inverse Fourier transform of |f̂|. In particular, by denoting 𝕀 = [-T/2,T/2], it holds c_(f) ≤ c_𝕀 (g).
By the previous reasoning, this improves the result by Donoho and Stark, since it relaxes the required bound.
On the other hand, the difference in interpretation for the original conjecture is that there the concentration is computed in the frequency domain (rather than the temporal domain ). Nonetheless, there is a similarity in the approaches for the proof: to modify the function f to obtain a function with higher concentration, but with support on a single interval of the same size. In particular, in their proof the improvement in concentration is given by |f|^* the symmetric decreasing rearrangement, which is defined as,
μ_f ( α) = | { t : f(t) ≥α}| < ∞ ⟹ f^*(1/2 μ_f(α)) = f^*(-1/2 μ_f(α)) = α
That is, the symmetric function, decreasing about the origin, whose level sets have the same measure as those of f, and which will be supported on the interval [-|supp f|/2, |supp f|/2].
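On a uniform grid, f^* can be approximated by sorting the sampled values of |f| and placing them symmetrically about the origin; a small illustrative sketch (ours), with no claim of accuracy beyond the grid resolution:

import numpy as np

def symmetric_rearrangement(samples):
    # place the sorted (decreasing) values of |f| at the grid points
    # ordered by increasing distance from the origin
    n = len(samples)
    order = np.argsort(np.abs(np.arange(n) - (n - 1) / 2.0))
    out = np.empty(n)
    out[order] = np.sort(np.abs(samples))[::-1]
    return out

t = np.linspace(-5, 5, 1001)
f = np.where((t > 1) & (t < 3), 1.0, 0.0)     # indicator of (1, 3)
f_star = symmetric_rearrangement(f)
# f_star is (up to the grid) the indicator of (-1, 1): same measure, centred at 0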
The use of symmetric rearrangements is motivated in their proof because it allows to use the following lemma, by Hardy, Littlewood and Pólya.
Let f,g and h be positive functions. Then,
∫_ℝ∫_ℝ f(x) g(y) h(x-y) dxdy≤∫_ℝ∫_ℝ f^*(x) g^*(y) h^*(x-y) dxdy
Their additional restriction WT ≤ 0.8 arises here, as they require that |sinc t|^* = sinc t. On the other hand, our restriction will arise from imposing only |sin t| = sin t in the domain, resulting in a less restrictive bound of WT ≤ 1.
Finally, also worth mentioning that when restricting to both and intervals, the concentration operator has been widely studied. In particular, the functions maximizing the concentration are called the Prolate Spheroidal Wave Functions, with their characterization described with detail in the work of Slepian, Pollak and Landau <cit.>, among others.
§ PREVIOUS LEMMAS
We first prove a useful inequality which takes advantage of the concavity/convexity of the sine function, together with the known inequalities by Jensen and Karamata.
Let n≥1 be an integer and L a fixed real number. Then, for all real numbers x_1, x_2, …, x_n such that x_1 + x_2 + … + x_n = L, the expression
| sin(x_1) + sin(x_2) + … + sin(x_n) |
achieves its maximum when #{y_1, y_2, …, y_n}≤ 2, where the y_i are real numbers in [0,2π) such that y_i ≡ x_i (mod 2π). In other words, all x_i leave the same remainder modulo 2π except maybe one.
The idea of the proof is taken from a similar result for concave-convex functions in (Cirtoaje, 2006) <cit.>.
Let y_i ∈ [0, 2π) be such that y_i ≡ x_i (mod 2π). It is clear that
sin(x_1) + … + sin(x_n) = sin(y_1) + … + sin(y_n)
Now, suppose (<ref>) achieves a maximum at point (y_1, …,y_n) and write without loss of generality y_1 ≤ y_2 ≤…≤ y_n. Suppose for the sake of contradiction that y_1 < y_n-1, and we distinguish two cases.
Case 1. If y_n-1≤π, using that sin(y) is a concave function in [0,π], we have by Jensen's inequality
sin(x_1) + sin(x_n-1) = sin(y_1) + sin(y_n-1)
< 2 sin((y_1 + y_n-1)/2)
= sin(y_1 + (y_n-1 - y_1)/2) + sin(y_n-1 - (y_n-1 - y_1)/2)
= sin(x_1 + (y_n-1 - y_1)/2) + sin(x_n-1 - (y_n-1 - y_1)/2)
with the inequality being strict since the variables are different, contradiction.
Case 2. If y_n-1 > π and y_n + y_n-1-π < 2π, using that sin(y) is a strictly convex function in (π, 2π), we have by Karamata's majorization inequality that
sin(x_n-1) + sin(x_n) = sin(y_n-1) + sin(y_n)
< sin(π) + sin(y_n + y_n-1 - π)
= sin(y_n-1 - (y_n-1 - π)) + sin(y_n + (y_n-1 - π) )
= sin(x_n-1 - (y_n-1 - π)) + sin(x_n + (y_n-1 - π) )
with the inequality being strict since y_n-1 > π, contradiction.
Case 3. If y_n-1 > π and y_n + y_n-1-π≥ 2π, using that sin(y) is a convex function in (π, 2π], we have by Karamata's majorization inequality that
sin(x_n-1) + sin(x_n) = sin(y_n-1) + sin(y_n)
< sin(y_n + y_n-1 - 2π) + sin(2π)
= sin(y_n-1 + y_n - 2π) + sin(y_n - y_n + 2π)
= sin(x_n-1 + y_n - 2π) + sin(x_n - y_n + 2π)
with the inequality being strict since y_n < 2π.
Altogether, we conclude that y_1 = y_n-1.
Now, we should study the maximum of
-( sin(x_1) + … + sin(x_n) )
but using that sin is an odd function, we might just apply the previous argument to variables -x_i, and a similar conclusion is reached.
In particular, the previous result serves to prove the main lemma required for the proof of the theorem.
Let r≥ 1 be a positive integer and L a fixed real value. Then, for all real variables A_1,A_2, …, B_r with ∑_p=1^r B_p - A_p = L it holds that
( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2≤ 4 sin^2 (L/2)
To begin, define for simplicity
h(A_1,A_2,…,B_r) ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2
The key observation is that the value of the expression h in (<ref>) is independent of a shift of the variables [This observation arises more naturally during the proof of <ref>, for the case where the subset is a finite disjoint union of intervals. There, the A_p,B_p will be basically the endpoints of these intervals, so it is not surprising that the expression presented above depends only on the sizes of the intervals and their relative position and is therefore invariant by a shift.]
h(A_1,A_2,…,B_r) = h(A_1+s,A_2+s,…,B_r+s) ∀ s ∈ℝ
This can be easily seen after the following manipulation
h(A_1,A_2,…,B_r) = ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2
= ∑_p,q[ sin A_p sin A_q + sin B_p sin B_q; +cos A_p cos A_q + cos B_p cos B_q ] - 2∑_p,q[ sin B_p sin A_q; + cos B_p cos A_q ]
= ∑_p,qcos(A_p-A_q) + cos(B_p-B_q) - 2cos(A_q-B_p)
In particular, this shifting will be of importance since it will allow us to restrict our attention when looking for the maximum of h to a smaller subset of variables satisfying an additional constraint.
Now, suppose that h achieves its maximum value under the given constraint ∑_p=1^r B_p - A_p = L at a point (A_1^*,A_2^*,…,B_r^*). Also, it is clear that for all points we have
∑_p=1^r cos B_p - cos A_p = - ( ∑_p=1^r cos (B_p+π) - cos (A_p+π) )
and since ∑_p=1^r cos (B_p+s) - cos (A_p+s) is a continuous function on s, there exists s^* ∈ [0,π) such that
∑_p=1^r cos (B_p^*+s^*) - cos (A_p^*+s^*) = 0
Therefore, for all points (A_1,A_2,…,B_r) where ∑_p=1^r B_p-A_p = L it holds
0 ≤ h(A_1,A_2,…,B_r) ≤ h(A_1^*, A_2^*, …, B_r^*)
= h(A_1^*+s^*, A_2^*+s^*, …, B_r^*+s^*)
= ( ∑_p=1^r sin (B_p^*+s^*) - sin (A_p^*+s^*) )^2 + 0
≤max_{( ∑_p=1^r sin (B_p) + sin (-A_p) )^2 }
where the set over which we are maximizing is
= {-A_1,-A_2,…,B_r | ∑_p=1^r B_p -A_p = L
}
However, by <ref> it is immediate that the last expression in (<ref>) is maximized when all variables B_p and -A_q are equal modulo 2π except maybe one, say without loss of generality that it is B_r. Thus, we can write
max_{(∑_p=1^r sin (B_p) + sin (-A_p) )^2 } = max_'{(∑_p=1^r sin (B_p) + sin (-A_p) )^2 }
where the new set is
' = { -A_1,-A_2,…,B_r | ∑_p=1^r B_p -A_p = L, -A_1 ≡ -A_2 ≡…≡ B_r-1 (mod 2π) }⊆
But even more, combining (<ref>) and (<ref>) we have
max_ h(A_1,A_2,…,B_r) ≤max_'{(∑_p=1^r sin (B_p) + sin (-A_p) )^2 }
≤max_'{(∑_p=1^r sin B_p - sin A_p )^2 + (∑_p=1^r cos B_p - cos A_p )^2}
= max_' h(A_1,A_2,…,B_r)
where the inequality comes from adding a non-negative term to the expression.
Now, using again that h is invariant by a shift of the variables, we can assume that A_1= 0. Then, taking into account 0 = -A_1 ≡ -A_2≡…≡ B_r-1 we have sin A_1 = sin A_2 = … = sin B_r-1 = 0, and also cos B_p = cos (-A_p) =cos A_p for all indices p<r. Therefore, continuing (<ref>) we deduce
max_ h(A_1,A_2,…,B_r) ≤max_' ∩{A_1 = 0}{( sin B_r - sin A_r)^2 + ( cos B_r - cos A_r )^2}
=4 max_' ∩{A_1 = 0}{sin^2 ( (B_r-A_r)/2 ) }
Finally, using again the equivalences 0 =-A_1 ≡ -A_2≡…≡ B_r-1 (mod 2π) and the sum constraint we have
L = ∑_p= 1^r B_p-A_p = B_r-A_r - 2π k, and thus (B_r-A_r)/2 = π k + L/2
for some integer k, and hence
max_'∩{A_1=0}{sin^2 ( (B_r-A_r)/2 ) } = sin^2 ( π k + L/2 ) =sin^2 ( L/2 )
Therefore, combining (<ref>) and (<ref>) the conclusion is immediate
max_( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2≤ 4 sin^2 (L/2)
Notice that equality can be achieved for example at point (A,A,…,A,A+L), where h evaluates to
( sin(A+ L) - sin A )^2 +( cos(A+ L) - cos A )^2 = 4sin^2 (L /2 )
In particular, these points of equality will correspond later, during the proof of the theorem, with the cases where is a single interval.
§ PROOF OF THE THEOREM
Now, we are ready to introduce the proof of the theorem, which will be divided in two parts. Firstly, showing that the statement for the case where the support of the function is = ⊔_p=1^r (a_p,b_p) a disjoint union of intervals. Second, we will see that this implies the result in the case of a general measurable subset, by using the regularity of the Lebesgue measure.
Let W and T be real numbers such that WT ≤ 1. Then, for a disjoint union of intervals = ⊔_p=1^r (a_p,b_p) with total size || = ∑_p=1^r b_p-a_p = T, and functions f whose Fourier transform has support f̂ = [-W/2,W/2], the following inequality is true
∫_ |f(t)|^2 dt ≤∫_-T/2^T/2| g(t) |^2 dt
where g is the function given by the inverse Fourier transform of |f̂|. In particular, by denoting 𝕀 = [-T/2,T/2], it holds c_(f) ≤ c_𝕀 (g).
Consider = ⊔_p=1^r (a_p,b_p) a finite disjoint union of intervals with total size T. Now, by a straight-forward computation, we have that for f with f̂ = [-W/2,W/2] it holds
∫_ |f|^2 dt = ∑_p=1^r ∫_a_p^b_p|f(t)|^2 dt
= ∑_p=1^r ∫_a_p^b_p∫_-W/2^W/2∫_-W/2^W/2f̂(ω) f̂(η) e^2π i (η-ω) t dη dω dt
= ∫_-W/2^W/2∫_-W/2^W/2f̂(ω) f̂(η)/η-ω∑_p=1^r 1/2π i( e^2π i (η-ω) b_p - e^2π i (η - ω) a_p) dη dω
= 1/2π∫_-W/2^W/2∫_-W/2^W/2f̂(ω) f̂ (η)/|η - ω|∑_p=1^r [ [ sin(2π |η-ω| b_p) - sin(2π |η-ω| a_p) ]; -i [ cos(2π |η-ω| b_p) - cos(2π |η-ω| a_p) ] ]
dη dω
≤1/2π∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)|/|η - ω|[
( ∑_p=1^r sin B_p - sin A_p )^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2 dη dω
where we used the triangular inequality and trigonometric identities. Also, we denote B_p = 2π |η-ω| b_p and A_p = 2π |η - ω| a_p to ease the notation, and we have the understanding that whenever η = ω, the terms evaluate to the fix value
1/|η - ω|[
( ∑_p=1^r sin B_p - sin A_p )^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2 =
= [ ( ∑_p=1^r 2π b_p - 2π a_p )^2 + 0 ]^1/2 = 2π T
which only depends on the size of .
Now, by the previous observation when η = ω, and using <ref> for the case when η≠ω, we have that under the constraint given by the total size of the intervals ∑_p=1^r B_p- A_p = 2π |η-ω|∑_p=1^r b_p-a_p = 2π |η-ω| T, it holds that
1/|η - ω| [
( ∑_p=1^r sin B_p - sin A_p )^2 + ( ∑_p=1^r cos B_p - cos A_p )^2
]^1/2≤2/|η-ω||sin( π |η-ω| T)|
So substituting in (<ref>) we have
∫_ |f|^2 dw ≤1/π∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)|/|η - ω||sin( π |η-ω| T )| dη dω
Now, by hypothesis WT ≤ 1, so it holds that
π |η - ω| T ≤π WT ≤π
Thus, we can ignore the absolute value around sin, and we have
∫_ |f|^2 dt ≤1/π∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)|/|η - ω|sin( π |η-ω| T ) dη dω
= ∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)| ∫_-T/2^T/2 e^2π i (η-ω) t dt dη dω
= ∫_-T/2^T/2∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω)| e^ -2π i ω t|f̂(η)| e^-2π i η t dη dω dt
= ∫_-T/2^T/2 |g|^2 dt
where we define g to be the function given by the inverse Fourier transform of |f̂|, and f̂ is itself the Fourier transform of f.
It is clear that g = f̂ = [-W/2, W/2], and also that ||g|| = ||f|| since by Plancherel
∫_ℝ |f|^2 dt = ∫_ℝ |f̂|^2 dw = ∫_-W/2^W/2 |f̂|^2 dw =∫_-W/2^W/2 |g|^2 dw = ∫_ℝ|g|^2 dw = ∫_ℝ |g|^2 dt
so we get the desired conclusion and c_(f) ≤ c_𝕀 (g), where 𝕀=[-T/2,T/2].
Now, we extend this result to any measurable subset.
Let W and T be real numbers such that WT ≤ 1. Then, for all measurable subsets of the real numbers with size || = T, and functions f whose Fourier transform has support f̂ = [-W/2,W/2],the following inequality is true
∫_ |f(t)|^2 dt ≤∫_-T/2^T/2| g(t) |^2 dt
where g is the function given by the inverse Fourier transform of |f̂|. In particular, by denoting 𝕀 = [-T/2,T/2], it holds c_(f) ≤ c_𝕀 (g).
Let be a measurable subset of the real numbers with measure || = T, and f a function whose Fourier transform has limited support f̂ = [-W/2,W/2] and for which the statement does not hold. Multiplying f by a scalar will not affect the concentration, so assume without loss of generality that f has norm 1.
Therefore, we suppose, for the sake of contradiction, that
c_ (f) = ∫_ |f(t)|^2 dt > ∫_𝕀 |g(t)|^2 dt = c_𝕀 (g)
where g is the function given by the inverse Fourier transform of |f̂|.
Next, it is well-known (see, for example, Theorem 3.4 in (Stein and Shakarchi, 2009) <cit.>) that for a measurable set with finite measure, for every ε > 0 there exist finitely many disjoint finite intervals 𝕁_1, …, 𝕁_r ⊆ℝ such that |Δ⋃_k=1^r 𝕁_k | < ε. Here, Δ refers to the symmetric difference of two sets, A Δ B (A ∖ B) ∪ (B ∖ A).
Now, construct a sequence of measurable subsets which can be expressed as a finite disjoint union of finite intervals {_n }_n≥ 1, and such that
|_n Δ| ≤1/n
Then, if we define the interval 𝕀_n = (-| _n|/2, | _n|/2) we know by <ref> that
∫__n |f(t))|^2 dt ≤∫_𝕀_n |g(t)|^2 dt
Also, it is clear that
∫__n |f(t)|^2 dt n⟶∫_ |f(t)|^2 dt
∫_𝕀_n |g(t)|^2 dt n⟶∫_𝕀 |g(t)|^2 dt
Therefore, by Fatou's lemma and the inequality in (<ref>) we have
∫_ |f(t)|^2 dt ≤lim inf∫__n |f(t)|^2 dt ≤lim inf∫_𝕀_n |g(t)|^2 dt = ∫_𝕀 |g(t)|^2 dt
This clearly contradicts our assumption (<ref>), as we wanted to show.
Lastly, again since the norm of functions f and g is the same, we conclude c_(f) ≤ c_𝕀(g).
§ DISCRETE VERSION, IMPROVING MONTGOMERY'S RESULT
Finally, we introduce a discrete version of the problem, which is solved using the same inequalities, and which also requires a similar additional bound.
Let us consider polynomials of degree n≥ 1 with complex coefficients, that is, P ∈ℂ_n[z]. Moreover, we will restrict our attention to measurable subsets of the unit circle 𝕋, which we will represent by their arguments as a complex number.
To follow the notation of the previous work, for a measurable Ω⊆𝕋 and P ∈ℂ_n[z], let us denote
c_Ω (P) = ∫_Ω |P(z)|^2 (z) / ∫_𝕋 |P(z)|^2 (z)
Where (z) is the Lebesgue measure on the unit circle, normalized to 2π. In particular, the measure of a measurable subset Ω⊆𝕋, which we may write as Ω = {e^iθ : θ∈Θ}, is given by
|Ω| = ∫_Ω(z) = ∫_Θ dθ
Also, we will consider the norm of the polynomial P(z) = a_0 + a_1z + … + a_n z^n to be
P = ∫_𝕋 |P(z)|^2 (z) = ∫_0^2π |P(e^iθ)|^2 dθ =2π( |a_0|^2 + |a_1|^2 + … + |a_n|^2 )
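With these conventions, c_Ω(P) is straightforward to evaluate numerically. The sketch below (illustrative only, not part of the paper) computes the concentration of a polynomial on the arc {e^iθ : |θ|<δ} by quadrature for the numerator and by Parseval for the denominator:

import numpy as np

def concentration(coeffs, delta, m=100001):
    theta = np.linspace(-delta, delta, m)
    P = np.polyval(list(coeffs)[::-1], np.exp(1j * theta))  # coeffs[k] multiplies z^k
    num = np.trapz(np.abs(P)**2, theta)                     # integral over the arc
    den = 2.0 * np.pi * sum(abs(a)**2 for a in coeffs)      # integral over the circle (Parseval)
    return num / den

print(concentration([1.0, 0.5, 0.25], delta=np.pi / 8))     # example coefficients, arbitrary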
Then, the analogous to Conjecture 1 in this case is the following.
Fix n≥ 1 an integer and δ > 0. Then, among all measurable subsets Ω of the complex unit circle with measure |Ω| = 2δ, the maximum of the concentration operator is attained on an interval 𝕀 of this same length. That is,
sup_P ∈ℂ_n[z], |Ω| = 2δ c_Ω (P) = sup_P ∈ℂ_n[z] c_𝕀 (P)
And using an analogous approach, we will be able to prove a positive result once the additional hypothesis that nδ≤π is added, which will serve a similar purpose as WT≤ 1 required in the continuous version.
In some sense, we can relate the size of the subset |Ω| = 2δ with T the size of the support in the time domain of f, and the degree n of P with W the size of the frequency domain of f̂.
Lastly, notice that the position of the interval in the unit circle is not relevant, since for a given P(z) = a_0 + a_1 z + a_2 z^2 + … + a_n z^n and 𝕀 = (-δ,δ), we can take Q(z) = a_0 + a_1 e^-iθ z + a_2 e^-i 2θ z^2 + … + a_n e^-i n θ z^n and 𝕁 = (θ-δ, θ+δ) and it holds that c_𝕀 (P) = c_𝕁 (Q). Therefore, we will be considering 𝕀 = (-δ, δ).
In particular, the positive result we prove is the following.
Let Ω be a measurable subset of the complex unit circle, and let P(z) = a_0 + a_1z + … + a_n z^n be any polynomial of degree n≥ 1.
Denote |Ω| = 2δ and suppose it holds that n δ≤π. Then, taking the interval 𝕀 = (-δ,δ) and the polynomial Q(z) = |a_0| + |a_1|z + … + |a_n|z^n, the following inequality is true
∫_Ω |P(z)|^2 (z) ≤∫_𝕀 |Q(z)|^2 (z)
In particular, c_Ω(P) ≤ c_𝕀 (Q).
The proof is analogous to the original continuous problem. Again, we must first work out the case where the subset is a finite disjoint union of intervals, and later extend it to any measurable subset of the unit circle using the regularity of the Lebesgue measure.
Therefore, we will only include here the relevant details of the first part. First, by a straightforward computation we have
∫_Ω |P(z)|^2 (z) = ∫__p=1^r (α_p, β_p) |P(e^iθ)|^2 dθ
= 2 ∑_l,m = 0^n a_l a_m ∑_p=1^r e^i (l-m) β_p+α_p/2 sin( (l-m) β_p - α_p/2)/l-m
= 2 ∑_l,m = 0^n a_l a_m/l-m ∑_p=1^r sin( (l-m) β_p - α_p/2)
·[ cos( (l-m) β_p+α_p/2) + i sin( (l-m) β_p+α_p/2) ]
= ∑_l,m = 0^n a_l a_m/|l-m| [ ( ∑_p=1^r sin( |l-m| β_p) - sin( |l-m| α_p ))
- i ( ∑_p=1^r cos( |l-m| β_p) - cos( |l-m| α_p ) ) ]
≤∑_l,m = 0^n |a_l| |a_m|/|l-m| [ ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2
Now, by <ref> and using the fact that 0 ≤ |l-m|δ≤ n δ≤π by hypothesis, it holds
∫_Ω |P(z)|^2 (z)
≤ 2 ∑_l,m = 0^n |a_l| |a_m|/|l-m|| sin( |l-m| δ) |
= 2 ∑_l,m = 0^n |a_l| |a_m| /|l-m| sin( |l-m| δ)
= ∫_𝕀 |Q(z)|^2 (z)
where 𝕀 = (-δ, δ) is a single interval of size 2δ = |Ω|, and Q(z) = |a_0| + |a_1|z + … + |a_n|z^n is the polynomial whose coefficients are the norms of the coefficients of P.
Notice that the norm of the new polynomial is the same as the norm of the original one,
∫_0^2π |P(e^iθ)|^2 dθ = 2π( |a_0|^2 + |a_1|^2 + … + |a_n|^2 ) = ∫_0^2π |Q(e^iθ)|^2 dθ
and therefore c_Ω(P) ≤ c_𝕀(Q).
Now, we will use this to also improve an inequality result by Montgomery, when adding the hypothesis nδ≤π. The result in question appears in <cit.>, where he presents a similar inequality to what we have obtained but with an extra factor, and which only applies (in our context) to symmetric polynomials of even degree. Our improvement is then reducing this factor from 20 to 1, which is actually the best possible, and extending it to any polynomial when the condition nδ≤π is added.
We might directly state his result in our context, by rather taking 𝕋 = ℝ/ 2πℤ and the functions φ_k = cos k x. Nonetheless, it should be mentioned that Montgomery's result applies in a more general setup for sets of functions {φ_k} which are uniformly bounded and satisfy a Bessels' type inequality.
[Thm. 1, <cit.>]
Let f(x) = ∑_k=0^∞ a_k cos k x, and define
f^**(x) = ∑_k=0^∞ a_k^* cos k x
where the a_k^* are the numbers |a_k|, permuted so that {a_k^*}_k=0^∞ is a decreasing sequence. Then for any measurable set Ω⊆𝕋, with measure |Ω| = 2δ we have
∫_Ω |f|^2 ≤ 20 ∫_-δ^δ |f^**|^2
Where this f^** rearrangement can be understood as a discrete version of the symmetric rearrangement f^* presented in the introduction.
Notice, however, that for f(x) = ∑_k=0^n a_k cos k x it holds that
f(x) = a_0 + ∑_k=-n
k≠ 0^na_|k|/2 e^i k x = e^-i n x( a_0 e^i n x + ∑_k=0
k≠ n^2na_|k-n|/2 e^i k x)
so it is natural to consider the polynomial
P(z) = 1/2(a_n + a_n-1 z^1 + … + a_1 z^n-1 + 2a_0 z^n + a_1 z^n+1 + … + a_n-1z^2n-1 + a_n z^2n)
and we have that
|f(x)|^2 = | a_0 e^i n x + ∑_k=0
k≠ n^2na_|k-n|/2 e^i k x|^2 = |P(e^ix)|^2
Therefore, Montgomery's result states in our context the following.
Let P(z) = b_n + b_n-1 z + … + 2b_0 z^n + … + b_n-1 z^2n-1 + b_n z^2n be a symmetric polynomial of even degree 2n ≥ 2, and define
P^*(z) = b_n^* + b_n-1^* z + … + 2b_0^* z^n + … + b_n-1^* z^2n-1 + b_n^* z^2n
where the b_k^* are the numbers |b_k|, permuted so that {b_k^*}_k=0^n is a decreasing sequence. Then for any measurable set Ω⊆𝕋, with measure |Ω| = 2δ we have
∫_Ω |P(z)|^2 (z) ≤ 20 ∫_-δ^δ |P^*(z)|^2 (z)
To the best of our knowledge, it is not known whether the result still holds when reducing the factor 20 to 1 without having any additional hypothesis. For example, this has been proven to be true when the integral is over the whole unit circle, in (Gabriel, 1932) <cit.>.
[Thm. 4, <cit.>]
Given an integer k≥ 1, and the functions
A(θ) = ∑_r=-R^R a_r e^i rθ, A^*(θ) = ∑_r=-R^R a_r^+ e^i rθ
where the a_r^+ are the numbers |a_r| ordered such that a_0^+ ≥ a_-1^+ ≥ a_1^+ ≥ a_-2^+ ≥ a_2^+ ≥…, then
∫_0^2π |A(θ)|^2kdθ≤∫_0^2π |A^*(θ)|^2kdθ
It should be mentioned that this is actually an improvement on the same result appearing in (Hardy and Littlewood, 1948) <cit.>, which had some additional symmetry hypotheses on the coefficients.
Back to our work, the statement that we will prove is then
Let P be any polynomial of degree n≥ 1. Let Ω⊆𝕋 be a measurable set, with measure | Ω| = 2δ. Suppose it holds that n δ≤π, then
∫_Ω |P(z)|^2 (z) ≤∫_-δ^δ |P^*(z)|^2 (z)
In particular, since they have the same norm, we have c_Ω (P) ≤ c_𝕀 (P^*) where 𝕀 = (-δ, δ).
Here, the rearrangement of the coefficients in P^* will be described in the conclusion of the next <ref>. For example, the rearrangement can be taken as
a_⌈ n/2 ⌉^* ≥ a_⌊ n/2 ⌋^* ≥ a_⌈ n/2 ⌉ + 1^* ≥ a_⌊ n/2 ⌋ -1^* ≥… for n odd
a_n/2^* ≥ a_n/2 +1^* ≥ a_n/2 -1^* ≥… for n even
Informally: the largest coefficient is the central one, then the one to the right of the central one, then the one to the left of the central one, then the second to the right of the central one... and so on.
As for the proof of the theorem, we have already done most of the work in <ref>, and we only have left proving that rearranging the (real positive) coefficients of a polynomial increases the integral of its norm squared over the symmetric interval. The main lemma, which will give the explicit rearrangement needed in our case, is the following[It should be noted that this lemma is the discrete version of <ref>, which is rather the natural evolution for continuous functions. Therefore, the final proof for our improvement on the result by Montgomery uses a similar step as the proof given by Donoho and Stark in their continuous result. ].
Consider the form
S(x,y) = ∑_l=0^n ∑_m=0^n s_l-m x_l y_m
Suppose that the coefficients being given satisfy s_0≥ s_1 ≥…≥ 0 and also s_ν=s_-ν, and the variables satisfy x_l≥ 0, y_m ≥ 0, being given in every respect except arrangement. Then among the arrangements for which S assumes its maximum value there is one in which
* x_μ≤ x_μ' if |μ'-n/2| < |μ-n/2|
* no two of x_μ-x_μ', where μ < μ', |μ'-n/2| = |μ-n/2|, have different signs.
and analogous conditions for variables y_ν.
We are now ready to give the proof of our result.
As mentioned, we only have left to prove that for P(z) = a_0 + a_1z + … + a_n z^n with positive real coefficients, it holds that
∫_-δ^δ |P(z)|^2 (z) ≤∫_-δ^δ |P^*(z)|^2 (z)
where P^*(z) = a_0^* + a_1^* z + … + a_n^* z^n is the polynomial with coefficients rearranged as stated for variables x_l,y_m in <ref>.
Now, by a simple computation
∫_-δ^δ |P(z)|^2 (z) = 2 ∑_l,m = 0^n a_l a_m sin( (l-m) δ)/(l-m)
We might now take x_l = y_l = a_l, the (positive) coefficients of the polynomial, and the variables s_ν = 2 sin(νδ)/ν, with the understanding that s_0 = 2δ.
The symmetry of the variables s_ν is clear, and the hypothesis nδ≤π guarantees that they are positive and decreasing, so that we might apply <ref>. Hence, we deduce that
∫_-δ^δ |P(z)|^2 (z) ≤ 2∑_l,m = 0^n a_l^* a_m^* sin( (l-m) δ)/(l-m) = ∫_-δ^δ |P^*(z)|^2 (z)
Combining this inequality with <ref> completes the proof.
http://arxiv.org/abs/2307.04085v1 | 20230709023446 | Vector Commitments with Efficient Updates | [
"Ertem Nusret Tas",
"Dan Boneh"
] | cs.CR | [
"cs.CR"
] |
Dynamic vector commitments that enable local updates of opening proofs
have applications ranging from verifiable databases with membership changes to stateless clients on blockchains.
In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state.
When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment.
We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof.
Existing vector commitment schemes require that either the information size or the runtime scale linearly in the number k of updated state elements.
We construct a vector commitment scheme that asymptotically achieves both length and runtime that is sublinear in k, namely k^ν and k^1-ν for any ν∈ (0,1).
We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme.
While in practice, the construction is not yet competitive with Verkle commitments,
our approach may point the way towards more performant vector commitments.
§ INTRODUCTION
A Vector Commitment (VC) scheme <cit.> enables a committer to succinctly commit to a vector of elements.
Later, the committer can generate an opening proof to prove that
a particular position in the committed vector is equal to a certain value.
VCs have found many applications in databases and blockchains <cit.> as they enable
a storage system to only store a commitment to the vector instead of the entire vector.
The data itself can be stored elsewhere along with opening proofs.
In a multiuser system, every user might store only one position of the vector along with the opening proof for that position.
Dynamic VCs <cit.> are vector commitments that support updates to the vector.
Suppose the committed vector is of length N and some k < N positions in the vector are updated,
so that a new vector commitment is published.
Then, every user in the system will need to update their local opening proof to match the updated commitment,
and this is done with the help of some global update information U that is broadcast to all users.
This information is typically generated and published by a manager who maintains the entire vector.
Applications of dynamic VCs include verifiable databases, zero-knowledge sets with frequent updates <cit.> and stateless clients for blockchains <cit.>.
The challenge is to design a VC scheme that minimizes the size of the update information U
as well as the computation work by each user to update their local opening proof.
For example, consider stateless clients on a blockchain as an important application for dynamic VCs.
The state of the chain can be represented as a vector of length N, where position i corresponds to the state of account number i.
Every user will locally maintain its own state (corresponding to some position in the vector)
along with an opening proof that enables the user to convince a third party
as to its current state.
Whenever a new block is published, the state of the chain changes.
In particular, suppose k out of the N positions in the vector need to be updated.
The block proposer will publish the update information U along with the new block,
and every user will update their opening proof to match the new committed state of the chain.
Thus, users can ensure that their opening proofs are up to date with respect to the latest committed state of the chain.
We stress that in this application, the data being updated, namely the updated positions and diffs, is published as part of the block.
The update information U only contains additional information that is needed to update the opening proofs.
When we refer to the size of U, we refer to its size, excluding the updated data (i.e., excluding the updated positions and diffs).
In this paper, we investigate the trade-off between the length |U| of the update information and the time complexity of proof updates.
Dynamic VCs can be grouped into two categories in terms of these parameters (Table <ref>).
Tree-based VCs <cit.> enable users to update their proofs in time O(N).
Each opening proof typically consists of (N) inner nodes,
and the update information U contains the changes in the inner nodes affected by the message updates.
Each user calculates its new opening proof by downloading the relevant inner nodes published as part of U.
When k positions are updated, a total of O(k log(N)) inner nodes in the tree are affected in the worst case.
Thus, when each inner node has length Θ(λ), proportional to the security parameter λ,
the update information consists of O(k log(N)λ) bits.
In contrast, algebraic VCs <cit.> enable users to update their opening proofs with only
knowledge of the updated data.
They do not require any additional update information U to be published beyond the indices and the `diffs' of the updated data.
Thus, the length of the update information needed to update the opening proofs is O(1).
However, algebraic VCs typically require each user to read all of the changed messages and incorporate the effect of these changes on their proofs, resulting in Θ(k) work per proof update.
To summarize, while tree-based VCs support efficient calculation of the new opening proofs by publishing a large amount of update information, linear in k,
algebraic VCs do not require any additional update information beyond the updated data, but suffer from a large runtime for proof updates, linear in k.
We formalize the dichotomy of VCs in Section <ref>.
§.§ Our Results
We propose a family of VCs that can support sublinear update, where both the length |U| of the update information and the complexity of proof updates are sublinear in k.
More specifically, our VCs can attain |U| = Θ(k^νλ), ν∈ (0,1), with a proof update complexity of Θ(k^1-ν) operations.
Our candidate construction with sublinear update is a homomorphic Merkle tree, first developed by <cit.>, where each inner node can be expressed as a sum of the partial digests of the messages underneath (Section <ref>).
The algebraic structure of these trees enable each user to calculate the effect of a message update on any inner node without reading other inner nodes or messages.
We identify homomorphic Merkle tree constructions based on lattices, from the literature <cit.>.
In Section <ref>, we provide the update algorithms (Alg. <ref>) for homomorphic Merkle trees, parameterized by ν∈ (0,1).
Our algorithm identifies a special subset of size Θ(k^ν) of the inner nodes affected by the message updates, and publish their new values as U; so that the users need not calculate these values.
These inner nodes are selected carefully to ensure that any inner node outside of U is affected by at most Θ(k^1-ν) updated messages.
Thus, to modify its opening proof, each user has to calculate the partial digests of at most Θ(k^1-ν) updated messages per inner node within its proof (that consists of Θ(log(N)) inner nodes).
Moreover, to calculate these partial digests, the user only needs the `diffs' of the updated messages.
This brings the asymptotic complexity of proof updates to Θ(k^1-ν) operations, while achieving an update information size of Θ(k^νλ) as opposed to Θ(kλ) on Merkle trees using SHA256.
In Section <ref>, we prove an information theoretic lower bound on the size of the update information given an upper bound on the runtime complexity of proof updates.
The bound implies the asymptotic optimality of our scheme with sublinear update.
Its proof is based on the observation that if the runtime complexity is bounded by O(k^1-ν), a user that wants to update its proof cannot read beyond O(k^1-ν) updated messages.
Then, to calculate the effect of the remaining k-O(k^1-ν) messages on its opening proof, the user has to download parts of the structured update information U.
Finally, to obtain the lower bound on |U|, we use Shannon entropy and lower bound the number of bits, namely O(k^νλ), required to capture the total information that will be downloaded by the users; while maintaining the security of the VC with parameter λ.
§.§ Applications
We identify three main applications for VCs with sublinear update.
§.§.§ Stateless clients for PoS Ethereum
Ethereum is the largest decentralized general purpose computation platform by market cap.
Ethereum state (, user accounts) is currently stored in the form of a Merkle tree <cit.> and grows approximately by half every year <cit.>.
Stateless clients <cit.> were proposed to mitigate the problem of state bloat and prevent the state storage and maintenance from becoming a bottleneck for decentralization.
Stateless clients maintain an opening proof to their account balances within the Ethereum state, thus can effortlessly prove the inclusion of their accounts within the latest state.
This enables the other Ethereum clients to verify the transactions that come with opening proofs without having to download the full state and check the validity of the claimed account balances.
Since block verification now requires downloading the proofs for the relevant state elements, Verkle trees <cit.> were proposed as a replacement for Merkle trees due to their short proof size.
Each new Ethereum block contains transactions that update the state elements and their opening proofs.
Archival nodes and block producers still maintain the full state so that they can inform the stateless clients about their new opening proofs.
For this purpose, block producers must broadcast enough information to the clients over the peer-to-peer gossip network of Ethereum.
As minimizing the proof size was paramount to decentralizing verification for blocks, minimizing the update information size becomes necessary for decentralizing the role of the block producer who has to disseminate this information.
However, reducing the length of the update information must not compromise the low overhead of stateless clients by requiring larger number of operations per proof update.
Therefore, the ideal VC scheme for stateless clients must strike a delicate balance between the size of the update information and the runtime complexity of proof updates.
In Section <ref>, we provide the update algorithms (Algs. <ref> and <ref>) for Verkle trees.
We observe that Verkle trees do not support sublinear update, and fall under the same category as tree-based VCs with update information length Θ(k λ).
Despite this fact, Verkle trees are highly practical in terms of updates.
In Section <ref>, we estimate that the update information size after a typical Ethereum block does not exceed |U| ≈ 100 kBytes (compared to the typical block size of <125 kBytes).
Moreover, each Verkle proof can be updated within approximately less than a second on commodity hardware.
In contrast, even the most efficient homomorphic Merkle tree construction <cit.> requires an update information size of 110.88 MBytes and an update time of 32.6 seconds when the trade-off parameter ν is 1/2, despite its asymptotic optimality (Section <ref>).
The large update information size is due to the lattice-based construction of these VCs.
Designing dynamic VCs that are both asymptotically optimal and practically efficient remains an open problem.
§.§.§ Databases with frequent membership changes
VCs with sublinear update can support databases with frequent membership changes.
When a user first registers, a message is updated to record the membership of the user.
The user receives this record and its opening proof, using which it can later anonymously prove its membership.
When the user leaves the system, the message is once again updated to delete the record.
In all these steps, membership changes result in updates to the opening proofs of other members.
When these changes are frequent, it becomes infeasible to distribute new proofs after each change.
VCs with sublinear update offer an alternative and efficient way to update the opening proofs of the users in the event of such changes.
§.§ Related Work
There are many VC constructions, each with different guarantees regarding the proof, commitment and public parameter sizes, verification time, updatability and support for subvector openings <cit.> (cf <cit.> for an SoK of VCs).
First formalized by <cit.>, almost all VCs allow some degree of updatability.
Whereas <cit.> enable updating the commitment and the opening proofs with only the knowledge of the old and the new messages, most VCs require some structured update information beyond the messages when the users do not have access to the internal data structures.
Among the lattice-based accumulators, vector commitments and functional commitments <cit.>, constructions amenable to sublinear update are presented in <cit.>.
Homomorphic Merkle trees were formalized and instantiated by <cit.> in the context of streaming authenticated data structures and parallel online memory checking.
The construction presented in <cit.> offers an alternative VC with sublinear update as it is not a Merkle tree, yet has the property that each inner node can be expressed as a sum of the partial digests of individual messages.
For dynamic accumulators that support additions, deletions and membership proofs, Camacho and Hevia proved that after k messages are deleted, Ω(k) bits of data must be published to update the proofs of the messages in the initial accumulated set <cit.>.
Their lower bound is information-theoretic and follows from a compression argument (Appendix <ref>).
Christ and Bonneau subsequently used a similar method to prove a lower bound on the global state size of a revocable proof system abstraction <cit.>.
As revocable proof systems can be implemented by dynamic accumulators and vector commitments, their lower bound generalizes to these primitives, i.e., after k messages are updated in a dynamic VC, at least Ω(k) bits of data must be published to update the opening proofs (Appendix <ref> for the proof).
They conclude that a stateless commitment scheme must either have a global state with linear size in the number of accounts, or require a near-linear rate of local proof updates.
In our work, we already assume a linear rate of local proof updates, i.e., after every Ethereum block or k messages in our parameterization, and that the message updates are publicized by the blockchain.
We instead focus on the trade-off between the global structured update information size (beyond the published messages) and the runtime complexity of proof updates.
§ PRELIMINARIES
§.§ Notation
We denote the security parameter by λ.
An event is said to happen with negligible probability, if its probability, as a function of λ, is o(1/λ^d) for all d>0.
An event happens with overwhelming probability if it happens except with negligible probability.
We denote the set {0,1,2,…,N-1} by [N].
When y = O(h(x) · polylog(x)), we use the shorthand y = Õ(h(x)) (similarly for Θ̃(.) and Ω̃(.)).
The function H(.) : ℳ → {0,1}^λ represents a collision-resistant hash function.
We denote the binary decomposition of an integer x by bin(x), and for c>2, its base-c decomposition by bin_c(x).
A vector of N elements (n_0, …, n_N-1) is shown as (n_i)_i.
The notation 𝐱[i:j] denotes the substring starting at the i^th index and ending at the j^th index within the sequence 𝐱.
In the subsequent sections, k will be used to denote the number of updated messages.
For a prime p, let 𝔽_p denote a finite field of size p.
We use 𝔾 to denote a cyclic group of prime order p with generator g.
The Lagrange basis polynomial for a given x ∈𝔽_p is denoted as L_x(X):
L_x(X) = ∏_{i ∈ 𝔽_p, i ≠ x} (X − i)/(x − i).
We will use |G| and |H| to denote the maximum size of the bit representation of a single group element and a single hash value respectively.
We will use T_G and T_f to denote the time complexity of a single group operation and a single function evaluation for the hash functions in Section <ref>.
§.§ Vector Commitments
A vector commitment (VC) represents a sequence of messages such that each message can be proven to be the one at its index via an opening proof.
A dynamic vector commitment allows updating the commitment and the opening proofs with the help of an update information when the committed messages are changed.
Dynamic (updateable) vector commitments can be described by the following algorithms:
KeyGen(1^λ, N) → pp Given the security parameter λ and the size N = poly(λ) of the committed vector, the key generation algorithm outputs public parameters pp, which implicitly define the message space ℳ.
Commit_pp(m_0, …, m_N-1) → (C, aux) Given a sequence of N messages in ℳ and the public parameters pp, the commitment algorithm outputs a commitment string C and the auxiliary data aux required to produce the opening proofs for the messages.
Here, aux contains enough information about the current state of the VC's data structure (e.g., the current list of committed messages) to help generate the opening proofs.
Open_pp(m, i, aux) → π_i The opening algorithm is run by the committer to produce a proof π_i that m is the i^th committed message.
Verify_pp(C, m, i, π_i) →{0,1} The verification algorithm accepts (i.e., outputs 1) or rejects a proof.
The security definition will require that π_i is accepted only if C is a commitment to some (m_0, …, m_N-1) such that m = m_i.
Update_pp(C, (i, m_i)_i ∈ [N], (i, m'_i)_i ∈ [N], aux) → (C', U, aux') The algorithm is run by the committer to update the commitment C when the messages (m_i_j)_j ∈ [k] at indices (i_j)_j ∈ [k] are changed to (m'_i_j)_j ∈ [k]. The other messages in the vector are unchanged. It takes as input the old and the new messages, their indices and the data variable aux. It outputs a new commitment C', update information U and the new data variable aux'.
ProofUpdate_pp(C, p((i, m_i)_i ∈ [N], (i, m'_i)_i ∈ [N]), π_j , m', i, U) →π_j'
The proof update algorithm can be run by any user who holds a proof π_j for some message at index j and a (possibly) new message m' at that index.
It allows the user to compute an updated proof π'_j (and the updated commitment C') such that π'_j is valid with respect to C', which contains m'_i, i ∈ [N], as the new messages at the indices i ∈ [N] (and m' as the new message at index i).
Here, p(.) specifies what portion of the old and the new messages is sufficient to update the opening proof.
For instance, the proof update algorithm often does not need the old and the new messages in the open; but can carry out the proof update using only their differences.
In this case, p((i, m_i)_i ∈ [N], (i, m'_i)_i ∈ [N]) = (i, m'_i - m_i)_i ∈ [N].
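To make the interface concrete, the following is a minimal Python sketch of the six algorithms above; the class and method names, and the choice of passing update data as index-to-diff dictionaries, are our own illustration rather than a normative API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple

class DynamicVC(ABC):
    """Abstract interface for a dynamic (updateable) vector commitment.

    Mirrors KeyGen / Commit / Open / Verify / Update / ProofUpdate from the text;
    all types are left abstract and must be fixed by a concrete scheme.
    """

    @abstractmethod
    def keygen(self, security_param: int, n: int) -> Any:
        """Output public parameters pp for vectors of length n."""

    @abstractmethod
    def commit(self, pp: Any, messages: List[Any]) -> Tuple[Any, Any]:
        """Return (C, aux): the commitment and the auxiliary data for openings."""

    @abstractmethod
    def open(self, pp: Any, m: Any, i: int, aux: Any) -> Any:
        """Return an opening proof that position i holds message m."""

    @abstractmethod
    def verify(self, pp: Any, C: Any, m: Any, i: int, proof: Any) -> bool:
        """Accept iff the proof shows that C commits to m at position i."""

    @abstractmethod
    def update(self, pp: Any, C: Any, diffs: Dict[int, Any], aux: Any) -> Tuple[Any, Any, Any]:
        """Apply message updates; return (C', U, aux') with update information U."""

    @abstractmethod
    def proof_update(self, pp: Any, C: Any, diffs: Dict[int, Any],
                     proof: Any, m_new: Any, i: int, U: Any) -> Any:
        """Return the proof for position i valid under the updated commitment."""
```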
Correctness of a VC requires that ∀ N = poly(λ), for all honestly generated parameters pp ← KeyGen(1^λ, N), given a commitment C to a vector of messages (m_0, …, m_N-1) ∈ℳ^N, generated by Commit_pp (and possibly followed by a sequence of updates), and an opening proof π_i for a message at index i, generated by Open_pp or ProofUpdate_pp, it holds that Verify_pp(C, m_i, i, π_i)=1 with overwhelming probability.
Security of a VC is expressed by the position-binding property:
A VC satisfies position-binding if ∀ i ∈ [N] and for every PPT adversary 𝒜, the following probability is negligible in λ:
Pr[ Verify_pp(C, m, i, π_i) = 1 ∧ Verify_pp(C, m', i, π'_i) = 1 ∧ m ≠ m' :
    pp ← KeyGen(1^λ, N); (C, m, m', π_i, π'_i) ← 𝒜(pp) ]
We relax the succinctness assumption of <cit.> and denote a value to be succinct in x if it is polylog(x).
§.§ KZG Polynomial Commitments
The KZG commitment scheme <cit.> commits to polynomials of degree bounded by ℓ using the following algorithms:
KeyGen(1^λ, ℓ) → pp outputs pp = (g, g^τ, g^(τ^2), …, g^(τ^ℓ)) as the public parameters, where g is the generator of the cyclic group 𝔾 and τ is a trapdoor (pp[i] = g^τ^i).
Commit(pp, ϕ(X)) → (C, )
The commitment to a polynomial ϕ(X) = ∑_{i=0}^{ℓ} a_i X^i is denoted by [ϕ(X)],
and is computed as [ϕ(X)] = ∏_{i=0}^{ℓ} (pp[i])^{a_i}.
The commitment algorithm outputs C = [ϕ(X)] and aux = ϕ(X).
Open_pp(m, i, aux) → π:
outputs the opening proof π_i that ϕ(i) = m, calculated as the commitment to the quotient polynomial (ϕ(X)-ϕ(i)) / (X-i).
Verify(C, m, i, π) accepts if the pairing check
e(C/g^m, g) = e(π, pp[1]/g^i )
holds.
We refer to <cit.> for the security analysis of this scheme.
§.§ Merkle Trees
Merkle Tree is a vector commitment using a collision-resistant hash function.
In a Merkle tree, hashes of the committed messages constitute the leaves of a c-ary tree of height h = log_c(N), where each inner node is found by hashing its children.
The depth of the root is set to be 0 and the depth of the leaves is ⌈log_c(N) ⌉.
The commitment function outputs the Merkle root as the commitment C and the Merkle tree as aux.
The opening proof for a message m_x at some index x is the sequence of h(c-1) hashes consisting of the siblings of the inner nodes on the path from the root to the hash of the message m_x.
We hereafter consider binary Merkle trees (c=2) and assume N=c^h = 2^h unless stated otherwise.
Let u_b_0, b_1,…,b_i-1, b_j ∈{0,1}, j ∈ [i], denote an inner node at depth i-1 that is reached from the root by choosing the left child at depth j if b_j=0 and the right child at depth j if b_j=1 (b_0 = ε, the empty string, and u_ε is the root).
By definition, for a message m_x at index x, H(m_x) = u_ε,bin(x).
§.§ Verkle Trees
A Verkle tree <cit.> is similar to a Merkle tree except that each inner node is calculated as the hash of the KZG polynomial commitment to its children.
Let b_j ∈ [c], j=1, …, h, denote the indices of the inner nodes on the path from the root to a leaf at index x, bin_c(x) = (b_1, …, b_h), relative to their siblings.
Define f_b_0,…,b_j, j ∈ [h], as the polynomials determined by the children of the inner nodes on the path from the root to the leaf, where f_b_0 = f_ε is the polynomial determined by the children of the root.
Let C_b_0,…,b_j = [f_b_0,…,b_j], j ∈ [h], denote the KZG commitments to these polynomials.
By definition, u_b_0,…,b_j = H(C_b_0,…,b_j), and the value of the polynomial f_b_0,…,b_j at index b_j+1 is u_b_0,…,b_j+1 for each j ∈ [h].
Here, u_b_0 = H(C_b_0) is the root of the tree, and u_b_0,…,b_h equals the hash H(m_x) of the message at index x.
For consistency, we define C_b_0,…,b_h as m_x.
For example, given h = 3 and c = 4, the inner nodes from the root to the message m_14 have the indices b_1 = 0, b_2 = 3 and b_3 = 2, and they are committed by the polynomials f_ε, f_ε,0 and f_ε,0,3 respectively.
The commitment function Commit_pp(m_0, …, m_N-1) outputs the root u_b_0 as the commitment C and the Verkle tree itself as aux.
The Verkle opening proof for the message m_x, bin_c(x) = (b_1, …, b_h), consists of two parts: (i) the KZG commitments (C_b_0,b_1, …, C_b_0,…, b_h-1) on the path from the root to the message, and (ii) a Verkle multiproof.
The goal of the Verkle multiproof is to show that the following evaluations hold for the inner nodes from the root to the message: f_b_0,…,b_j(b_j+1)=u_b_0,…,b_j+1 = H(C_b_0,…,b_j+1), j ∈ [h].
It has two components: (i) the commitment [g(X)] and (ii) the opening proof π' for the polynomial h(X)-g(X) at the point t=H(r,[g(X)]), where
g(X) = ∑_{j=0}^{h-1} r^j · (f_{b_0,…,b_j}(X) − u_{b_0,…,b_{j+1}}) / (X − b_{j+1}),    h(X) = ∑_{j=0}^{h-1} r^j · f_{b_0,…,b_j}(X) / (t − b_{j+1}),
and r=H(C_b_0,..,C_b_0,…,b_h-1,u_b_0,b_1,..,u_b_0,…,b_h,b_1,..,b_h).
Thus, Open_pp(m, i, aux) outputs ((C_b_0,b_1, …, C_b_0,…,b_h-1), ([g(X)], π')).
To verify a Verkle proof π = ((C_b_0,b_1, …, C_b_0,…,b_h), (D,π')), the algorithm Verify_pp(C, m, x, π) first computes r and t using u_b_0,…,b_j = H(C_b_0,…,b_j), j ∈ [h], and u_b_0,…,b_h = H(m).
Then, given the indices bin_c(x) = (b_1, …, b_h) and the commitments (C_b_0,b_1, …, C_b_0,…,b_h), it calculates
y = ∑_{j=0}^{h-1} r^j · u_{b_0,…,b_{j+1}} / (t − b_{j+1}),    E = ∑_{j=0}^{h-1} (r^j / (t − b_{j+1})) · C_{b_0,…,b_j}.
Finally, it returns true if the pairing check e(E − D − [y], [1]) = e(π', [X − t]) is satisfied.
As the degree c of a Verkle tree increases, size of the opening proofs and the runtime of the verification function decreases in proportion to the height h = log_cN of the tree.
This enables Verkle trees to achieve a short opening proof size for a large number of messages (as in the case of the Ethereum state trie) by adopting a large degree (e.g., c=256).
In comparison, each Merkle proof consists of (c-1) log_cN inner nodes, which grows linearly as c increases.
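As a quick sanity check of this comparison, the short script below (our own illustration) counts proof elements at N = 2^24 for several degrees, counting a Merkle proof as (c−1)·log_c(N) sibling hashes and a Verkle proof as log_c(N)−1 path commitments plus the two-element multiproof:

```python
import math

def height(N: int, c: int) -> int:
    # N is an exact power of c in these examples
    return round(math.log(N, c))

def merkle_proof_elems(N: int, c: int) -> int:
    # (c - 1) sibling hashes per level
    return (c - 1) * height(N, c)

def verkle_proof_elems(N: int, c: int) -> int:
    # path commitments (excluding root and leaf) plus ([g(X)], pi')
    return (height(N, c) - 1) + 2

N = 2 ** 24
for c in (2, 16, 256):
    print(c, merkle_proof_elems(N, c), verkle_proof_elems(N, c))
# c=2:   Merkle 24 hashes,  Verkle 25 group elements
# c=16:  Merkle 90 hashes,  Verkle 7 group elements
# c=256: Merkle 765 hashes, Verkle 4 group elements
```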
§ FORMALIZING THE DICHOTOMY OF VCS
We first analyze the trade-off between the number of operations required by proof updates and the size of the update information U by inspecting different types of dynamic VCs.
Recall that the number of updated messages is k ≤ N.
§.§ Updating KZG Commitments and Opening Proofs
In the subsequent sections, we assume that each user has access to a dictionary of KZG commitments to the Lagrange basis polynomials L_i(X), i ∈𝔽_p, and for each polynomial, its opening proofs at each point j ∈𝔽_p, j < N.
With the help of this table, one can instantiate a KZG based VC to the messages (m_i)_i ∈ [N], by treating them as the values of the degree N polynomial ϕ(X) at inputs i ∈𝔽_p, i<N.
We next analyze the complexity of the update information and the proof updates in this VC.
The update and proof update algorithms are described by Alg. <ref> in Appendix <ref>.
§.§.§ Update Information
Suppose the vector (i, m_i)_i ∈ [N] is updated at some index i such that m'_i ← m_i + δ for some δ∈𝔽_p.
Then, the polynomial ϕ(X) representing the vector is replaced by ϕ'(X) such that ϕ'(X) = ϕ(X) if X ≠ i, and ϕ'(i) = ϕ(i) + δ at X = i.
Thus, the new KZG commitment C' to ϕ'(X) is constructed from the commitment C to ϕ(X) as follows:
C' = [ϕ'(X)] = [ϕ(X) + δ·L_i(X)] = [ϕ(X)]·[L_i(X)]^δ = C·[L_i(X)]^δ = C·[L_i(X)]^{m'_i − m_i}.
If the vector is modified at k different indices i_1,...,i_k from message m_i_j to m'_i_j, j ∈ [k], then
the new commitment C' = [ϕ'(X)] becomes
[ϕ(X) + ∑_{j=1}^{k} (m'_{i_j} − m_{i_j}) · L_{i_j}(X)] = [ϕ(X)] · ∏_{j=1}^{k} [L_{i_j}(X)]^{m'_{i_j} − m_{i_j}} = C · ∏_{j=1}^{k} [L_{i_j}(X)]^{m'_{i_j} − m_{i_j}}.
Thus, the commitment can be updated given only the old and the new messages at the updated indices, besides the table.
§.§.§ Proof Update
Let π_x denote the opening proof of a polynomial ϕ(X) at a point (x,m_x).
When k messages are updated, the new opening proof π'_x can be found as a function of the old proof π_x and the opening proofs π_i_j,x of the Lagrange basis polynomials L_i_j(X), j ∈ [k], at the index x (m'_x = m_x+∑_j=1^k (m'_i_j-m_i_j) · 1_x=i_j is the new value of m_x after the k updates):
π'_x = [(ϕ'(X) − m_x − ∑_{j=1}^{k} δ_j · 1_{x=i_j}) / (X − x)] = π_x · ∏_{j=1}^{k} [(L_{i_j}(X) − L_{i_j}(x)) / (X − x)]^{m'_{i_j} − m_{i_j}} = π_x · ∏_{j=1}^{k} π_{i_j,x}^{m'_{i_j} − m_{i_j}},
where δ_j = m'_{i_j} − m_{i_j}.
Thus, the proof can be updated given only the old and the new messages at the updated indices, besides the table.
The update information is set to be the empty set, , U = ∅.
§.§.§ Complexity
The size of the update information is constant, i.e., Θ(1).
Each user can update its proof after k accesses to the dictionary, and in the worst case, Θ(k log|ℳ|) = Θ̃(k) group operations, as log(m'_i − m_i) ≤ log|ℳ| for all i ∈ [N].
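The two update rules above can be re-enacted in a toy group: the sketch below works in the order-11 subgroup of Z_23^* with a public stand-in trapdoor τ, so it is insecure and purely illustrative; it only checks that the homomorphic updates of C and π_x agree with recomputation from scratch.

```python
q, P, g = 11, 23, 2        # exponent field F_q, group = order-11 subgroup of Z_P^*, generator g
tau = 7                    # toy "trapdoor" (public here, secret in a real trusted setup)
N = 4                      # vector length, positions 0..3
inv = lambda a: pow(a, q - 2, q)            # inverse in F_q

def lagrange_at(i: int, x: int) -> int:
    """Evaluate the Lagrange basis polynomial L_i (nodes 0..N-1) at x over F_q."""
    v = 1
    for j in range(N):
        if j != i:
            v = v * (x - j) * inv(i - j) % q
    return v

def commit(msgs):
    """C = g^{phi(tau)} where phi interpolates msgs at 0..N-1."""
    return pow(g, sum(m * lagrange_at(i, tau) for i, m in enumerate(msgs)) % q, P)

def open_proof(msgs, x):
    """pi_x = g^{(phi(tau) - phi(x)) / (tau - x)}."""
    phi = lambda z: sum(m * lagrange_at(i, z) for i, m in enumerate(msgs)) % q
    return pow(g, (phi(tau) - phi(x)) * inv(tau - x) % q, P)

msgs, x = [3, 5, 7, 2], 0
C, pi_x = commit(msgs), open_proof(msgs, x)

diffs = {1: 4, 2: 9}                         # delta_j = m'_j - m_j in F_q
new_msgs = list(msgs)
for i, d in diffs.items():
    new_msgs[i] = (new_msgs[i] + d) % q

# homomorphic updates: C' = C * [L_i]^delta,  pi'_x = pi_x * prod_i pi_{i,x}^delta
C_new, pi_new = C, pi_x
for i, d in diffs.items():
    C_new = C_new * pow(pow(g, lagrange_at(i, tau), P), d, P) % P
    q_ix = (lagrange_at(i, tau) - lagrange_at(i, x)) * inv(tau - x) % q   # quotient evaluation
    pi_new = pi_new * pow(pow(g, q_ix, P), d, P) % P

assert C_new == commit(new_msgs)
assert pi_new == open_proof(new_msgs, x)
```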
§.§ Updating Merkle Trees and Opening Proofs
We next consider a Merkle tree and analyze the complexity of the update information size and the runtime for proof updates.
A simple update scheme would be recalculating the new Merkle tree given all of the old messages or the old inner nodes of the Merkle tree, and the message updates.
However, this implies a large complexity for the runtime of the proof update algorithm that scales as Ω(k) when users keep track of the inner nodes, and as Ω(N) when the users recalculate the tree from scratch at each batch of updates.
Moreover, in many applications, the users do not have access to any messages or inner nodes besides those that are part of the Merkle proof held by the user.
Hence, in the following sections, we describe update and proof update algorithms
that reduce the runtime complexity of the proof updates at the expense of larger update information (Alg. <ref> in Appendix <ref>).
§.§.§ Update Information
Suppose the vector (i, m_i)_i ∈ [N] is updated at some index x, bin(x) = (b_1,…,b_h), to m'_x.
Then, the root C=u_b_0 and the inner nodes (u_b_0,b_1, …, u_b_0,b_1,…,b_h), bin(x) = (b_1,…,b_h), must be updated to reflect the change at that index.
Given the old inner nodes, the new values for the root and these inner nodes, denoted by C'=u'_b_0 and (u'_b_0,b_1, …, u'_b_0,b_1,…,b_h), are calculated recursively as follows:
u'_{b_0,b_1,…,b_h} ← H(m'_x),
u'_{b_0,b_1,…,b_j} ← H(u'_{b_0,b_1,…,b_j,0}, u_{b_0,b_1,…,b_j,1})   if b_{j+1} = 0, j < h,
u'_{b_0,b_1,…,b_j} ← H(u_{b_0,b_1,…,b_j,0}, u'_{b_0,b_1,…,b_j,1})   if b_{j+1} = 1, j < h.
When the messages are modified at k different points i_j, j ∈ [k], the calculation above is repeated k times for each update.
As the updated inner nodes are parts of the Merkle proofs, the update information consists of the new values at the inner nodes listed from the smallest to the largest depth in the canonical left to right order.
For instance,
U = ((ε, u'_ε), (0, u'_0), (1, u'_1), (00, u'_00), (10, u'_10), …)
implies that the root u_ε and the inner nodes u_0, u_1, u_00 and u_10 were updated after k messages were modified at the leaves of the Merkle tree.
We reference the updated inner nodes using their indices (i.e., U[b_0, b_1 … b_j] = v when (b_1 … b_j, v) ∈ U).
§.§.§ Proof Update
The Merkle proof π_x for a message at index
x, bin(x) = (b_1, …, b_h), is the sequence of siblings (u_{b̄_1}, u_{b_1,b̄_2}, …, u_{b_1,…,b_{h-1},b̄_h}), where b̄_j = 1 − b_j.
When k messages are updated, some of the inner nodes within the proof might have changed.
A user holding the Merkle proof for index x can find the new values of these inner nodes by querying the update information with their indices.
§.§.§ Complexity
Upon receiving the update information U, each user can update its proof in Θ(log^2(N) + |H| log(N)) = Θ̃(1) time by running a binary search algorithm to find the updated inner nodes within U that are part of its Merkle proof, and reading the new values at these nodes.
Since modifying each new message results in h = log(N) updates at the inner nodes and some of the updates overlap, |U| = Θ(k log(N/k) (log(N)+|H|)) = Θ̃(k)|H|, as each updated inner node is represented by its index of size Θ(log(N)) and its new value of size |H| in U.
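A minimal sketch of this update-information mechanism for a binary SHA256 Merkle tree follows; the bit-path addressing of nodes and the helper names are our own choices, and the proof holder is assumed to track the path nodes and their siblings.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(msgs):
    """Return {bit-path: node} for a binary Merkle tree over msgs (len = power of two)."""
    h = (len(msgs) - 1).bit_length()
    nodes = {format(i, f"0{h}b"): H(m) for i, m in enumerate(msgs)}
    for depth in range(h - 1, -1, -1):
        for i in range(2 ** depth):
            p = format(i, f"0{depth}b") if depth else ""
            nodes[p] = H(nodes[p + "0"], nodes[p + "1"])
    return nodes

def make_update_info(old_nodes, msgs, updates):
    """Manager side: recompute the full tree and collect the inner nodes that changed."""
    new_msgs = list(msgs)
    for i, m in updates.items():
        new_msgs[i] = m
    new_nodes = build_tree(new_msgs)
    U = {p: v for p, v in new_nodes.items() if old_nodes[p] != v}
    return new_nodes, U

def proof_paths(index, height):
    """Bit-paths of the nodes a proof holder for `index` tracks (path nodes and siblings)."""
    bits = format(index, f"0{height}b")
    paths = []
    for d in range(1, height + 1):
        paths.append(bits[:d])
        paths.append(bits[:d - 1] + ("1" if bits[d - 1] == "0" else "0"))
    return paths

msgs = [bytes([i]) for i in range(8)]
nodes = build_tree(msgs)
user_proof = {p: nodes[p] for p in proof_paths(5, 3)}

new_nodes, U = make_update_info(nodes, msgs, {0: b"\xaa", 6: b"\xbb"})
for p in user_proof:                     # user side: pure lookups into U
    if p in U:
        user_proof[p] = U[p]

assert all(user_proof[p] == new_nodes[p] for p in user_proof)
```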
§.§ Dichotomy of VCs
In the case of KZG commitments, |U| = Θ(1), and there is no information overhead on top of the message updates.
For Merkle trees with an efficient proof update algorithm, |U| = Θ̃(k)|H|,
thus there is an extra term scaling in Θ̃(k)|H| = Θ̃(k)λ, since |H| = Ω(λ) for collision-resistant hash functions.
In contrast, for KZG commitments, each user has to do Θ(k) group operations to update its opening proof; whereas in Merkle trees, each user can update its proof in Θ̃(1) time, which does not depend on k.
Hence, KZG commitments outperform Merkle trees in terms of the update information size, whereas Merkle trees outperform KZG commitments in terms of the time complexity of proof updates.
Table <ref> generalizes this observation to a dichotomy between algebraic VC schemes and tree-based ones favoring shorter runtimes for proof updates.
The algebraic and tree-based ones outperform each other in terms of the update information size and runtime complexity respectively.
§ VECTOR COMMITMENTS WITH SUBLINEAR UPDATE
We would like to resolve the separation in Table <ref> and obtain a vector commitment, where both the size of the update information and the complexity of proof updates have a sublinear dependence on k.
In particular, |U| = Θ(g_1(k)λ) in the worst case,
and the proof update algorithm requires at most Θ(g_2(k)) operations,
where both g_1(k) and g_2(k) are o(k).
We say that such a VC supports sublinear update.
In this section, we describe a family of VCs with sublinear update, parameterized by the values ν∈ (0,1) and characterized by the functions (g_1,g_2) = (k^ν, k^1-ν).
§.§ Homomorphic Merkle Trees
We first introduce homomorphic Merkle trees where messages placed in the leaves take values in a set ℳ.
We will use two collision-resistant hash functions f̃ : 𝒟×𝒟 → ℛ and f : ℳ → ℛ,
where both ℳ and 𝒟 are vector spaces over some field 𝔽,
and ℛ is an arbitrary finite set.
We will also need an injective mapping g: ℛ→𝒟, which need not be efficiently computable.
We use g^-1: 𝒟→ℛ to denote the inverse of g,
meaning that g^-1(g(x)) = x for all x ∈ℛ.
We require that g^-1 be efficiently computable.
Now, for j ∈ [h], where h is the height of the tree, every node u_b_0,…,b_j∈𝒟 of the homomorphic Merkle tree is characterized by the following expressions:
for a leaf node: g^{-1}(u_{b_0,bin(i)}) = f(m_i),
for an internal node: g^{-1}(u_{b_0,…,b_j}) = f̃(u_{b_0,…,b_j,0}, u_{b_0,…,b_j,1}) for j < h.
The homomorphic property of the Merkle tree refers to the fact that
there are efficiently computable functions
h_i,j: ℳ→𝒟 for i ∈ [N] and j ∈ [h],
such that every inner node u_b_0,…,b_j∈𝒟 can be expressed as
u_{b_0} = ∑_{i ∈ [N]} h_{i,0}(m_i),
u_{b_0,…,b_j} = ∑_{i : bin(i)[0:j-1]=(b_1,…,b_j)} h_{i,j}(m_i).
We refer to the function h_i,j as a partial digest function
and refer to h_i,j(m_i) as the partial digest of m_i.
In a homomorphic Merkle tree, every internal node is the sum of the partial digests of the leaves under that node.
We will show in Section <ref> that each function h_i,j can be expressed
as an iterated composition of the functions f and f̃.
Evaluating h_i,j requires evaluating the functions f and f̃ exactly h-j times.
Opening proof for a message consists of both children of the internal nodes on the path from the message to the root (as opposed to Merkle opening proofs that contain only the siblings of the internal nodes on the path).
For instance, the opening proof for the message m_i at leaf index i, with bin(i) = (b_1,…,b_h), is (i, (u_b_0,…,b_j,0, u_b_0,…,b_j,1)_j=0,…,h-1).
Opening proofs are verified using the functions f and f̃ (not by using the functions h_i,j).
To verify an opening proof (i, (u_b_0,…,b_j,0,u_b_0,…,b_j,1)_j=0,…,h-1) for a message m_i with respect to the root u_b_0, the verifier checks if the following equalities hold:
for the leaf: g^{-1}(u_{b_0,bin(i)}) = f(m_i),
for the internal nodes: g^{-1}(u_{b_0,…,b_j}) = f̃(u_{b_0,…,b_j,0}, u_{b_0,…,b_j,1}) for j = h-1, …, 0.
If so, it accepts the proof, and otherwise it outputs reject.
As an example, consider a homomorphic Merkle tree that commits to four messsages m_0,m_1,m_2,m_3.
Then, its root u_ε and inner nodes u_ε,0, u_ε,1, u_ε,0,0, u_ε,0,1, u_ε,1,0, u_ε,1,1 can be calculated as follows:
u_ε = h_{0,0}(m_0) + h_{1,0}(m_1) + h_{2,0}(m_2) + h_{3,0}(m_3);    u_{ε,0,0} = h_{0,2}(m_0),
u_{ε,0} = h_{0,1}(m_0) + h_{1,1}(m_1);    u_{ε,0,1} = h_{1,2}(m_1),
u_{ε,1} = h_{2,1}(m_2) + h_{3,1}(m_3);    u_{ε,1,0} = h_{2,2}(m_2),
u_{ε,1,1} = h_{3,2}(m_3).
The opening proof for m_3 is given by (3, ((u_ε,0, u_ε,1), (u_ε,1,0, u_ε,1,1))), and verified by checking the following equations:
for u_{ε,1,1}: g^{-1}(u_{ε,1,1}) = f(m_3),
for u_{ε,1}: g^{-1}(u_{ε,1}) = f̃(u_{ε,1,0}, u_{ε,1,1}),
for u_ε: g^{-1}(u_ε) = f̃(u_{ε,0}, u_{ε,1}).
It now follows that
when a message m_i is updated to m'_i, each inner node on the path from the leaf to the root can be updated from u_b_0,…,b_j to u'_b_0,…,b_j using the functions h_i,j as follows:
u'_{b_0,…,b_j} = h_{i,j}(m'_i) + ∑_{x ≠ i : bin(x)[0:j-1] = (b_1,…,b_j)} h_{x,j}(m_x) = u_{b_0,…,b_j} + h_{i,j}(m'_i) − h_{i,j}(m_i).
When the partial digest functions are linear in their input, the expression h_i,j(m'_i) - h_i,j(m_i) can be written as
h_i,j(m'_i) - h_i,j(m_i) = sign(m'_i-m_i)h_i,j(|m'_i-m_i|).
This lets us calculate the updated internal node using only the knowledge of the message diff m_i'-m_i.
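The following toy sketch illustrates this partial-digest view with h_{i,j}(m) taken to be a fixed per-(leaf, depth) scalar multiple of m modulo a prime; this choice is linear but not collision-resistant, so it only demonstrates that inner nodes are sums of partial digests and can be updated from message diffs alone.

```python
q = 10_007                      # toy modulus; a real scheme uses a lattice-based hash
N, h = 8, 3                     # 8 leaves, height 3

# fixed public coefficients standing in for the linear maps h_{i,j}
coeff = {(i, j): (31 * i + 7 * j + 3) % q for i in range(N) for j in range(h + 1)}

def partial_digest(i: int, j: int, m: int) -> int:
    return coeff[(i, j)] * m % q                      # linear in m

def inner_node(msgs, path: str) -> int:
    """Node at bit-path `path`: sum of partial digests of the leaves below it."""
    j = len(path)
    return sum(partial_digest(i, j, m) for i, m in enumerate(msgs)
               if format(i, f"0{h}b").startswith(path)) % q

msgs = [5, 1, 4, 1, 5, 9, 2, 6]
old_root, old_left = inner_node(msgs, ""), inner_node(msgs, "0")

# update m_2: 4 -> 10; any inner node above leaf 2 moves by h_{2,j}(diff)
i, new_val = 2, 10
diff = (new_val - msgs[i]) % q
new_msgs = list(msgs)
new_msgs[i] = new_val

assert inner_node(new_msgs, "") == (old_root + partial_digest(i, 0, diff)) % q
assert inner_node(new_msgs, "0") == (old_left + partial_digest(i, 1, diff)) % q
```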
We provide examples of homomorphic Merkle tree constructions in Section <ref> with linear partial digest functions h_i,j.
Homomorphic Merkle proofs in these constructions consist of the two siblings of the inner nodes on the path from the proven message to the root (Section <ref>).
Unlike in Section <ref>, homomorphic Merkle trees enable calculating the new inner nodes after message updates using only the new and the old updated messages, in particular using only their difference.
Hence, we can construct a tree that achieves the same complexity for the update information size as algebraic VCs, albeit at the expense of the proof update complexity, without requiring the users to keep track of the old messages or to calculate the tree from scratch given all messages (Appendix <ref> for further discussion).
This is in contrast to Merkle trees based on SHA256.
The update and proof update algorithms of such a homomorphic Merkle tree with no structured update information and the same asymptotic complexity as algebraic VCs is described in Appendix <ref>.
Since the homomorphic Merkle trees can achieve both extremes in terms of update information size and update runtime (Table <ref>), with a smart structuring of the update information, they can support sublinear update.
We show how in the next subsection.
§.§ Structuring the Update Information
We now describe the new update and proof update algorithms that enable homomorphic Merkle trees to achieve sublinear complexity as a function of the parameter ν (Alg. <ref>).
§.§.§ Update Information
When the messages (i_j, m_i_j)_j ∈ [k] change to (i_j, m'_i_j)_j ∈ [k], the update information U is generated recursively using the following algorithm:
* Start at the root u_b_0. Terminate the recursion at an inner node if there are k^1-ν or less updated messages under that node.
* If there are more than k^1-ν updated messages with indices ≥ N/2, i.e., under the right child, then publish the new right child of the root as part of U, and apply the same algorithm to the subtree rooted at the right child, with u_b_0 and N replaced by u_b_0,1 and N/2 respectively.
* If there are more than k^1-ν updated messages with indices less than N/2, i.e., under the left child, then publish the new left child of the root as part of U, and apply the same algorithm to the subtree rooted at the left child, with u_b_0 and N replaced by u_b_0,0 and N/2 respectively.
The new values of the inner nodes included in U are again listed from the smallest to the largest depth in the canonical left to right order.
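The recursion can be written compactly as below; this is our own sketch for a binary tree with threshold k^{1−ν}, and the final loop checks the property used later in the complexity analysis, namely that any non-published child of a published node has at most k^{1−ν} updated leaves below it.

```python
def select_published_nodes(updated, N, threshold):
    """Return the bit-paths of the inner nodes whose new values are published in U.

    A child is published iff more than `threshold` updated leaves lie below it;
    the recursion stops at nodes with at most `threshold` updates below them.
    """
    published = []

    def recurse(path, lo, hi):
        mid = (lo + hi) // 2
        left = [i for i in updated if lo <= i < mid]
        right = [i for i in updated if mid <= i < hi]
        if len(left) > threshold:
            published.append(path + "0")
            recurse(path + "0", lo, mid)
        if len(right) > threshold:
            published.append(path + "1")
            recurse(path + "1", mid, hi)

    if len(updated) > threshold:
        recurse("", 0, N)
    return published

def updates_below(updated, path, N):
    h = (N - 1).bit_length()
    return sum(format(i, f"0{h}b").startswith(path) for i in updated)

N, k, nu = 1024, 64, 0.5
updated = list(range(0, 2 * k, 2))            # 64 updated leaves, clustered on the left
threshold = k ** (1 - nu)                     # k^(1 - nu) = 8
U_nodes = select_published_nodes(updated, N, threshold)

# any non-published child of a published node holds at most k^(1-nu) updated leaves
for path in U_nodes:
    for child in (path + "0", path + "1"):
        if child not in U_nodes:
            assert updates_below(updated, child, N) <= threshold
```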
§.§.§ Proof Update
When the messages (i_j, m_i_j)_j ∈ [k] are updated to (i_j, m'_i_j)_j ∈ [k], a user first retrieves the inner nodes within its Merkle proof that are published as part of the update information.
It then calculates the non-published inner nodes within the proof using the partial digests.
For instance, consider a user with the proof (u_b_1, u_b_1,b_2, …, u_b_1,b_2,…,b_h) for some message m_x, bin(x) = (b_1, …, b_h).
To update the proof, the user first checks the update information U and replaces the inner nodes whose new values are provided by U:
u'_{b_1,…,b_d} ← U[b_1 … b_d], d ∈ [h], if U[b_1 … b_d] ≠ ⊥.
Otherwise, the user finds the new values at the nodes u_b_1,…,b_d, d ∈ [h], using the functions h_x,d:
u'_{b_1,…,b_{d-1},b_d} = u_{b_1,…,b_{d-1},b_d} + ∑_{j ∈ [k]} 1_{bin(i_j)[0:d-1] = (b_1,…,b_{d-1},b_d)} · sign(m'_{i_j} − m_{i_j}) · h_{i_j,d}(|m'_{i_j} − m_{i_j}|).
§.§.§ Complexity
Finally, we prove bounds on the complexity given by these algorithms:
Complexity of the update information size and the runtime of proof updates are as follows: g_1(k) = k^ν and g_2(k) = k^1-ν.
We first show that this VC publishes O(k^ν) new inner nodes in the worst case.
Let 𝒰 denote the subset of the inner nodes published by the algorithm as part of U such that no child of a node u ∈𝒰 is published.
Then, there must be over k^1-ν updated messages within the subtree rooted at each node u ∈𝒰.
Since there are k updated messages, and by definition of 𝒰, the subtrees rooted at the nodes in 𝒰 do not intersect at any node, there must be less than k/k^1-ν = k^ν inner nodes in 𝒰.
Since the total number of published inner nodes is given by 𝒰 and the nodes on the path from the root to each node u ∈𝒰, this number is bounded by k^ν log(N) = Θ̃(k^ν).
Hence,
|U| = Θ(k^ν log(N)(log(N)+|H|)) = Θ̃(k^ν)|H| = Θ̃(k^ν) λ,
which implies g_1(k) = k^ν.
For each inner node in its Merkle proof, the user can check if a new value for the node was provided as part of U, and replace the node if that is the case, in at most Θ(log(N)+|H|) time by running a binary search algorithm over U.
On the other hand, if the new value of a node in the proof is not given by U, the user can calculate the new value after at most k^1-νlog(N) function evaluations.
This is because there can be at most k^1-ν updated messages within the subtree rooted at an inner node, whose new value was not published as part of U.
This makes the total time complexity of a proof update at most
Θ(log(N)(log(N) + |H| + k^{1-ν} log(N) T_f)) = Θ̃(k^{1-ν}) T_f,
which implies g_2(k) = k^1-ν.
§.§ Constructions for Homomorphic Merkle Trees
Homomorphic Merkle trees were proposed by <cit.>.
These hash functions are lattice-based, and their collision-resistance is proven by reduction to the hardness of the gap version of the shortest vector problem (𝖦𝖠𝖯𝖲𝖵𝖯_γ), which itself follows from the hardness of the small integer solution problem.
We next describe the construction introduced by <cit.>, which is similar to those proposed by later works <cit.> (an alternative construction is provided in Appendix <ref>).
Its correctness and security follow from <cit.>.
Let L(𝐌) denote the lattice defined by the basis vectors 𝐌⊂ℤ^k × m_q for appropriately selected parameters k,m,q, where m = 2 k log q.
Consider vectors u ∈{0, …, t}^k log q, where t is a small integer.
The homomorphic hash functions f : ℤ^{k log q} → L(𝐌) and f̃ : ℤ^{k log q} × ℤ^{k log q} → L(𝐌) used by <cit.> are defined as f(x) = 𝐌x and f̃(x,y) = 𝐌𝐔 x + 𝐌𝐃 y respectively.
Here, 𝐔 and 𝐃 are special matrices that double the dimension of the multiplied vector and shift it up or down respectively.
The remaining entries are set to zero.
For convenience, we define 𝐋 = 𝐌𝐔 and 𝐑 = 𝐌𝐃.
Since the domain and range of the hash functions are different, to ensure the Merkle tree's homomorphism, the authors define a special mapping g : ℤ_q^k → ℤ_q^{k log q} from the range of the hash functions to their domain.
Here, g(.) takes a vector 𝐯 ∈ ℤ_q^k as input and outputs a radix-2 representation for 𝐯.
However, as there can be many radix-2 representations of a vector, to help choose a representation that yields itself to homomorphism, the authors prove the following result: for any x_1, x_2, …, x_t ∈ ℤ_q^k, there exists a short radix-2 representation g(.) such that g(x_1 + x_2 + … + x_t mod q) = b(x_1) + b(x_2) + … + b(x_t) mod q ∈ {0, …, t}^{k log q}, where the function b : ℤ_q^k → {0,1}^{k log q} returns the binary representation of the input vector.
This equality enables the mapping g(.) to preserve the hash functions' original homomorphic property.
Then, given an inner node u_b_0,…,b_j as input, the homomorphic Merkle tree uses the short radix-2 representation g(.) that enforces the following equality: g(u_b_0,…,b_j) = g(𝐋 u_b_0,…,b_j,0 + 𝐑 u_b_0,…,b_j,1 mod q) = b(𝐋 u_b_0,…,b_j,0) + b(𝐑 u_b_0,…,b_j,1) mod q.
Finally, this enables calculating the value of each inner node as a sum of the partial digests h_{i,j}(.) of the messages m_i under the node u_b_0,…,b_j (i.e., (m_i)_{bin(i)[0:j-1] = (b_1,…,b_j)}) as outlined in Section <ref>, i.e., u_b_0,…,b_j equals
𝐋 g(u_{b_0,b_1,…,b_j,0}) + 𝐑 g(u_{b_0,b_1,…,b_j,1})
= 𝐋 g(𝐋 g(u_{b_0,…,b_j,0,0}) + 𝐑 g(u_{b_0,…,b_j,0,1})) + 𝐑 g(𝐋 g(u_{b_0,…,b_j,1,0}) + 𝐑 g(u_{b_0,…,b_j,1,1}))
= 𝐋 b(𝐋 g(u_{b_0,…,b_j,0,0})) + 𝐋 b(𝐑 g(u_{b_0,…,b_j,0,1})) + 𝐑 b(𝐋 g(u_{b_0,…,b_j,1,0})) + 𝐑 b(𝐑 g(u_{b_0,…,b_j,1,1}))
= ∑_{i : bin(i)[0:j-1]=(b_1,…,b_j)} h_{i,j}(m_i),
where h_{i,j}(.) is expressed in terms of the bits bin(i)[j:h-1] = (b'_1, …, b'_{h-j}):
h_{i,j}(m_i) = f_{b'_1}(f_{b'_2}(… f_{b'_{h-j}}(f(m_i)) …))
Here, f_0(.) and f_1(.) are defined as 𝐋b(.) and 𝐑b(.) respectively.
Since b(.), binary expansion, is a linear operation and matrix multiplication is linear, h_i,j(.) is linear in its input.
Opening proof of a message m consists of its index and g(α_i) and g(β_i), i ∈ [h], h = log(N), where α_i and β_i are the children of the inner nodes on the path from m to the root.
The proof can be verified in log(N) time by iteratively checking if f(m) = g^-1(α_h) (or = g^-1(β_h)) and f̃(g(α_i),g(β_i)) = g^-1(α_i-1) (or =g^-1(β_i-1) depending on the message index), where g^-1 returns a number given its radix-2 representation <cit.>.
Note that both f and f̃ are homomorphic hash functions <cit.>.
Other examples of homomorphic hash functions include Pedersen hashes and KZG commitments.
However, the homomorphic property of the hash function is not sufficient for constructing a homomorphic Merkle tree when the function is combined with the output of other functions in a serial manner as in Merkle trees.
For the lattice-based function, this was possible because of repeated linearity <cit.>, which refers to the existence of a linear mapping g(.) from the range to the domain of the hash function.
This mapping enabled the iterative hashing within the Merkle tree to preserve the linearity of the hash function.
Such repeated linearity does not exist for Pedersen hashes and KZG commitments as a linear mapping from the range to the domain would imply the violation of the discrete log assumption.
That is why Verkle trees based on KZG commitments are not homomorphic and cannot support sublinear update.
§.§ A Concrete Evaluation
Suppose the Ethereum state is persisted using the homomorphic Merkle tree construction of <cit.> with the trade-off parameter ν = 1/2.
We next estimate the size of the update information and the proof update time after observing an Ethereum block with ERC20 token transfers.
Suppose the block has the target size of 15 million gas <cit.>, and each token transfer updates the balance of two distinct accounts stored at separate leaves of the homomorphic Merkle tree.
Since each ERC20 token transfer consumes approximately 65,000 gas, there are ∼ 230 such transactions in the block, and the block updates k = 460 accounts.
Suppose the homomorphic Merkle tree has degree 2 and commits to N = 256^3 = 2^24 accounts.
For comparison, 256^3 ≈ 16.7 million, matching in magnitude the total number of cumulative unique Ethereum addresses, which is 200 million as of 2023 <cit.>.
Each opening proof consists of 2log(N) = 48 inner nodes.
When 460 accounts are updated, in the worst case, the update information consists of ⌈√(k)⌉log(N) = 528 inner nodes.
To evaluate its size, we use the parameters calculated by <cit.> for secure instantiations of the homomorphic Merkle trees from both their paper and <cit.>.
Since the parameters for <cit.> result in a large inner node size on the order of hundreds of MBs, our evaluation takes the size of an inner node as that of <cit.>, namely |H| = 0.21 MB (which is equal to the key size in <cit.>).
This implies an update information size of |U| = 110.88 MBytes and an opening proof size of |π| = 10.08 MBytes.
As for update time, in the worst case, each user has to calculate the partial digests of 44 updated messages at each height of the homomorphic Merkle tree, , the effect of these updated messages on each inner node of its opening proof.
Calculating the partial digest of a message at height h measured from the leaves requires h evaluations of the hash function.
This implies a proof update complexity of 2 ∑_i=0^logN-1 i min(⌈√(k)⌉, 2^i) = 11,900 hash evaluations.
To find numerical upper bounds for the update time, we use the hash function evaluation times, namely T_f = 26.84 and T_f = 2.74 ms, published by <cit.> for both the hash function in <cit.> and their new and more performant function (these times are for commodity hardware; <cit.> for the details).
This gives an upper bound of 319.4 and 32.6 seconds for the update time using the hash functions in <cit.> and <cit.> respectively.
Based on the benchmarks for the practical hash function introduced in <cit.>, Table <ref> compares the number of published inner nodes ⌈ k^ν⌉log(N), the total update information size ⌈ k^ν⌉log(N) |H| (assuming that the size of each inner node is |H| upper bounded by 0.21 MBytes), the number of hash function evaluations per proof update 2 ∑_i=0^logN-1 i min(⌈ k^1-ν⌉, 2^i) and the proof update time 2 ∑_i=0^logN-1 i min(⌈ k^1-ν⌉, 2^i) T_f (assuming that each hash evaluation takes less than T_f = 2.74 ms) at ν = 0, 1/4, 1/2, 3/4, 1.
The degree of the homomorphic Merkle tree and the opening proof size are fixed at 2 and 48 inner nodes (|π| = 10.08) respectively.
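The figures in this subsection, and the entries of the table just described, follow mechanically from the stated formulas; the short script below recomputes them for N = 2^24, k = 460, |H| = 0.21 MB and T_f = 2.74 ms.

```python
import math

N, k = 2 ** 24, 460
logN = int(math.log2(N))                 # 24
H_mb, T_f_ms = 0.21, 2.74                # inner-node size (MB), per-hash-evaluation time (ms)

def published_nodes(nu):
    # worst-case number of inner nodes in U: ceil(k^nu) * log(N)
    return math.ceil(k ** nu) * logN

def hash_evals(nu):
    # per proof update: 2 * sum_i i * min(ceil(k^(1-nu)), 2^i)
    t = math.ceil(k ** (1 - nu))
    return 2 * sum(i * min(t, 2 ** i) for i in range(logN))

for nu in (0, 0.25, 0.5, 0.75, 1):
    n, e = published_nodes(nu), hash_evals(nu)
    print(f"nu={nu}: |U| = {n} nodes ({n * H_mb:.2f} MB), "
          f"{e} hash evals ({e * T_f_ms / 1000:.1f} s)")

# nu = 1/2 reproduces the numbers quoted in the text (528 nodes, 11,900 evaluations)
assert published_nodes(0.5) == 528
assert hash_evals(0.5) == 11_900
```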
§ UPDATING VERKLE TREES AND OPENING PROOFS
We now describe the update and proof update functions for Verkle trees (Algs. <ref> and <ref> respectively).
Since Verkle trees were proposed to support stateless clients,
we describe an update scheme that minimizes the runtime complexity of proof updates and does not require the users to download the updated messages or have access to old inner nodes.
As Verkle trees do not support sublinear update, we numerically estimate the size of the update information and the complexity of proof updates in Section <ref>.
§.§ Update Information
Suppose the vector (i, m_i)_i ∈ [N] is modified at some index x, bin_c(x) = (b_1, …, b_h), to be m'_x.
Since each inner node is the hash of a KZG commitment, the new inner nodes u'_b_0,…,b_j = H(C'_b_0,…,b_j), j ∈ [h], can be found as a function of the old commitments at the nodes and the powers of the Lagrange basis polynomials as described in Section <ref>:
C'_{b_0,…,b_h} ← m'_x,    C'_{b_0,…,b_j} ← C_{b_0,…,b_j} · [L_{b_{j+1}}]^{u'_{b_0,…,b_{j+1}} − u_{b_0,…,b_{j+1}}},  j ∈ [h].
When k messages are updated, the above calculation is repeated k times for each update.
Update information U consists of the new values of the KZG commitments on the path from the updated messages to the Verkle root akin to the Merkle trees, ordered in the canonical top-to-bottom and left-to-right order.
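This path recursion reuses the same exponent arithmetic as the KZG updates; the toy sketch below (the same insecure order-11 group and public τ as before, with SHA256 reduced mod 11 standing in for H) updates one root-to-leaf path of a depth-2, degree-4 tree and checks the result against recomputation.

```python
import hashlib

q, P, g, tau, c = 11, 23, 2, 7, 4        # order-11 subgroup of Z_23^*, toy trapdoor, arity 4
inv = lambda a: pow(a, q - 2, q)

def lagrange_at(i, x):                    # L_i over nodes 0..c-1, evaluated in F_q
    v = 1
    for j in range(c):
        if j != i:
            v = v * (x - j) * inv(i - j) % q
    return v

def commit(values):                       # C = g^{f(tau)} where f interpolates `values`
    return pow(g, sum(v * lagrange_at(i, tau) for i, v in enumerate(values)) % q, P)

def H(x):                                 # toy hash into the exponent field F_q
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % q

msgs = [[(i * c + j) % q for j in range(c)] for i in range(c)]   # c^2 leaves
child_u = [[H(m) for m in row] for row in msgs]                  # u at depth 2
child_C = [commit(row) for row in child_u]                       # C_{eps,i}
root_u = [H(C) for C in child_C]                                 # u_{eps,i} = H(C_{eps,i})
root_C = commit(root_u)

# update the message at path (b1, b2) = (1, 3)
b1, b2, new_m = 1, 3, 9
new_leaf_u = H(new_m)
new_child_C = child_C[b1] * pow(pow(g, lagrange_at(b2, tau), P),
                                (new_leaf_u - child_u[b1][b2]) % q, P) % P
new_root_C = root_C * pow(pow(g, lagrange_at(b1, tau), P),
                          (H(new_child_C) - root_u[b1]) % q, P) % P

# cross-check against recomputation from scratch
child_u[b1][b2] = new_leaf_u
assert new_child_C == commit(child_u[b1])
root_u[b1] = H(new_child_C)
assert new_root_C == commit(root_u)
```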
§.§ Verkle Proofs
Let π_x denote the Verkle proof of some message m_x at index x, bin_c(x) = (b_1,…,b_h): π_x = ((C_b_0,b_1, …, C_b_0,…,b_h-1), ([g(X)], π)).
We define π^f_x as the opening proof for index x within polynomial f.
We observe that the commitment [g(X)] and the proof π can be expressed as functions of the opening proofs of the inner nodes u_b_0,b_1, …, u_b_0,…,b_h at the indices b_1,…,b_h within the polynomials f_b_0, …, f_b_0,…,b_h-1, respectively:
[g(X)] = [∑_{j=0}^{h-1} r^j · (f_{b_0,…,b_j}(X) − u_{b_0,…,b_{j+1}}) / (X − b_{j+1})]
= ∏_{j=0}^{h-1} [(f_{b_0,…,b_j}(X) − u_{b_0,…,b_{j+1}}) / (X − b_{j+1})]^{r^j} = ∏_{j=0}^{h-1} (π^{f_{b_0,…,b_j}}_{b_{j+1}})^{r^j}.
Similarly, the opening proof π=π^(h-g)_t for index t within the polynomial h(X)-g(X) can be expressed as follows (Appendix <ref>):
[(h(X) − g(X) − (h(t) − g(t))) / (X − t)] = ∏_{j=0}^{h-1} [(f_{b_0,…,b_j}(X) − u_{b_0,…,b_{j+1}}) / (X − b_{j+1})]^{r^j/(t − b_{j+1})}
= ∏_{j=0}^{h-1} (π^{f_{b_0,…,b_j}}_{b_{j+1}})^{r^j/(t − b_{j+1})}.
We assume that each user holding the Verkle proof π_x for some index x, bin_c(x) = (b_1,…,b_h), also holds the opening proofs π^{f_{b_0,…,b_j}}_{b_{j+1}}, j ∈ [h], in memory.
As we will see in the next section, the user also holds the KZG commitments at the children of the inner nodes on the path from the root to the message m_x, C_b_0,…,b_j,i for all j ∈ [h] and i ∈ [c] in memory.
These opening proofs and KZG commitments are not broadcast as part of any proof; however, they are needed for the user to locally update its Verkle proof after message updates.
§.§ Proof Update
When the messages (i_j, m_i_j)_j ∈ [k] are updated to (i_j, m'_i_j)_j ∈ [k], to calculate the new Verkle proof π'_x, the user must obtain the new commitments C'_b_0, …, C'_b_0,…,b_h-1 on the path from the root to message m_x, the new commitment [g'(X)] and the new opening proof π' for the polynomial h'(X)-g'(X) at index t'= H(r',[g'(X)]).
Message updates change the commitments at the inner nodes, which in turn results in new polynomials f_b_0,…,b_j, j ∈ [h].
Suppose each polynomial f_b_0,…,b_j, j ∈ [h], is updated so that
f'_{b_0,…,b_j}(X) = f_{b_0,…,b_j}(X) + ∑_{i=0}^{c-1} (f'_{b_0,…,b_j}(i) − f_{b_0,…,b_j}(i)) · L_i(X),
where, by definition, f'_b_0,…,b_j(i)-f_b_0,…,b_j(i) = u'_b_0,…,b_j,i-u_b_0,…,b_j,i = H(C'_b_0,…,b_j,i)-H(C_b_0,…,b_j,i).
Then, given the new and the old commitments (C_b_0,…,b_j,i,C'_b_0,…,b_j,i) for i ∈ [c] and j ∈ [h], the table of Lagrange basis polynomials, and using the technique in Section <ref>, the new opening proofs π̃^f_b_0,…,b_j_b_j+1 after the message updates can be computed as follows for j ∈ [h]:
π̃^{f_{b_0,…,b_j}}_{b_{j+1}} = π^{f_{b_0,…,b_j}}_{b_{j+1}} · ∏_{i=0}^{c-1} [(L_i(X) − L_i(b_{j+1})) / (X − b_{j+1})]^{H(C'_{b_0,…,b_j,i}) − H(C_{b_0,…,b_j,i})},
where [L_i(X)-L_i(b_j+1)/X-b_j+1] is the opening proof of the Lagrange basis polynomial L_i(X) at index b_j+1.
Once the new opening proofs are found, the new commitment [g'(X)] and the new proof π' become
[g'(X)] = ∏_{j=0}^{h-1} (π̃^{f_{b_0,…,b_j}}_{b_{j+1}})^{r'^j},    π' = ∏_{j=0}^{h-1} (π̃^{f_{b_0,…,b_j}}_{b_{j+1}})^{r'^j/(t' − b_{j+1})},
where r'=H(C'_b_0,b_1,..,C'_b_0,…,b_h-1,u'_b_0,b_1,..,u'_b_0,…,b_h,b_1,..,b_h) and
t'=H(r',[g'(X)]).
Note that both r' and t' can be calculated by the user given the new KZG commitments C'_b_0,…,b_j,i for all i ∈ [c] and j ∈ [h].
Finally, to retrieve the new KZG commitments C'_b_0,…,b_j,i for all i ∈ [c] and j ∈ [h], the user inspects the commitments published as part of the update information U:
C'_{b_0,b_1,…,b_{j-1},i} ← U[b_0,b_1,…,b_{j-1},i] if U[b_0,b_1,…,b_{j-1},i] ≠ ⊥, and C'_{b_0,b_1,…,b_{j-1},i} ← C_{b_0,b_1,…,b_{j-1},i} otherwise, for all i ∈ [c] and j ∈ [h].
In Verkle trees, the user cannot calculate the effect of an updated message on an arbitrary inner node without the knowledge of the inner nodes on the path from the message to the target node.
For instance, suppose U[b_0,b_1,…,b_{j-1},i] = ⊥ for some i ∈ [c] and j ∈ [h], and the user wants to calculate the effect of an update from m_x to m'_x on C'_{b_0,…,b_{j-1},i}, where bin_c(x) = (b_1,…,b_{j-1},i,b̃_{j+1},…,b̃_h) and b̃_j = i.
Then, for each ℓ ∈ {j,…,h-1}, the user has to find
C'_{b_0,…,b̃_j,…,b̃_h} ← m'_x,
C'_{b_0,…,b̃_j,…,b̃_ℓ} ← C_{b_0,…,b̃_j,…,b̃_ℓ} · [L_{b̃_{ℓ+1}}]^{u'_{b_0,…,b̃_j,…,b̃_{ℓ+1}} − u_{b_0,…,b̃_j,…,b̃_{ℓ+1}}},
where C'_b_0,…,b̃_j,…,b̃_ℓ are the commitments on the path from the target commitment C_b_0,b_1,…,b_j-1,i to the message m_x.
Hence, the user has to know the original commitments on the path from the message to the target commitment, i.e., keep track of inner nodes, which contradicts the idea of stateless clients.
This shows the necessity of publishing all of the updated inner nodes as part of the update information.
§.§ Complexity
Suppose each KZG commitment is of size |G| and each hash H(C) of a KZG commitment, each inner node, has size |H|.
Then, updating a single message results in one update at each level of the Verkle tree and requires Θ(h|H|) group operations.
Thus, when k messages are updated, the new Verkle root can be found after Θ(kh|H|) group operations.
As U consists of the published KZG commitments at the inner nodes and their indices, |U| = Θ(k log_c(N)(log(N)+|G|)) = Θ̃(k)|G|, which implies g_1(k) = k.
The user can replace each KZG commitment at the children of the inner nodes from the root to its message in Θ(log(N)+|G|) time by running a binary search algorithm over U.
Since there are ch such commitments to be updated, i.e., C_b_0,…,b_j,i, i ∈ [c] and j ∈ [h], updating these commitments takes Θ(c h (log(N)+|G|)) = Θ̃(1) time.
Upon obtaining the new commitments C'_b_0,…,b_j-1,i, i ∈ [c], j ∈ [h], with access to the table of Lagrange basis polynomials, the user can update each opening proof π_b_j+1 (for the function f_b_0,…,b_j),
j ∈ [h], with Θ(c|H|) group operations.
Since there are h such proofs, updating them all requires Θ(c h |H|) group operations.
Given the new proofs, computing the new commitment [g'(X)] and proof π' requires Θ(h |H|) group operations.
This makes the total complexity of updating a Verkle proof Θ(c h + 2 h) |H| T_G + Θ(c h (log_c(N)+|G|)).
For a constant c and h = log_c(N), this implies a worst-case time complexity of Θ̃(1) |H| T_G for Verkle proof updates, i.e., g_2(k) = 1.
§.§ A Concrete Evaluation
We now estimate the size of the update information and the number of group operations to update an opening proof after observing an Ethereum block consisting of ERC20 token transfers.
As in Section <ref>, suppose the block has the target size of 15 million gas <cit.>, and each token transfer updates the balance of two distinct accounts stored at separate leaves of the Verkle tree.
Then, there are ∼ 230 such transactions in the block, and the block updates k = 460 accounts.
We assume that the Verkle tree has degree 256 (e.g., <cit.>) and commits to 256^3 accounts as in Section <ref>.
Then, each proof consists of 2 KZG commitments, C_ε,b_1 and C_ε,b_1,b_2, and a multiproof consisting of the commitment [g(X)] and proof π'.
These components are elements of the pairing-friendly elliptic curve BLS12_381 and consist of |G| = 48 bytes <cit.>.
This implies a proof size of (log_c(N)+1)|G| = 192 bytes (excluding the message at the leaf and its hash value; adding those makes it 272 bytes).
When 460 accounts are updated, in the worst-case, the update information has to contain k log_c(N) (log(N)+|G|) = 460 × 3 × (24+48) Bytes, , 99.4 kBytes.
This is comparable to the size of the Ethereum blocks, which are typically below 125 kBytes <cit.>.
Hence, even though the update information of Verkle trees is linear in k, it does not introduce a large overhead beyond the block data.
Note that the runtime of the proof updates are constant and do not scale in the number of updated messages k, or the Ethereum block size.
On the other hand, in the worst case, an opening proof can be updated after c log_c(N) |H| + 2 log_c(N) |H| group operations.
Then, with |H| = 256, the number of bits output by SHA256, as many as c log_c(N) |H| + 2 log_c(N) |H| = (c + 2) log_c(N) |H| = 774 × 256 ≈ 200,000 elliptic curve multiplications might have to be made.
Following the benchmarks published in <cit.> for the specified curve, these operations can take up to (c + 2) log_c(N) × 0.000665471 s ≈ 0.52 seconds on commodity hardware, given a runtime of 665,471 nanoseconds per exponentiation of a group element with a message hash value.
This is again comparable to the 12 second inter-arrival time of Ethereum blocks.
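The arithmetic behind these estimates can be reproduced directly from the stated formulas; the snippet below uses c = 256, N = 256^3, k = 460, |G| = 48 bytes, |H| = 256 bits and the 665,471 ns per-exponentiation figure quoted above.

```python
import math

c, N, k = 256, 256 ** 3, 460
G_bytes, H_bits = 48, 256
h = round(math.log(N, c))                     # 3 (N is an exact power of c)
logN = int(math.log2(N))                      # 24

proof_bytes = (h + 1) * G_bytes               # path commitments plus the multiproof
U_bytes = k * h * (logN + G_bytes)            # h updated nodes per account: index + commitment
group_mults = (c + 2) * h * H_bits            # worst-case multiplications per proof update
update_secs = (c + 2) * h * 665_471e-9        # ~665,471 ns per exponentiation

print(proof_bytes, U_bytes, group_mults, round(update_secs, 2))
assert proof_bytes == 192                     # bytes
assert U_bytes == 99_360                      # ~99.4 kBytes
assert group_mults == 198_144                 # ~200,000
assert round(update_secs, 2) == 0.52          # seconds
```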
Table <ref> compares the Verkle proof size |π| = (log_c(N)+1) |G|, update information size |U| = k log_c(N) (log(N)+|G|), the upper bound (c + 2) log_c(N) |H| on the number of group operations needed for a single proof update and the estimated time it takes to do these operations on commodity hardware for different values of c, the Verkle tree degree, while keeping the number of accounts and the updated accounts fixed at 2^24 and 460 respectively.
The table shows the trade-off between the Verkle proof and update information size on one side and update complexity on the other.
Comparing Table <ref> with Table <ref> shows that the Verkle tree with any given degree c, 1 < c ≤ 256, significantly outperforms the existing homomorphic Merkle trees in Section <ref> in terms of almost all of proof size, update information size and proof update time.
§ LOWER BOUND
Finally, we prove the optimality of our VC scheme with sublinear update by proving a lower bound on the size of the update information given an upper bound on the complexity of proof updates.
The lower bound is shown for VCs that satisfy the following property.
It formalizes the observation that for many dynamic VCs (, Merkle trees <cit.>, Verkle trees <cit.>, KZG commitments <cit.>, RSA based VCs <cit.>), the opening proof for a message at some index can often act as a commitment to the vector of the remaining messages.
A VC scheme is said to be proof-binding if the following probability is negligible in λ for all PPT adversaries 𝒜:
Pr[ Verify_pp(C, m_{i^*}, i^*, π) = 1 ∧ Verify_pp(C', m_{i^*}, i^*, π) = 1 :
    pp ← KeyGen(1^λ, N);
    (π, m_{i^*}, i^*, (m_0, …, m_{i^*-1}, m_{i^*+1}, …, m_{N-1}), (m'_0, …, m'_{i^*-1}, m'_{i^*+1}, …, m'_{N-1})) ← 𝒜(pp);
    (m_0, …, m_{i^*-1}, m_{i^*+1}, …, m_{N-1}) ≠ (m'_0, …, m'_{i^*-1}, m'_{i^*+1}, …, m'_{N-1});
    Commit_pp(m_0, …, m_{i^*-1}, m_{i^*}, m_{i^*+1}, …, m_{N-1}) = C;
    Commit_pp(m'_0, …, m'_{i^*-1}, m_{i^*}, m'_{i^*+1}, …, m'_{N-1}) = C' ]
To prove the lower bound, we first show that the proof-binding property implies that (i^*, m_{i^*}, π) is a binding commitment to the rest of the vector.
Consider a dynamic and proof-binding VC, where π is the correctly generated opening proof for the message m_i at some index i.
Then, for any i ∈ [N], it holds that the tuple (i, m_i, π) is a binding commitment to the vector of messages m_j, j ∈ [N], j ≠ i.
Since the VC is proof-binding, with overwhelming probability, no PPT adversary 𝒜 can find an opening proof π^*, an index i^*, a message m^* and two sequences of messages
such that
(m_1, …, m_{i^*-1}, m_{i^*+1}, …, m_{N-1}) ≠ (m'_1, …, m'_{i^*-1}, m'_{i^*+1}, …, m'_{N-1})
and Verify_pp(C, m_{i^*}, i^*, π) = Verify_pp(C', m_{i^*}, i^*, π) = 1,
where C and C' are commitments to the message sequences (m_1, …, m_i^*-1, m_i^*, m_i^*+1, …, m_N-1) and (m'_1, …, m'_i^*-1, m_i^*, m'_i^*+1, …, m'_N-1).
Thus, it holds that the tuple (i, m_i, π) is a binding commitment to the vector of messages m_j, j ∈ [N], j ≠ i, with the following new commitment function:
NewCommit_pp((m_j)_{j ∈ [N], j ≠ i}) = (i, m_i, Open_pp(m_i, i, aux)),
where aux is the auxiliary data output by Commit_pp(m_0, …, m_N-1).
The following lemma shows that all randomized VCs can be derandomized to obtain a deterministic and secure VC as we do not use hiding commitments in this work.
Consider a VC Γ, where the commitment is a random function of the public parameters pp and the committed messages.
Let Γ' denote the VC that is the same as Γ, except that the randomness is fixed.
Then, Γ' is a correct and secure VC with at most the same upper bound on the error probability.
Let R denote the sequence of bits sampled uniformly at random from the set ℛ to instantiate the VC Γ.
Since Γ is binding, no PPT adversary 𝒜 can find two different sequences of messages 𝐦 and 𝐦' such that Γ(𝐦, R) = Γ(𝐦', R') for some R, R' ∈ ℛ, except with negligible probability.
This implies that for any fixed R^* ∈ ℛ, no PPT adversary 𝒜 can find two different sequences of messages 𝐦 and 𝐦' such that Γ(𝐦, R^*) = Γ(𝐦', R^*), except with negligible probability.
Hence, the commitment scheme Γ'(.) = Γ(., R^*) is a position-binding, i.e., secure VC.
Its correctness follows from the correctness of Γ.
Finally, equipped with Lemmas <ref> and <ref>, we can prove the following lower bound for dynamic and VCs.
Consider a dynamic VC satisfying the property above such that for every PPT adversary 𝒜, it holds that
Pr[ Verify_pp(C, m, i, π_i) = 1 ∧ Verify_pp(C, m', i, π'_i) = 1 ∧ m ≠ m' : pp ← KeyGen(1^λ, N); (C, m, m', π_i, π'_i) ← 𝒜(pp) ] ≤ e^{-Ω(λ)}.
Then, for this VC, if g_2(k) = O(k^{1-ν}), then g_1(k) = Ω(k^ν), for all ν ∈ (0,1).
Suppose the messages m_{i_j}, j ∈ [k], are updated to m'_{i_j}.
Define 𝒮 as the sequence (m'_{i_j})_{j ∈ [k]}, and let m'_i = m_i for i ∉ {i_j : j ∈ [k]}.
Let 𝒫_i, i ∈ [N], denote the user that holds the opening proof π_i for the message m_i at index i, and aims to calculate the new proof π'_i for the message m'_i using π_i, the update information U and the old and the new sequences of messages m_i, m'_i, i ∈ [N].
Suppose g_2 = O(k^1-ν).
Then, there exists a constant α such that each user can read at most α k^1-ν of the updated messages while updating its opening proof.
Let 𝒮̄_i ⊆ (m'_{i_j})_{j ∈ [k]} denote the sequence of updated messages and their indices that were not observed by 𝒫_i, and let 𝒮_i = 𝒮 ∖ 𝒮̄_i denote the sequence read by 𝒫_i.
Here, |𝒮| denotes the number of messages within the sequence 𝒮.
Since 𝒫_i is assumed to know m'_i, it must be that m'_i ∈𝒮_i.
We next show that each user 𝒫_i that successfully updates its opening proof must download enough bits of U to generate a binding, deterministic commitment to the set 𝒮̄_i.
By Lemma <ref>, the tuple (i, m'_i, π'_i) is a binding commitment to the sequence of messages (m'_j)_j ∈ [N], j ≠ i.
This implies that the tuple (i, 𝒮_i, π'_i) is a binding commitment to the unobserved sequence 𝒮̄_i.
By Lemma <ref>, the commitment (i, 𝒮_i, π'_i) can be de-randomized to obtain a deterministic commitment C_i to the sequence 𝒮̄_i (with at most the same upper bound on the error probability).
Let Commit^det denote the deterministic VC scheme such that C_i = Commit^det(𝒮̄_i).
Since Commit^det is a deterministic function given the public parameters, and the updated messages are sampled independently and uniformly at random, it holds that I(𝒮̄_i; {m_i}_{i ∈ [N]}, 𝒮_i | pp) = 0, where I(·;·) denotes mutual information.
Moreover, as π_i is a function of the old messages {m_i}_i ∈ N and the randomness of the original VC, I(C_i; π_i|pp) = 0.
Hence, C_i = f(U, i, {m_i}_i ∈ N, π) is a deterministic function of the update information U.
For all i ∈ [k], it holds that |𝒮̄_i| ≥ k - αk^{1-ν} and m'_i ∉ 𝒮̄_i.
Given these constraints, the minimum number of distinct sequences 𝒮̄_i is k/(αk^{1-ν}) = k^ν/α.
For an appropriately selected constant β that will be defined later, without loss of generality, let 𝒮̄_0, …, 𝒮̄_{M-1} denote the first
M = min( ⌊ k^ν/β - α/β - λ/(βk^{1-ν}) ⌋, k^ν/α )
distinct sequences.
Since C_i is a deterministic function of U for all i ∈ [N], the Shannon entropy H(·) of U satisfies
H(U) ≥ H(C_0, …, C_{M-1}) ≥ H(C_0) + ∑_{i=1}^{M-1} H(C_i | C_0, …, C_{i-1}).
As g_2(k) = O(k^1-ν), there exists a constant β such that each user can download at most β k^1-ν bits of data from U.
Then, for all i ∈ [k], it must be that H(C_i) ≤ βk^{1-ν}, since C_i is a deterministic function of the at most βk^{1-ν} bits of U downloaded by 𝒫_i.
Finally, we show that H(C_0), H(C_i | C_0, …, C_i-1) = Ω(λ) for all i=1, …, M-1.
Towards a contradiction, suppose there exists i^* such that H(C_{i^*} | C_0, …, C_{i^*-1}) = o(λ).
Note that
H(C_0, …, C_{i^*-1}) ≤ ∑_{i=0}^{M-1} H(C_i) ≤ min( k^ν/β - α/β - λ/(βk^{1-ν}), k^ν/α ) · βk^{1-ν} ≤ k - αk^{1-ν} - λ.
Now, consider an adversary 𝒜 that tries to break the binding property of the deterministic VC scheme Commit^det.
Due to the upper bound on the entropy of (C_0, …, C_{i^*-1}), it holds that
H(𝒮̄_{i^*} | C_0, …, C_{i^*-1}) ≥ λ,
since H(𝒮̄_{i^*}) ≥ k - αk^{1-ν} and
H(𝒮̄_{i^*}) - H(𝒮̄_{i^*} | C_0, …, C_{i^*-1}) = I(𝒮̄_{i^*}; (C_0, …, C_{i^*-1})) ≤ H(C_0, …, C_{i^*-1}) ≤ k - αk^{1-ν} - λ.
However, when H(C_{i^*} | C_0, …, C_{i^*-1}) = o(λ), for sufficiently large λ and given (C_0, …, C_{i^*-1}), the adversary can find a collision Commit^det(𝒮̄_{i^*}) = Commit^det(𝒮̄'_{i^*}) for two sequences 𝒮̄_{i^*} ≠ 𝒮̄'_{i^*} with probability 2^{-o(λ)}.
As this is a contradiction, it must be that H(C_0) = Ω(λ) and H(C_i | C_0, …, C_{i-1}) = Ω(λ) for all i < M; thus, H(U) = Ω(k^ν λ) and g_1(k) = Ω(k^ν).
Theorem <ref> shows that the update information length scales as Θ(k^νλ) when the runtime complexity for proof updates is Θ(k^1-ν) and the error probability for the security of the VC is e^-Ω(λ) for a PPT adversary.
When the error probability is only stated to be negligible in λ, the same proof can be used to show that the update information length must scale as Ω(k^ν f(λ)) for any function f that is polynomial in log(λ).
§ CONCLUSION
Dynamic VCs with sublinear update are the key to reducing the size of the global update information while minimizing the runtime of clients synchronizing with the latest commitment.
In this work, we propose a construction that can achieve an update information size of Θ(k^ν) and a proof update time of Θ(k^1-ν) in the number of changed messages k.
Our construction combines a novel update algorithm (Alg. <ref>) with homomorphic Merkle trees <cit.> that allow each inner node to be expressed as a linear function of the underlying messages.
It achieves the smallest asymptotic complexity for the update information size and proof update time.
We also provide update algorithms for the Verkle trees proposed for stateless clients on Ethereum.
The existing instantiations of homomorphic Merkle trees are based on lattices and require relatively large parameters for security.
Consequently, despite the appealing asymptotic complexity of our construction,
its performance for concrete parameters is dominated by Verkle trees.
As such, designing asymptotically optimal and practically efficient dynamic VCs remains an open problem.
An interesting direction is to design a more performant homomorphic Merkle tree system.
Acknowledgments.
This work was partially funded by NSF, DARPA, the Simons Foundation, and NTT Research.
Additional support was provided by the Stanford Center for Blockchain Research.
Opinions, findings, and conclusions or recommendations
expressed in this material are those of the authors and do not
necessarily reflect the views of DARPA.
§ LOWER BOUND ON THE SIZE OF THE UPDATE INFORMATION
Consider a dynamic accumulator, where k out of N messages m_{i_j} are updated to m'_{i_j} ≠ m_{i_j}, j ∈ [k].
Suppose |ℳ| = poly(λ).
Then, Ω(k (log(N|ℳ|))) bits of information must be published to enable updating the opening proofs after these k updates.
The proof idea is very similar to those presented in <cit.>.
Namely, the update information must contain a minimum amount of bits for the VC to remain correct and secure after the update.
Consider a game between a platform 𝒫 maintaining the data structures of the VC and an adversary 𝒜.
The platform 𝒫 updates k out of N messages m_{i_j} to m'_{i_j} ≠ m_{i_j}, j ∈ [k], in a way not known to user 𝒜, and publishes the update information U along with the new commitment value C' (let m'_i = m_i for i ∉ {i_j : j ∈ [k]}).
Before receiving the update information, 𝒜 knows the old sequence of messages m_i, i ∈ [N], and their opening proofs π_i.
Upon receiving the update information, 𝒜 updates the opening proofs for each message to π'_i.
Then, it must be that for all j ∈ [k], Verify_pp(C', m'_{i_j}, i_j, π'_{i_j}) = 1, and for all i ∉ {i_j : j ∈ [k]}, Verify_pp(C', m_i, i, π'_i) = 1.
Otherwise there would be messages among m'_i, i ∈ [N], for which an updated witness cannot be computed, violating correctness.
Similarly, for all j ∈ [k], Verify_pp(C', m̃, i_j, π'_{i_j}) = 0 for any m̃ ≠ m'_{i_j}; otherwise the position-binding property, and thus security, would be violated.
Hence, by calling the function Verify_pp(C', m_i, i, π'_i) for each index i, 𝒜 can figure out the indices i_j, j ∈ [k], where the messages were updated.
Similarly, by evaluating the function Verify_pp(C', m̃, i_j, π'_{i_j}) for the |ℳ| possible messages m̃ for each j ∈ [k], 𝒜 can identify the new value m'_{i_j} of the message at each such index i_j.
Hence, the adversary can recover the sequence (i_j, m'_{i_j})_{j ∈ [k]}.
As there are N!/(N-k)! · |ℳ|^k possible such sequences, it holds that
|U| ≥ log( N!/(N-k)! · |ℳ|^k ) = Ω(k(log N + log|ℳ|)).
When |ℳ| is superpolynomial in λ, the minimum number of bits to be published depends on the error probability for security against PPT adversaries.
As in Remark <ref>, if |ℳ| = Θ(2^λ) and the error probability is e^-Ω(λ), then the same proof can be used to show that Ω(kλ) bits of information must be published.
When the error probability is just stated to be negligible in λ, then the number of bits must scale as Ω(kλ) for any polynomial function of log(λ).
§ HOMOMORPHIC MERKLE TREES WITH NO UPDATE INFORMATION
§.§ Update Information
When k messages are updated, the new commitment, i.e., the new Merkle root, can be calculated just like any other inner node, by incorporating the effect of the old and the new messages {(i_j, m'_{i_j} - m_{i_j})}_{j ∈ [k]}:
C' = u'_{b_0} = u_{b_0} + ∑_{j ∈ [k]} ( h_{i_j,0}(m'_{i_j}) - h_{i_j,0}(m_{i_j}) ) = C + ∑_{j ∈ [k]} sign(m'_{i_j} - m_{i_j}) · h_{i_j,0}(|m'_{i_j} - m_{i_j}|)
As in KZG commitments, the update information is U=∅.
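To make the bookkeeping concrete, here is a minimal Python sketch of this update rule. The partial digest used below is a toy additive stand-in of our own choosing (a keyed modular linear map), not the lattice-based hash of the actual constructions; only its linearity in the message matters for illustrating the update.

P = 2**61 - 1   # toy modulus, purely illustrative

def partial_digest(index: int, level: int, message: int) -> int:
    # Stand-in for h_{x,j}: linear in `message`, keyed by (index, level).
    key = hash((index, level)) % P
    return (key * message) % P

def update_root(C: int, updates) -> int:
    # C' = C + sum_j (h_{i_j,0}(m'_{i_j}) - h_{i_j,0}(m_{i_j}));
    # `updates` holds tuples (i_j, old_message, new_message).
    for i, m_old, m_new in updates:
        C = (C + partial_digest(i, 0, m_new) - partial_digest(i, 0, m_old)) % P
    return C

The same additive pattern applies to every inner node on a proof path, which is what the next subsection exploits.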
§.§ Proof Update
When the messages are modified at k points, each user holding a Merkle proof π_x for index x can calculate the new values of the inner nodes within the proof using the old and the new messages, and modify the proof accordingly.
§.§ Complexity
Calculating each partial digest h_x,j takes at most logN function evaluations.
Then, each user can update each inner node within its Merkle proof after at most k log(N) operations, making the total number of operations Θ(k log^2 N), i.e., Θ(k) up to logarithmic factors, in the worst case.
The size of the update information U is Θ(1).
Hence, this scheme matches the algebraic VCs in terms of complexity.
§ WHY ARE HOMOMORPHIC MERKLE TREES NEEDED?
Merkle trees based on SHA256 can also achieve complexity sublinear in k, for both the update information and the runtime of proof updates, if the users have access to the old messages and inner nodes of the Merkle tree.
In this case, homomorphism is not needed since the nodes can find the effect of the updated messages on the inner nodes within their Merkle proofs by hashing these messages together with the old inner nodes.
However, this is possible for only a single batch of updates.
Indeed, if this scheme is to be repeated, the assumption of having access to the old inner nodes requires the users to keep track of changes throughout the Merkle tree, by calculating the effect of all updated messages on all inner nodes.
This implies a runtime linear in k per proof update.
In contrast, homomorphic Merkle trees can maintain a sublinear complexity for future proof updates since they do not require access to the old messages and inner nodes for finding the partial digests of the updated messages.
§ AN ALTERNATIVE CONSTRUCTION
An alternative tree-based VC is proposed by <cit.>, where each inner node is itself a lattice-based VC to its children (akin to Verkle trees <cit.>).
Opening proof for a message consists of the inner nodes (commitments) on the path from the message to the root, along with the opening proofs for these inner nodes with respect to their parent nodes.
The construction again enables expressing each inner node as a sum of partial digests of the messages underneath.
Using the public parameters and the updated inner nodes, users can then derive their updated opening proofs at different heights of the tree.
This construction supports trees of large degrees c without a linear increase in the proof size as would be the case for Merkle trees; this however comes at the cost of a larger runtime complexity for proof updates, proportional to the degree.
Section <ref> describes similar steps in the context of Verkle trees, and exposes the dependence of the runtime complexity of proof updates on the tree degree c.
§ DERIVATION OF THE OPENING PROOF Π^(𝐡-𝐠)_𝐭
Since
h(X) - g(X) = ∑_{j=0}^{h-1} r^j ( f_{b_0,…,b_j}(X)/(t - b_{j+1}) - (f_{b_0,…,b_j}(X) - u_{b_0,…,b_{j+1}})/(X - b_{j+1}) )
= ∑_{j=0}^{h-1} r^j ( (X - t) f_{b_0,…,b_j}(X) + u_{b_0,…,b_{j+1}}(t - b_{j+1}) ) / ( (t - b_{j+1})(X - b_{j+1}) ),
the opening proof π=π^(h-g)_t for index t within the polynomial h(X)-g(X) is
π = [ ( h(X) - g(X) - (h(t) - g(t)) ) / (X - t) ]
= [ ∑_{j=0}^{h-1} ( r^j/(X - t) ) ( ( (X - t) f_{b_0,…,b_j}(X) + u_{b_0,…,b_{j+1}}(t - b_{j+1}) ) / ( (t - b_{j+1})(X - b_{j+1}) ) - u_{b_0,…,b_{j+1}}/(t - b_{j+1}) ) ]
= [ ∑_{j=0}^{h-1} ( r^j/(t - b_{j+1}) ) ( f_{b_0,…,b_j}(X) - u_{b_0,…,b_{j+1}} ) / (X - b_{j+1}) ]
= ∏_{j=0}^{h-1} [ ( f_{b_0,…,b_j}(X) - u_{b_0,…,b_{j+1}} ) / (X - b_{j+1}) ]^{ r^j/(t - b_{j+1}) } = ∏_{j=0}^{h-1} ( π^{f_{b_0,…,b_j}}_{b_{j+1}} )^{ r^j/(t - b_{j+1}) }
§ UPDATE AND PROOF UPDATE ALGORITHMS FOR KZG COMMITMENTS AND MERKLE TREES
|
http://arxiv.org/abs/2307.06279v1 | 20230709050025 | SpreadNUTS -- Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions | ["Fareed Sheriff"] | stat.CO | ["stat.CO", "cs.LG"] |
SpreadNUTS — Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions
Fareed Sheriff
May 17, 2023
============================================================================================
§ INTRODUCTION & PRIOR WORK
Markov chain Monte Carlo (MCMC) methods have existed for a long time and the field is well-explored. The purpose of MCMC methods is to approximate a distribution through repeated sampling; most MCMC algorithms exhibit asymptotically optimal behavior in that they converge to the true distribution in the limit. However, what differentiates these algorithms is their practical convergence guarantees and efficiency. While a sampler may eventually approximate a distribution well, because it is used in the real world, it is necessary that the point at which the sampler yields a good estimate of the distribution is reachable in a reasonable amount of time. Similarly, if it is computationally difficult or intractable to produce good samples from a distribution for use in estimation, then there is no real-world utility afforded by the sampler. Thus, most MCMC methods these days focus on improving efficiency and speeding up convergence.
We present a cursory overview of popular MCMC techniques. Random-walk Metropolis-Hastings is a rudimentary algorithm for sampling from a distribution by inducing a Markov chain on repeated samples: the next sample is chosen through a draw from the sampling distribution that takes the current sample as a parameter. However, as the name suggests, this exhibits strong random walk behavior, making it undesirable practically due to the possibly long burn-in period and large number of samples needed to thoroughly explore the distribution space. In fact, many MCMC algorithms suffer from random walk behavior and often only mitigate such behavior as outright erasing random walks is difficult. Hamiltonian Monte Carlo (HMC) is a class of MCMC methods that theoretically exhibit no random walk behavior because of properties related to Hamiltonian dynamics. This paper introduces modifications to a specific HMC algorithm known as the no-U-turn sampler (NUTS) that aims to explore the sample space faster than NUTS, yielding a sampler that has faster convergence to the true distribution than NUTS.
§.§ Hamiltonian/Hybrid Monte Carlo
[This subsection summarizes relevant parts of <cit.>]
Hamiltonian dynamics work on a system of position-momentum pairs (p,q) subject to Hamilton's equations
dq_i/dt = ∂ H/∂ p_i, dp_i/dt = -∂ H/∂ q_i
where p, q are vector-valued functions of time over a d-dimensional space and H(q,p) is the Hamiltonian, which represents the system's total energy. We assume for HMC that the Hamiltonian decomposes into the system's potential and kinetic energies, H(q,p) = U(q) + K(p). We also define, for HMC, U(q) to be the negative of the log density of q up to a constant, and K(p) = (1/2) p^T M^{-1} p to be the negative of the log density of the Gaussian with zero mean and covariance matrix M (often, the Gaussians will be uncorrelated, so M will be diagonal), also up to a constant. We thus rewrite Hamilton's equations as
dq_i/dt = (M^-1p)_i, dp_i/dt = - ∂ U/∂ q_i
As with MCMC methods as a whole, the Hamiltonian is (time-)reversible and is invariant under Hamilton's equations, meaning the acceptance probability is 1. In practice, it is close to 1 because we cannot practically make the Hamiltonian invariant when solving Hamilton's equations due to error accumulated when solving the PDEs numerically.
To numerically solve the PDEs, we use a symplectic integrator, which preserves the Hamiltonian's invariance under integration of Hamilton's equations. A commonly-used symplectic integrator is the leapfrog integrator, which makes use of a "halfstep" in the integration process to better inform the estimate of the Hamiltonian in the next timestep. The equations that govern the leapfrog integrator are as follows with stepsize :
p_i(t+2) = p_i(t)- /2∂ U/∂ q_iq(t)
q_i(t+) = q_i(t) + p_i(t+2)/m_i
p_i(t+) = p_i(t+2) - /2∂ U/∂ q_i q(t+)
In effect, we compute an estimate of p at t+2, estimate q using this estiamte of p, then again estimate p using the estimate of q at t+, thus taking into account the estimate of p at t+2 and p's relationship with q.
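As a concrete reference, a direct NumPy transcription of these three updates might look as follows; this is a sketch, and the helper name grad_U (returning ∂U/∂q) and the unit mass matrix are our own assumptions.

import numpy as np

def leapfrog(q, p, grad_U, eps, m=1.0):
    # One leapfrog step of size eps for H(q, p) = U(q) + sum_i p_i^2 / (2 m_i).
    p = p - 0.5 * eps * grad_U(q)   # half step for the momentum
    q = q + eps * p / m             # full step for the position
    p = p - 0.5 * eps * grad_U(q)   # second half step for the momentum
    return q, p

# Example: standard normal target, so U(q) = q.q/2 and grad_U(q) = q.
q, p = np.zeros(2), np.random.randn(2)
for _ in range(10):
    q, p = leapfrog(q, p, grad_U=lambda x: x, eps=0.1)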
HMC samples from continuous distributions on ℝ^d with well-defined densities and partial derivatives of the log densities. We define the joint distribution P of (q,p) induced by the Hamiltonian H to be
P(q,p) = (1/Z) e^{-H(q,p)/T}
for positive constants Z and T. Then,
H(q,p) = U(q) + K(p) → P(q,p) = (1/Z) e^{-U(q)/T} e^{-K(p)/T}.
We choose U(q) to be -log π(q) for the distribution π from which we are trying to sample. The distribution of K(p) is independent of q, but it is common to use a quadratic such as K(p) = p^T M^{-1} p / 2. For diagonal M, this yields K(p) = ∑_i p_i^2/(2m_i).
HMC works in two steps. The first step draws a value for the momentum p from the zero-centered Gaussian with covariance matrix M. The second step conducts a Metropolis update using the Hamiltonian. Using a stepsize of ε for L steps, a trajectory of samples is calculated, and its endpoint (q^*, p^*) is accepted with probability
min( 1, exp( H(q,p) - H(q^*,p^*) ) ), where H(q,p) - H(q^*,p^*) = U(q) - U(q^*) + K(p) - K(p^*),
which works exactly because the Hamiltonian is time-reversible.
Practical considerations to take into account when implementing HMC include varying ε and L. Note, however, that HMC requires adjustment/setting of the parameters ε, L.
§ NO-U-TURN SAMPLING
One of the few but biggest problems with HMC <cit.> is the necessity to tune ε and L — without proper tuning, we lose many of the efficiency guarantees of HMC. No-U-turn sampling (NUTS) <cit.> aims to alleviate some of these problems. NUTS is a type of HMC algorithm that does not calculate the trajectory for a constant L steps and instead stops the trajectory when sufficient error or explored space has been accumulated. Furthermore, it tunes ε dynamically, making NUTS an effectively parameterless version of HMC.
NUTS replaces a constant L by stopping the trajectory once some condition has been triggered. This condition checks that the distance between the proposal q^* and the initial q will not continue to increase. We can check this by taking the product of the momentum and the difference between the sampled proposal and the initial point, (q^* - q) · p^* (the U-turn condition), noting that if it is negative, then the direction of our next step will be toward already-sampled points. Because this does not maintain time-reversibility, NUTS runs the Hamiltonian both forward and backward with equal probability and calculates the U-turn condition between the endpoints of the extension of the trajectory generated in the current iteration, checking that it is nonnegative. NUTS generates the trajectory through a doubling scheme that randomly chooses a direction (forward or backward in time), then on the ith iteration of generating this trajectory takes 2^i timesteps in the chosen direction, adding the calculated points to the current trajectory. A sample is then chosen from this trajectory as follows: first, a rejection (energy) threshold u is drawn uniformly from [0, P(q,p)] = [0, e^{-H(q,p)}]; the trajectory is extended forward and backward in time repeatedly as above; finally, a point is selected uniformly at random from the resulting "tree" of points (the trajectory).
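The stopping criterion itself is just a pair of inner products. A sketch of the check, applied to the backward-most and forward-most states of the current trajectory (variable names are ours), is:

import numpy as np

def no_u_turn(q_minus, p_minus, q_plus, p_plus):
    # True while both ends of the trajectory keep moving apart; doubling is
    # stopped once (q_plus - q_minus) . p turns negative at either endpoint,
    # i.e. once further steps would move back toward already-sampled points.
    dq = q_plus - q_minus
    return np.dot(dq, p_minus) >= 0 and np.dot(dq, p_plus) >= 0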
§ MODERATE DYNAMIC EXTENSION OF PATHS
We consider two additions to the NUTS scheme: relaxing the U-turn condition checks on the induced binary tree of the generated trajectory, and increasing the size of the trajectory by more than double every iteration. Our reasoning behind both of these ideas is that the number of U-turn condition checks on the subtrees of the subtrajectory created by the doubling process in NUTS adds excessive (and potentially underjustified) overhead when checking that the U-turn condition is not violated between the two leaves on the edge of each subtree. This overhead is linear in the number of generated points. While it is stated that "except for very simple models with very little data, the costs of these inner products should be negligible compared to the cost of computing gradients" <cit.> (in reference to the inner products calculated when evaluating the U-turn condition), such a rigorous check can in and of itself be counterproductive and could risk cutting off the trajectory being generated before it has sufficiently explored the space around it. This is because while the U-turn condition checks whether the trajectory turns back on itself, if we check for violation between many pairs of points, adjacent or not, this degenerates into a check that the trajectory is always pointing in the direction of unexplored space.
However, this is not a very useful condition to force because we could have a trajectory that moves backward a tiny bit but later continues to move away from previously-explored points, thus exhibiting a general trend toward unexplored space. While we agree that no violation of the U-turn condition should occur between the first few points on the path, we note that as long as the general trend of the path does not violate the U-turn condition, the path contributes to exploring space. We thus strike a compromise: we relax the U-turn condition checks on the balanced tree built on each iteration's points by continuing to check that the U-turn condition is not violated between the leaves on the edge of each subtree of the tree built on each iteration's points, but we now build a k-ary tree on the calculated points instead of a binary tree, where k is the iteration number. This both decreases the number of U-turn condition checks and iteratively relaxes the strictness of the U-turn violation penalty as more points are generated.
Specifically, instead of doubling the tree by adding 2^k points to the end of our path in direction d ∼ {-1,1}, we add k^k points and check the U-turn condition fewer times on these points: where we would check the U-turn condition around 2^{k log_2 k} times on these k^k points, we now check the condition (k^k - 1)/(k - 1) ≈ k^{k-1} = 2^{(k-1) log_2 k} times, which is less than 2^{k log_2 k} by a multiplicative factor of k (a factor that grows asymptotically).
§ PARTITIONING VISITED REGIONS
To prevent ourselves from exploring parts of the distribution that we have already explored, when sampling from the generated trajectory, we bias our selection toward points the space around which we have not already explored. This still satisfies detailed balance because the probability of having already chosen a point from some subspace of the distribution is uniform across all subspaces. Thus, we still have the same convergence guarantees as NUTS. However, we attempt to sample the distribution in a more "spread out" manner by exploring unexplored parts of the trajectory (which itself maintains the invariant of a fixed density) so in the end we still sample in accordance with the distribution's density but with regularization that enforces exploring unexplored parts of the space.
We can keep track of how much space we have explored close to a datapoint using any type of querying data structure that allows us to calculate some measure of how explored the space around a given point is (for example, a multidimensional Gaussian convolved with all previously-sampled points). For the sake of example and efficiency, we consider a k-dimensional binary search tree T on all sampled points that allows us to find the closest point in average-case O(log n) time, with insertion also taking O(log n).
Our metric d_p for how much space has been explored near a given point p will be the squared L_2 distance between p and its closest neighbor in T (the sum of squares of coordinate differences).
We then define the probability of choosing p to be proportional to d_p relative to the metric on all other points of the trajectory, so that the probability we select p from trajectory t = (p_0, ⋯, p_k) equals
d_p / ∑_{p_i ∈ t} d_{p_i}. We can then choose a point by allocating a proportion of a uniform r.v. to each point and sampling from this uniform to select the point. This is efficient, and so the entire procedure allows us to regularize toward sampling the distribution thoroughly while maintaining sampling by density, at the cost of a multiplicative O(log n) factor in the sampling process.
§ RESULTS
We discuss our testing regime in more detail: we randomly generate mixtures of multivariate Gaussians, which we use to compare how well regular NUTS samples compared to the modified NUTS algorithm presented in this paper by comparing the empirical distributions of each algorithm with the true distribution of the mixtures using a sort of discretized total variation metric. We refer to our algorithm as "SpreadNUTS" because it attempts to spread NUTS trajectories over the sample space to better leave less of the sample space unexplored.[Our code for SpreadNUTS is based on the code at <cit.>, and we test SpreadNUTS against this implementation of NUTS]
§.§ Testing Regime
We randomly select k Gaussian distributions where k is distributed over a discrete uniform that takes values from 1 to 4 (the choice of 5 is arbitrary). We choose the means of the distributions uniformly randomly from the interval [-20, 20] in each coordinate (this choice is also arbitrary); we choose the covariance matrix by generating a matrix whose entries are uniformly random over [0,1], multiplying it by its transpose (generating a valid correlation matrix), then multiplying by a draw from a uniform over the interval [0,4] (also arbitrary). This ensures the covariance matrix is positive semidefinite (and is also diagonally dominant). We also uniformly randomly choose a dimension for the Gaussians from 1 to 5. Finally, we generate mixture probabilities p⃗ such that the elementwise sum is 1 and each value is nonnegative by generating [0,1] entries, then dividing by the sum of these entries. While this does not yield a uniform distribution (the distribution is biased toward the uniform weight vector 𝟏/D, where D is the dimension and is chosen uniformly from 1 to 3 — the low upper bound on dimension is because for dimensions 4 or higher, regular NUTS tends to perform very slowly and it takes too much time to generate samples), this is okay for our purposes because we desire mixtures biased toward uniformly sampling from each vertex so there is sufficient density for sampling algorithms to actually sample from the Gaussians. This randomly generates Gaussian mixtures. Our choice of using Gaussian mixtures was arbitrary and based primarily on convenience of sampling through methods other than Monte Carlo.
We define our discretized total variation metric by randomly sampling from the Gaussian mixture (which we do by randomly sampling from each Gaussian, then choosing a number of samples from each collection of samples proportional to the probability of the Gaussian relative to the rest of the mixture). We then generate a relative empirical pdf by discretizing the interval from -⃗2⃗0⃗ to 2⃗0⃗ into 0.1-unit squares, calculating the proportion of samples in each square. Our discretized total variation metric m_TV is calculated by taking the absolute difference between the relative empirical pdfs of the samples generated from each algorithm and the relative empirical pdf generated by sampling directly from Gaussians weighted by the relative empirical pdf of the Gaussians. Our comparison between the two algorithms is done by both looking at both the ratio and actual values of m_TV between the algorithms and the mixture samples over choice of dimension. We also compare this with the m_TV between the Gaussian mixtures resampled again in order to obtain a means of roughly evaluating how well our algorithm performs both relative to NUTS and relative to a true sampler.
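A sketch of the discretized comparison is below; it bins both sample sets into 0.1-unit cells (stored sparsely, since a dense grid is infeasible in higher dimensions) and sums the absolute differences of the relative frequencies. The naming is ours.

import numpy as np
from collections import Counter

def discretized_tv(samples_a, samples_b, cell=0.1):
    # Sum of |relative-frequency differences| over cell-sized hypercubes.
    def rel_pdf(samples):
        cells = [tuple(c) for c in np.floor(np.asarray(samples) / cell).astype(int)]
        n = len(cells)
        return {k: v / n for k, v in Counter(cells).items()}
    pa, pb = rel_pdf(samples_a), rel_pdf(samples_b)
    return sum(abs(pa.get(k, 0.0) - pb.get(k, 0.0)) for k in set(pa) | set(pb))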
§.§ Results & Conclusion
We compare the m_TV metric between NUTS and SpreadNUTS by plotting them against each other and against samples resampled from the mixture, as well as by plotting the log of the m_TV ratio between NUTS and SpreadNUTS and between each algorithm and samples resampled from the mixture. In the first plot, the lower the m_TV, the better. In the second plot, the closer to 0 the score, the better; specifically, the log of the ratio between the algorithm and the resampled mixture should ideally be close to 0 because this indicates it performs as well as samples from the mixture. We then discuss trends we noticed and provide examples of plots to compare NUTS to SpreadNUTS visually.
The following is a plot of m_TV vs. dimension for NUTS, our algorithm, and samples from a Gaussian mixture all compared against samples from a Gaussian mixture. Note that we compare two distinct draws from a Gaussian mixture with each other when calculating the m_TV to estimate how much of the m_TV of the algorithms is due to randomness attributed to relatively small sample size (we sample 10000 points per mixture and discard the first 500 as burn-in). Alongside it is a comparison of ratios between NUTS m_TV and our algorithm's m_TV with the mixture m_TV vs. dimension to see how close to a random sample the two algorithms get to m_TV.
The following are plots of m_TV ratio with the mixture m_TV for varying values of k (the number of Gaussians in the mixture) after fixing dimension.
The above shows that for dimension 1, NUTS performs better than SpreadNUTS; however, for higher dimensions, SpreadNUTS gets closer and closer to Gaussian sampling, suggesting that it handles density islands better than NUTS.
We note some interesting idiosyncracies of SpreadNUTS: in spite of the fact that it tends to perform better than NUTS in higher dimensions, what might actually be going on is that when the distance between "islands" of density in a distribution is sufficiently small enough for classical NUTS to feasibly leap across islands, SpreadNUTS simply makes it more likely that we will actually leap across islands. However, when the distance between these islands is too large for classical NUTS to reasonably travel between islands, SpreadNUTS cannot increase a low probability of traversing these islands enough for it to happen often. Thus, we conclude that while SpreadNUTS may increase the probability of traversing relatively high-density portions of the distribution relative to classical NUTS, it only attempts to "smooth" sampling across parts of the sample space that classical NUTS explores — it cannot explore parts of the sample space that classical NUTS does not explore. We examine two examples that showcase this trend: a 2d Gaussian mixture consisting of two distributions (μ,I_2),(-μ, I_2) with equal weight on both. In the first figure, μ = ⟨2.5,2.5⟩; in the second figure μ = ⟨5,5⟩. We compare SpreadNUTS to NUTS and see that while SpreadNUTS increases the probability of traversing these islands relative to classical NUTS, SpreadNUTS does not traverse the islands when classical NUTS does not. Furthermore, looking at the above figures, we can see that on the whole, SpreadNUTS m_TV gets closer to Gaussian sampling as dimension increases while NUTS first increases at dimension 2, then decreases at dimension 3 but still with significantly greater m_TV than either Gaussian sampling or SpreadNUTS sampling. We note that the number of dimensions used was small (3) and the number of Gaussians in the mixture was from 1 to 4; furthermore, the number of samples was 9.5K for each sampling method. Some error may have been introduced in the relatively small number of samples. A bigger point of contention is that the number of dimensions was too small to make any concrete claims about the efficacy of NUTS vs. SpreadNUTS and the use of Gaussian mixtures as our sample distribution may have introduced some bias that helps SpreadNUTS sample better than NUTS. There is more testing to be done, but we tentatively conclude that SpreadNUTS alleviates to some degree the lack of sample space exploration present in NUTS.
§ APPENDIX
We derive the gradient and log-likelihood of a Gaussian mixture M ∼ ∑_{i=1}^N π_i 𝒩(μ_i, Σ_i). The likelihood (for a single datapoint x) is
p_M(x | π, μ⃗, Σ⃗) = ∑_{i=1}^N π_i 𝒩(x | μ_i, Σ_i)
and the log-likelihood is
ln p_M(x | π, μ⃗, Σ⃗) = ln( ∑_{i=1}^N π_i 𝒩(x | μ_i, Σ_i) ).
For a single Gaussian, this devolves to c - 0.5(μ - x)^T Σ^{-1}(μ - x) for a constant c = -0.5 ln(|Σ|(2π)^k).
Then, the gradient of the log-likelihood w.r.t. μ⃗ is
∂ ln p_M(x | π, μ⃗, Σ⃗)/∂μ⃗ = ( 1/∑_i π_i 𝒩(x | μ_i, Σ_i) ) · ∂ p_M(x | π, μ⃗, Σ⃗)/∂μ⃗
∂ p_M(x | π, μ⃗, Σ⃗)/∂μ⃗ = ∑_i ∂ π_i 𝒩(x | μ_i, Σ_i)/∂μ_i
∂ π_i 𝒩(x | μ_i, Σ_i)/∂μ_i = ∂/∂μ_i ( π_i √(|Σ_i^{-1}|(2π)^{-k}) exp( -(1/2)(μ_i - x)^T Σ_i^{-1} (μ_i - x) ) ) = Σ_i^{-1}(x - μ_i) π_i 𝒩(x | μ_i, Σ_i)
∂ ln p_M(x | π, μ⃗, Σ⃗)/∂μ⃗ = ∑_i Σ_i^{-1}(x - μ_i) π_i 𝒩(x | μ_i, Σ_i) / ∑_i π_i 𝒩(x | μ_i, Σ_i)
For a single Gaussian, this simplifies to Σ^-1(x-μ).
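The gradient above translates directly into NumPy; in the sketch below, scipy.stats.multivariate_normal supplies the Gaussian density, and all function and argument names are our own.

import numpy as np
from scipy.stats import multivariate_normal

def grad_log_mixture(x, pis, mus, Sigmas):
    # Gradient of ln p_M(x) with respect to each component mean mu_i.
    weighted = np.array([pi * multivariate_normal.pdf(x, mean=mu, cov=S)
                         for pi, mu, S in zip(pis, mus, Sigmas)])
    total = weighted.sum()
    # i-th entry: Sigma_i^{-1}(x - mu_i) * pi_i N(x|mu_i, Sigma_i) / sum_j pi_j N(x|mu_j, Sigma_j)
    return [w / total * np.linalg.solve(S, x - mu)
            for w, mu, S in zip(weighted, mus, Sigmas)]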
As an aside, our testing regime experiences compounding rounding errors when exponentiating and taking logs, specifically when the exponential of a large negative number underflows to 0 before its log is taken. We attempt to alleviate this problem by expressing the proportion of each normal likelihood π_i 𝒩(x | μ_i, Σ_i) to the sum of the normal likelihoods as the exponential of the difference between its log-likelihood and the log of the sum of likelihoods, where we calculate the log of the sum of likelihoods by summing logs as below:
log(x+y) = log( x(1 + y/x) ) = log x + log(1 + y/x) = log x + log(1 + e^{log y - log x})
log ∑_i x_i = log( x_1 (1 + (1/x_1) ∑_{i=2}^k x_i) ) = log x_1 + log(1 + e^{log ∑_{i>1} x_i - log x_1})
log ∑_{i>1} x_i = log x_2 + log(1 + e^{log ∑_{i>2} x_i - log x_2})
x_i / ∑_j x_j = exp( log x_i - log ∑_j x_j )
Thus, we can recursively express the log of sums as the sum of log sums (in practice, we sort the Gaussian pdfs when evaluating logs to minimize error at each step, yielding a technique known as LogSumExp or LSE). This helps decrease error accumulated when summing likelihoods because of the error introduced when summing exponentials.
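In code, the same recursion becomes a few lines (a sketch; in practice scipy.special.logsumexp provides a vectorized equivalent):

import numpy as np

def log_sum_exp(log_xs):
    # Compute log(sum_i x_i) from the individual log x_i without underflow.
    log_xs = np.sort(np.asarray(log_xs, dtype=float))[::-1]   # largest term first, as in the text
    acc = log_xs[0]
    for lx in log_xs[1:]:
        acc = acc + np.log1p(np.exp(lx - acc))   # log(a+b) = log a + log(1 + e^{log b - log a})
    return acc

# The ratios x_i / sum_j x_j are then recovered as exp(log x_i - log_sum_exp(log_xs)).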
|
http://arxiv.org/abs/2307.05330v1 | 20230708201724 | The Value of Chess Squares | ["Aditya Gupta", "Shiva Maharaj", "Nicholas Polson", "Vadim Sokolov"] | cs.AI | ["cs.AI", "cs.LG"] |
The Value of Chess Squares
Aditya Gupta, Shiva Maharaj, Nicholas Polson, Vadim Sokolov
===========================================================
Valuing chess squares and determining the placement of pieces on the board are the main objectives of our study. With the emergence of chess AI, it has become possible to accurately assess the worth of positions in a game of chess. The conventional approach assigns fixed values to pieces (K = ∞, Q = 9, R = 5, B = 3, N = 3, P = 1). We enhance this analysis by introducing marginal valuations for both pieces and squares. We demonstrate our method by examining the positioning of Knights and Bishops, and also provide valuable insights into the valuation of pawns. Notably, Nimzowitsch was among the pioneers in advocating for the significance of Pawn structure and valuation. Finally, we conclude by suggesting potential avenues for future research.
Key Words: AI, AlphaZero, Bayes, Chess, Deep Learning, Neural Network, Chess Piece Values, Knights, Bishops, Pawns.
Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory, there must be
a solution, a right procedure in any position. —John von Neumann
§ INTRODUCTION
Chess AI was pioneered by <cit.>, <cit.>, and <cit.>, who developed algorithms for solving chess. Shannon's approach was one of trial and error and “learning” the optimal policy. Turing (and Champernowne) valued the pieces marginally. They had the following positional evaluation functions: piece mobility, piece safety, king mobility, king safety, and castling. Modern-day methods are based on state-dependent objective function evaluation via learning (a.k.a. reinforcement learning) <cit.>. Solving chess is a daunting NP-hard computational problem, with the Shannon number, which measures the number of possible board states, being astronomically large (even when restricted to positions reachable by legal moves). A major advance over pure look-ahead calculation engines is deep neural networks, which interpolate the value and policy functions from empirical game playing. For example, AlphaZero uses self-play to allow quick solution paths to be calculated and “learns” chess in less than four hours without any prior knowledge; see <cit.> and <cit.> for further discussion.
While much recent work has been done in Chess AI, the question of the value of a chess square has not yet been explored. In this work, we propose a system to measure the advantage/disadvantage offered by control of particular chess squares with different pieces. In particular, we propose a method for measuring the advantage/disadvantage of states of the form s ∈ Color × Piece × Square.
For example, the notion that certain state combinations, such as having a White Knight on f5, provide an advantage to White players is a widely held belief in the world of chess. We analyze these key combinations to see whether the games of high-level chess grandmasters lend merit to this belief. Our investigation will shed light on the strategic nuances and patterns that emerge from such positions and contribute to the understanding of chess at the highest level of play.
To value pieces on squares, we create a Neural Network to analyze a dataset of Grandmaster games and make predictions regarding winning probabilities. This uses Centipawn evaluations for specific subsets of chess states involving Knight and Bishop pieces. The results show that our model successfully generated predictions for White Knights and Bishops, as well as Black Knights and Bishops. The predictions provided valuable insights into the advantages and disadvantages associated with different states and positions on the chessboard. For example, the analysis revealed that Knights placed in the corners of the board had lower winning probabilities, likely due to their limited mobility and restricted influence. On the other hand, as Knights moved closer to the opponent's side, their positional value tended to increase, potentially allowing them to infiltrate enemy territory and exert greater control over the game. The study's results enhance the understanding of chess strategies and gameplay dynamics, aiding in strategic decision-making and the evaluation of different gameplay approaches.
Several chess maxims are reflected in our neural network predictions. For example, Pawns are observed to gain in value as they cross the 4th rank, highlighting the significance of advancing pawns beyond this milestone. Pawns positioned on the h and a files on the 5th rank are particularly powerful, contributing to central control and potential attacking opportunities. Pawns on the 6th rank, especially when supported by a pawn on the 5th rank, become highly threatening. Edge pawns tend to be weaker compared to central pawns, emphasizing the importance of controlling central squares. Additionally, kingside pawns are often more dangerous when advanced than queenside pawns, influencing the dynamics of the game.
Important squares for the white pawn are identified by examining the highest Centipawn evaluation c(s) values in each column. The squares e4, h4, c5, and h6 are highlighted as critical positions for white pawns. Occupying these squares provides advantages, such as central control, support for piece development, and potential attacking opportunities.
Similarly, for black pawns, the squares f5, d5, c4, d3, and f3 emerge as key positions. Placing pawns on these squares enhances black's control of central areas, supports piece coordination, and enables counter-play against white's position.
Understanding the significance of these key squares and applying the derived insights allows players to make informed decisions regarding pawn placement, pawn breaks, and strategic plans. This knowledge empowers players to optimize their pawn structures, control critical areas of the board, and leverage their pawns to gain a competitive advantage in the game.
The rest of the paper is outlined as follows. Section <ref> provides connections with previous literature. Section <ref> goes over the methods we used. Section <ref> provides an application of the proposed methods to Grandmasters and Magnus Carlsen, the World Chess Champion. Section <ref> provides an application to Pawns. Finally, Section <ref> concludes.
§.§ Connections with Previous Work
In the field of Chess AI, previous research has primarily focused on predicting the probabilities of winning w(s) and Centipawn evaluations c(s) for more simplified states. <cit.> explored simpler states where s belongs to the set of Piece. In their work, they utilized Logistic Regression methods to determine the value of a chess piece by creating a model that predicts the outcome of a game based on existing piece imbalances in a given position. A recent lichess study also tried similar approaches <cit.> <cit.>.
Building upon this previous work, our research extends the scope by proposing an augmented state representation s that encompasses Color×Piece×Square, thereby incorporating the square (location) information as an additional component of the state. This augmentation enables a more comprehensive understanding of the game dynamics by considering both the piece and its position on the board. Furthermore, we employ Neural Networks as our chosen methodology, allowing us to capture and model the intricate relationships between the state s and its corresponding Centipawn evaluation c(s).
One crucial distinction between our proposed approach and previous methodologies lies in the predictive target. While prior research focused on predicting the binary outcome of the game (win or loss), our proposed model aims to predict the Centipawn evaluation c(s) instead. By doing so, we shift the focus towards assessing the advantage or disadvantage of a particular chess position, providing more granular information beyond a simple win/loss prediction.
By using the augmented state representation and employing Neural Networks, our proposed model offers a more comprehensive and nuanced analysis of the chess game. This allows us to capture the intricate interplay between the color, piece type, square, and Centipawn evaluation, providing a deeper understanding of the factors influencing the game's outcome.
In the realm of Chess AI research, <cit.> made significant strides by employing Q-learning methods, as discussed in Section <ref>, with a specific focus on chess gambits. Their work aimed to uncover key characteristics and insights associated with these strategic opening moves by calculating Q-values for various chess gambits. This initial exploration into the application of Q-learning in analyzing and understanding chess gambits laid a solid foundation for further research in this field.
This paper extends the work of <cit.> and proposes novel architectures that can predict the probabilities of winning w(s) and Centipawn evaluations c(s) for all possible states s ∈Color×Piece×Square. While previous work focused on specific subsets of states, particularly those related to gambits, our approach seeks to encompass the entire chessboard by incorporating the color, piece type, and square information into a comprehensive state representation.
By embracing a wider scope of analysis that covers all possible states, our research aims to provide a more comprehensive understanding of the game, surpassing the limitations imposed by narrow subsets. To achieve this, we employ advanced techniques, such as Neural Networks, to capture the intricate relationships between the components of a state and the corresponding probabilities of winning w(s) and Centipawn evaluations c(s). This allows us to offer valuable insights into the dynamics of chess gameplay across a vast array of states, thereby providing a more holistic and comprehensive analysis.
Through our research, we strive to advance the field by developing robust and effective models capable of accurately predicting the probabilities of winning and assessing the Centipawn evaluations for any given state. By considering the full spectrum of states represented by Color×Piece×Square, our proposed architectures pave the way for a deeper understanding of chess strategies. They enable us to evaluate the efficacy of these strategies and unravel the intricacies of the game, ultimately contributing to the development of more sophisticated and intelligent Chess AI systems.
§ CHESS PIECE AND SQUARE VALUATION
Our work will provide values for states consisting of a combination of pieces and squares. For example, we may wish to assess the value of a fianchettoed bishop on the queen's side when that bishop controls a key diagonal. We denote this value by
V(♗, b2),
or the value of a white knight on a good outpost such as f5, which is denoted
V(♘, f5). As valuation will be based on the probability of winning, as calculated by a chess engine, the law of total probability gives us a key identity
V(♘) = ∑_position V(♘, position),
where the sum is taken over all future positions. Hence, we can see that the initial value of the knight (i.e., V(♘) = 3) comes from its total use throughout the game. Once the pieces have moved, there are different marginal values. Our goal is to be able to assess values such as V(♘, f5).
The commonly used chess piece valuations are given by
(K, Q, R, B, N, P) = (∞, 9, 5, 3, 3, 1).
These were modified in <cit.> through the use of machine learning techniques to (K, Q, R, B, N, P) = (∞, 8.9, 4.6, 3.3, 3, 1), and a recent lichess study on piece values finds
(K, Q, R, B, N, P) = (∞, 9.82, 4.93, 3.28, 3.16, 1). We build on this line of research by adding square position to the state vector.
§.§ Centipawn Evaluation and Optimal Play
In our approach, we begin by formalizing the theoretical functions used in Q-learning. The value function, denoted as V(s), represents the probability of winning the game given a specific state s. This state s belongs to the set Color×Piece×Square, and it is worth emphasizing that V(s) is calculated with respect to the color parameter in any given state.
To assess any legal chess position, we derive a Centipawn evaluation denoted as c(s). The Centipawn serves as a measurement unit for evaluating the advantage in chess, where one Centipawn is equal to 1/100 of a pawn. The win probability w(s) can be directly obtained from c(s) using the following equation:
w(s) = ℙ(winning | s) = 1/(1 + 10^{-c(s)/4}), and c(s) = 4 log_{10}( w(s)/(1 - w(s)) ).
For example, if White has a c(s) =0.2 advantage, then the win probability is w(s) = 0.526.
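Both directions of this conversion are one-liners; the helper names below are ours, and the printed example mirrors the c(s) = 0.2 advantage from the text.

import math

def win_probability(c):
    # w(s) = 1 / (1 + 10^(-c(s)/4))
    return 1.0 / (1.0 + 10.0 ** (-c / 4.0))

def centipawn_evaluation(w):
    # c(s) = 4 log10(w / (1 - w)), the inverse mapping
    return 4.0 * math.log10(w / (1.0 - w))

print(win_probability(0.2))   # roughly 0.53 for the 0.2 advantage in the text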
To address the sequential decision problem, we employ the dynamic programming technique known as Q-learning. This methodology involves breaking down the decision problem into smaller sub-problems. A key principle utilized in Q-learning is Bellman's principle of optimality, which states:
Bellman Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (Bellman, 1957)
To solve this sequential decision problem, we employ Backwards Induction, which determines the most optimal action at the last node in the decision tree (i.e., the checkmate position). Utilizing this information, we can then determine the best action for the second-to-last decision point, and this process continues backward until we identify the optimal action for every possible situation, effectively solving the Bellman equation.
In recent years, the field of artificial intelligence has witnessed significant advancements, particularly in the realm of AI algorithms like deep learning, alongside the development of remarkably powerful computer chess engines. These technological breakthroughs have revolutionized the way we evaluate and understand chess positions, enabling us to delve into the intricacies of the game with unparalleled precision.
One notable achievement stemming from these advancements is the ability to accurately assess chess positions. By leveraging AI algorithms, particularly deep learning techniques, we can now analyze and comprehend chess moves and strategies in a manner that was previously unimaginable. These algorithms have been specifically designed to process vast amounts of data, learn from patterns, and make informed decisions, ultimately resulting in highly accurate evaluations of chess positions.
Moreover, the advent of advanced computer chess engines, exemplified by the likes of Stockfish 15 <cit.>, has played a pivotal role in shaping the landscape of chess analysis and study. These engines, meticulously crafted through a combination of cutting-edge algorithms and extensive programming, have transformed the way chess is played and understood.
Gone are the days when determining the optimality of specific chess lines of play relied solely on human intuition and analysis. The emergence of chess engines has effectively shifted the burden from human players and theorists to these intelligent systems. By leveraging their computational power and algorithmic prowess, chess engines have assumed the responsibility of assessing various lines of play, thus solving the Bellman equation.
By adhering to Bellman's optimality condition, computer chess engines fulfill the requirements of possessing complete knowledge about the chess environment and evaluating all possible actions and their consequences. Through this rigorous analysis, they provide insights into the optimal move in a given position
§.§ Q-Values
The corresponding Q-value represents the probability of winning, given a policy/move a in a given state s, by following the optimal Bellman path thereafter:
Q(s, a) = ℙ(winning|s, a).
To address the optimal sequential decision problem, we employ Q-learning, which calculates the Q-matrix (<cit.>, <cit.>), denoted as Q(s, a) for a given state s and action a. The Q-value matrix describes the value of performing action a and then acting optimally thereafter. The current optimal policy and value function can be expressed as follows:
V(s) = max_a Q(s, a) = Q(s, a^*(s)), where a^*(s) = argmax_a Q(s, a).
The policy function establishes the optimal mapping from states to actions, and by substituting the Q-values, we obtain the value function for a given state.
In Section <ref>, we introduce a Neural Network architecture designed specifically for predicting the value of c(s) given the state s. By harnessing the predictive capability of this Neural Network, we can subsequently determine the probability of a player winning, denoted as w(s), based on their corresponding state s.
The Neural Network model comprises interconnected layers, including an input layer that accepts the state s as input. Through a series of computations within the hidden layers, the model captures complex relationships and patterns inherent in the input data. Ultimately, the output layer produces the predicted value of c(s).
By employing this trained Neural Network model, we can make predictions of c(s) for unseen states s. These predicted values can then be utilized to compute the probability of a player winning, denoted as w(s). The specific relationship between c(s) and w(s) is contingent upon the characteristics and dynamics of the chess game under analysis.
With the ability to predict w(s), we gain valuable insights into the probability of a player winning based on their current state s. This information can be harnessed in various ways, including evaluating strategic moves, assessing the overall advantage or disadvantage of specific board configurations, and guiding decision-making during gameplay.
The Neural Network's capacity to capture intricate patterns and relationships within the input data significantly contributes to more accurate predictions and a deeper understanding of the dynamics of the chess game. By incorporating the predicted values of c(s) and computing the corresponding probabilities of winning, we enhance our analytical capabilities and facilitate informed decision-making in the context of chess gameplay.
§.§ Neural Network Architecture
We design a specific 3-layer Neural Network aimed at predicting the value of a chess square and piece combination, denoted as c(s) for s ∈Color×Piece×Square, as shown in Figure <ref>. This model incorporates a hyperbolic tangent (tanh) activation function as a key component of its architecture. By applying the tanh activation function to the network layers, the model becomes capable of capturing and processing intricate patterns and relationships within the input data.
To ensure effective training of the model, we curate a meticulously crafted dataset. This dataset consists of two essential elements: the state information, represented by s, and the corresponding centipawn evaluation c(s) (referred to as the CPL) recorded for each state. The state information encompasses relevant factors, variables, or parameters that define the chessboard system or environment.
Through supervised learning using this dataset, the model learns to associate the given state information with the corresponding CPL. Consequently, it acquires the ability to predict the CPL based on the provided state information as input. This training process involves iteratively adjusting the model's parameters to minimize the disparity between its predictions and the actual CPL values present in the training dataset.
The selection of the tanh activation function holds particular significance for our chess square and piece prediction model. The tanh function introduces non-linearity into the model, enabling it to capture complex relationships specific to chessboard configurations. This non-linearity allows the model to interpret intricate patterns and dependencies between the input variables and the output, facilitating more accurate predictions.
Furthermore, the tanh activation function maps values into the range [-1, 1], which is well-suited for our chess-related application. This bounded range helps keep the model's predictions of the centipawn evaluation within a specific value range, aligning with the constraints and limitations inherent to chess strategies.
By incorporating the tanh activation function and training the model on the state information and corresponding centipawn evaluations, our proposed model strives to provide a robust and dependable framework for predicting centipawn evaluations in various chess scenarios. Its ability to capture the intricate relationships specific to chess squares and pieces makes it particularly valuable for tasks such as evaluating the relative strength of different board configurations, predicting advantageous moves, and assisting in strategic decision-making during chess gameplay.
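A minimal PyTorch rendering of such a 3-layer tanh network is sketched below. The numeric encoding of the state (color, piece, square as three features), the hidden width, and the optimizer settings are our own assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class CentipawnNet(nn.Module):
    # Three-layer fully connected network with tanh activations, predicting c(s).
    def __init__(self, in_features=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s)

model = CentipawnNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(states, evals):
    # One gradient step; states is a (B, 3) float tensor, evals a (B, 1) tensor.
    optimizer.zero_grad()
    loss = loss_fn(model(states), evals)
    loss.backward()
    optimizer.step()
    return loss.item()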
§.§ Data
In order to train the Neural Network effectively, a training dataset is constructed, comprising two essential components. This dataset consists of elements that contain both the state information denoted by s, as well as the corresponding evaluation associated with that particular state.
To gather the necessary chess game data for analysis, a vast mega database containing millions of previously played chess games is utilized. Within this database, each game is represented using the Portable Game Notation (PGN) notation, which allows for standardized representation and compatibility with various chess software and applications.
The process of constructing the training dataset involves parsing and evaluating all positions p within each game. The Forsyth-Edwards Notation (FEN) is employed to determine the location of relevant chess pieces within each position p. As a result, all states s ∈ p are extracted and added to the training dataset. To navigate through the moves of each chess game systematically, the Python Chess library is utilized. This library provides a comprehensive set of functions and classes specifically designed for working with chess games and positions, enabling efficient traversal of the stored games in the database.
For every position p within the dataset, an evaluation is obtained. To accomplish this, the research incorporates the Stockfish engine, a widely recognized and powerful chess engine. Stockfish employs advanced algorithms and evaluation functions to assess the strength of positions. By leveraging the capabilities of Stockfish, the training dataset can determine the evaluation of each position p on the chessboard accurately.
Finally, this evaluation is associated with all states s ∈ p, resulting in a comprehensive dataset that encompasses both the state s and the evaluation associated with the position p from which s was derived. This dataset serves as the foundation for training the Neural Network, enabling it to learn and make informed decisions based on the provided state information.
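A sketch of this extraction loop using the python-chess and Stockfish interfaces is shown below; the engine path, search depth, mate-score clipping, and the decision to read piece locations from piece_map() rather than the FEN string directly are assumptions on our part.

import chess
import chess.engine
import chess.pgn

def states_with_evals(pgn_path, engine_path="stockfish", depth=12):
    # Yield ((color, piece_type, square), centipawn) pairs for every position.
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        with open(pgn_path) as f:
            while (game := chess.pgn.read_game(f)) is not None:
                board = game.board()
                for move in game.mainline_moves():
                    board.push(move)
                    info = engine.analyse(board, chess.engine.Limit(depth=depth))
                    cp = info["score"].white().score(mate_score=10_000)  # from White's view
                    for square, piece in board.piece_map().items():
                        yield (piece.color, piece.piece_type, square), cp
    finally:
        engine.quit()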
§ KNIGHT AND BISHOP VALUATION
In this study, our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Knight, Bishop}}. Although our focus is initially on the Knight and Bishop pieces, it is important to note that the model can be expanded to encompass all pieces, offering a broader analysis of the game.
To provide a visual representation of the predicted values, heat maps are generated for both w(s) and c(s) corresponding to each valid combination within the specified subset. These heat maps offer a comprehensive overview of the probabilities of winning and Centipawn evaluations associated with the Knight and Bishop pieces in different states.
To illustrate the efficacy of our model, we first employ it to predict the Centipawn evaluations c(s) specifically for states where the color c is White and the piece p is Knight or Bishop. The resulting predictions are showcased in Figure <ref> and Figure <ref>, providing valuable insights into the relative advantages or disadvantages of such states. Building upon this, we further use c(s) to derive the corresponding probabilities of winning w(s) for these specific states. The model-generated probabilities are visualized in Figure <ref> and Figure <ref>, offering a clear representation of the likelihood of White winning the game given the occurrence of the specified state s.
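One simple way to render such board heat maps, assuming the per-square predictions are stored in a dictionary keyed by python-chess square indices (an illustrative convention), is sketched below with matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_square_heatmap(values_by_square: dict, title: str) -> None:
    """Render an 8x8 board heat map from per-square predictions
    (squares indexed 0..63 with a1 = 0, as in python-chess)."""
    grid = np.full((8, 8), np.nan)
    for square, value in values_by_square.items():
        rank, file = divmod(square, 8)
        grid[7 - rank, file] = value          # place rank 8 at the top of the plot
    plt.imshow(grid, cmap="viridis")
    plt.colorbar(label=title)
    plt.xticks(range(8), list("abcdefgh"))
    plt.yticks(range(8), [str(r) for r in range(8, 0, -1)])
    plt.title(title)
    plt.show()

# plot_square_heatmap(predicted_c_white_knight, "c(s): White Knight")  # hypothetical input
```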
By leveraging our proposed model, we gain a deeper understanding of the dynamics of the game, specifically in relation to the Knight and Bishop pieces within the context of the White color. This analysis not only facilitates strategic decision-making but also provides a basis for evaluating the effectiveness of various gameplay approaches. Moreover, the model's expandability to encompass all pieces allows for a comprehensive examination of the game across different states, enabling us to uncover additional insights and enhance the overall understanding of chess strategies and gameplay dynamics.
The model is then used to determine c(s) and w(s) for states { (c, p, sq) ∈ s : c = "Black", p ∈{Knight, Bishop}}, as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively.
Key squares for the Bishops can be seen in <ref>.
The applications of the model on Grandmaster games provide valuable insights into the dynamics and strategies employed by top-level chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we gain a deeper understanding of the advantages and disadvantages associated with different chess positions. These insights have several practical applications in chess analysis and gameplay evaluation.
The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions.
By focusing on specific subsets of states, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to the overall gameplay strategies employed by Grandmasters. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations.
Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game and evaluate the effectiveness of various gameplay approaches. This broader perspective enhances our overall understanding of chess strategies and gameplay dynamics.
The predictions generated by the model can also be utilized for comparative analysis between different players or groups of players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states, we can identify patterns and trends in the strategies employed by Grandmasters. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities.
For example, in Figure <ref>, where w(s) represents the evaluation of the knight-square state, we can observe that the lowest values of w(s) are found in the white corners of the chessboard, specifically squares a1 and h1. This observation aligns with the widely held belief that knights are generally at their worst when confined to the corners of the board.
The disadvantage of having a knight in the corner may stem from its limited mobility and restricted scope of influence. When placed in the corners, knights have fewer potential squares to reach and can easily become isolated from the central and more strategically significant areas of the board.
On the other hand, as the knights move closer to the opponent's side of the board, their positional value tends to increase. This is most likely due to the knights' ability to infiltrate enemy territory, potentially attacking key squares, pieces, or pawns.
The increasing value of knight-square states as the knights advance can be attributed to several factors. Firstly, the proximity to the opponent's pieces and pawns provides more targets for the knight's maneuvers and attacks. Secondly, knights positioned closer to the enemy's side can exert greater control over central squares and influence the dynamics of the game. This control can restrict the opponent's options and potentially create weaknesses in their position.
Analyzing the values of knight-square states in different positions on the board, such as the corners and closer to the opponent's side, supports the claim that the placement of knights significantly affects their effectiveness. Understanding the strengths and weaknesses associated with different knight positions helps players make informed decisions about piece placement, strategic plans, and tactical considerations. Key squares for the knight to occupy are marked in Figure <ref>.
The applications of our model on Grandmaster games provide valuable insights into the dynamics and strategies employed in high-level chess. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of the game across different states, facilitating a deeper understanding of chess strategies and enhancing the overall gameplay experience.
§.§ Magnus Carlsen
Our proposed model can be further applied to gain insights into the playing style and performance of specific players. In this section, we focus on the world-renowned chess player Magnus Carlsen, the reigning World Chess Champion. By applying our model to the games played by Carlsen, we aim to uncover unique patterns and characteristics that contribute to his success and distinguish his gameplay from other Grandmasters.
Our proposed model is applied to a dataset consisting of 2000+ Carlsen games played in the last 5 years. Similar to the previous section, we begin by predicting the Centipawn evaluations c(s) for states where Carlsen plays as the “White" color and utilizes the “Knight" or “Bishop" piece. These predictions provide valuable insights into the relative advantages or disadvantages of Carlsen's chosen states, shedding light on his strategic decision-making process. The resulting heat maps, showcased in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, offer a visual representation of the predicted Centipawn evaluations for Carlsen's specific subset of states.
Building upon this analysis, we further utilize the Centipawn evaluations c(s) to derive the corresponding probabilities of winning w(s) for Carlsen's selected states. The model-generated winning probabilities provide a clear representation of Carlsen's likelihood of winning the game given the occurrence of the specified state s.
By focusing on Carlsen's gameplay, we gain a deeper understanding of his preferred strategies and tendencies when employing the Knight piece as the “White" color. This analysis allows us to assess the effectiveness of Carlsen's gameplay choices, providing insights into his decision-making process and potential areas of strength or improvement. Additionally, comparing Carlsen's results to the general dataset of Grandmaster games helps us evaluate his performance against the broader chess community.
The model is then used to determine c(s) and w(s) for states { (c, p, sq) ∈ s : c = "Black", p ∈{Knight, Bishop}}, as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively.
The applications of the model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by one of the world's top chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we can gain a deeper understanding of the advantages and disadvantages associated with different chess positions in Carlsen's games. These insights have numerous practical applications in chess analysis and gameplay evaluation.
The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states encountered by Magnus Carlsen. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops in Carlsen's games. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions as encountered by Carlsen.
By focusing on specific subsets of states in Carlsen's games, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to Carlsen's overall gameplay strategies. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations based on Carlsen's approach.
Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states in Carlsen's games. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game as played by Carlsen and evaluate the effectiveness of various gameplay approaches employed by him. This broader perspective enhances our overall understanding of Carlsen's strategies and gameplay dynamics.
The predictions generated by the model can also be utilized for comparative analysis between Magnus Carlsen and other players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states in Carlsen's games, we can identify patterns and trends in his strategies. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities while considering Carlsen's approach.
In Figure <ref>, we discover the solution to one of the questions raised in Section <ref>: the value of the white knight on f5. Figure <ref> illustrates the distribution of c(s) for the White Knight on f5 in Carlsen's games. It is evident that the c(s) values for the White Knight exhibit a positive skew, indicating that this particular state s is typically associated with favorable c(s) values. Therefore, having a white knight positioned on f5 often confers an advantage.
By incorporating such insights into our analysis of Carlsen's games, we gain a more comprehensive understanding of the strengths, weaknesses, and strategic implications of the Knight and Bishop pieces as employed by Magnus Carlsen.
In sum, the applications of our model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by this world-class chess player. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions encountered by Carlsen, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of Carlsen's games, facilitating a deeper understanding of his strategies and enhancing the overall gameplay experience.
§ PAWN VALUATION
No pawn exchanges, no file-opening, no attack—Aron Nimzowitsch
Our study is not complete until we apply the model to the mighty pawn. Our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Pawn}}.
The results of the model when applied to the White Pawn are shown in Figure <ref> and Figure <ref>.
We note a few chess maxims that are reflected in the model predictions.
* Pawns gain in value as they cross the 4th rank: This point highlights an important principle in chess, where advancing pawns beyond the 4th rank often leads to increased positional strength and potential threats. As pawns move forward, they gain control over more squares, restrict the opponent's piece mobility, and open up lines for their own pieces. Crossing the 4th rank is a significant milestone that can significantly impact the dynamics of the game.
* Pawns on the h and a files are very good on the 5th rank: This point emphasizes the strategic importance of pawns positioned on the h and a files when they reach the 5th rank. Pawns on these files can have a powerful influence on the game, particularly in the endgame. Placing pawns on the 5th rank provides support for the central pawns, helps control key central squares, and may facilitate piece activity and potential attacks on the opponent's position.
* Pawns on the 6th rank are deadly, especially when supported by a pawn on the 5th rank: This point highlights the strength of pawns on the 6th rank, which is just two steps away from promotion. Pawns advanced to this rank become highly dangerous, as they pose a direct threat to promote to a more powerful piece. When supported by a pawn on the 5th rank, these pawns can create a formidable pawn duo, exerting significant pressure on the opponent's position and potentially leading to advantageous tactical opportunities.
* Edge pawns tend to be weaker than central pawns: This point draws attention to the relative weakness of pawns placed on the edges of the board (such as the a and h files) compared to pawns in central positions. Edge pawns have fewer potential squares to advance or support other pieces, limiting their mobility and influence. In contrast, central pawns control more critical squares, contribute to a stronger pawn structure, and have a greater impact on the overall game dynamics.
* Kingside pawns are more dangerous when advanced than queenside pawns: This point highlights a positional aspect where advancing pawns on the kingside (the g and h files) can have a more immediate and aggressive impact compared to advancing pawns on the queenside (the a and b files). Advanced kingside pawns can create open lines, potentially exposing the opponent's king to attacks or weakening their pawn structure. Understanding this distinction helps players assess the strategic implications of pawn advances on different sides of the board.
Important squares for the white pawn can also be seen by examining the highest Centipawn evaluation c(s) values in each column. By analyzing the rows in the heatmap corresponding to the white pawns, we can identify squares that consistently have high Centipawn evaluations, indicating their significance for white pawns.
Starting from the top row (from White's perspective), the squares with the highest c(s) values are e4, h4, c5, and h6. These squares represent critical positions for white pawns.
The square e4, located in the fourth row, is a well-known central square in chess. Occupying e4 with a white pawn can provide several advantages, such as controlling important central squares, supporting piece development, and establishing a strong pawn presence in the center.
Also in the fourth row, we find the square h4. Although it is on the edge of the board, it is an important square for white pawns. Placing a pawn on h4 can serve multiple purposes, including potentially supporting a kingside pawn storm, reinforcing control over the g5 square, or preparing to launch an attack on the opponent's position.
In the fifth row, we encounter the square c5. Occupying c5 with a white pawn can contribute to a solid pawn structure and provide control over central squares. It may also support piece mobility and influence the game's dynamics, particularly in the context of pawn breaks or central pawn exchanges.
Finally, in the sixth row, the square h6 stands out with the highest c(s) value. Placing a pawn on h6 can have strategic implications, such as potentially supporting kingside attacks or acting as a defensive shield for the king.
By identifying these squares with high c(s) values, we gain valuable insights into the strategic positioning of white pawns. These squares offer opportunities for central control, piece activity, attacking potential, and overall pawn structure. Understanding the significance of these squares helps players make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their advantage in the game.
We next apply this model to the black pawns. The results are shown in Figure <ref> and Figure <ref>.
Similar conclusions can be drawn for the black pawns. By analyzing the highest Centipawn evaluation c(s) values in each column for the black pawns, we can identify the key squares that consistently have high evaluations, signifying their significance for black pawns.
Just like for the white pawns, the rows in the heatmap corresponding to the black pawns reveal important squares. The squares with the highest c(s) values for black pawns are f5, d5, c4, d3, and f3. These squares play a crucial role in determining the strength and strategic positioning of the black pawns.
The square f5, located in the fifth row, emerges as one of the critical squares for black pawns. Placing a pawn on f5 can provide black with control over central squares, potential support for piece development, and opportunities for counterplay.
The square d5 stands out with a high c(s) value. Occupying d5 with a black pawn contributes to central control, potentially restricts white's pawn breaks, and provides a solid foundation for black's pawn structure.
In the fourth row, the square c4 is identified as an important square for black pawns. Occupying c4 can offer black strategic advantages, such as central control, potential support for piece activity, and the creation of tactical opportunities.
Furthermore, the square d3 in the third row holds significance for black pawns. Placing a pawn on d3 strengthens black's central presence, potentially restricts white's pawn advancements, and helps solidify black's position in the center.
Lastly, the square f3 in the third row also demonstrates a high c(s) value. Occupying f3 with a black pawn can support kingside counterplay, potentially restrict white's piece mobility, and offer opportunities for tactical operations.
Analyzing these key squares for black pawns, namely f5, d5, c4, d3, and f3, provides valuable insights into the strategic considerations and potential strengths of the black pawn structure. Occupying and controlling these squares strategically enhances black's control of central areas, supports piece coordination, and enables counterplay against white's position.
By understanding the significance of these squares, players can make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their potential advantage and navigate the complexities of the game from the black perspective.
§ DISCUSSION
In this paper, we presented a comprehensive methodology for evaluating chess positions and predicting the probabilities of winning w(s) and Centipawn evaluations c(s). Our approach utilized a combination of Centipawn evaluation, Q-learning, and Neural Networks to capture the complex dynamics of the game and facilitate informed decision-making.
We began by formalizing the theoretical functions used in Q-learning, such as the value function V(s) and Centipawn evaluation c(s). The value function represented the probability of winning the game given a specific state s, while the Centipawn evaluation measured the advantage in chess. We derived the win probability w(s) from the Centipawn evaluation using a mathematical equation.
To address the sequential decision problem, we employed the dynamic programming technique of Q-learning, which involved breaking down the problem into smaller sub-problems and solving the Bellman equation. The Q-value matrix represented the probability of winning given a policy/move in a specific state, and we determined the optimal policy and value function using the Q-values.
To predict Centipawn evaluations c(s), we designed a Neural Network architecture specifically tailored for chess positions. This model incorporated the tanh activation function to capture intricate patterns and relationships within the input data. By training the Neural Network on a meticulously crafted dataset, we could make accurate predictions of Centipawn evaluations for unseen states.
Our methodology expanded upon previous work by considering a comprehensive state representation that encompassed color, piece type, and square information. This allowed for a more nuanced analysis of the game dynamics and a deeper understanding of the factors influencing the outcome. We also showcased the applications of our model, focusing on specific subsets of states, such as the Knight and Bishop pieces, and visualizing the predicted probabilities of winning and Centipawn evaluations through heat maps.
Further research in this area could explore the dynamic nature of square values, taking into account positional changes and the interaction between different pieces. By refining and expanding our methodology, we can continue to deepen our understanding of the intricate dynamics of chess positions and contribute to advancements in the field of chess AI.
In conclusion, our methodology provides a robust framework for evaluating chess positions and making informed decisions during gameplay. By combining Centipawn evaluation, Q-learning, and Neural Networks, we achieved a comprehensive analysis of the game dynamics and enhanced our ability to assess strategic moves and guide decision-making. Our research contributes to the development of more sophisticated and intelligent Chess AI systems, paving the way for deeper insights into the intricacies of the game.
With our methodology, we strive to unravel the logical relations of chess and provide a comprehensive understanding of the game, empowering players and researchers alike to unlock new levels of strategic thinking and mastery.
SALC: Skeleton-Assisted Learning-Based Clustering for Time-Varying Indoor Localization
An-Hung Hsiao, Li-Hsiang Shen^†, Chen-Yi Chang, Chun-Jie Chiu, and Kai-Ten Feng
Department of Electrical and Computer Engineering
National Yang Ming Chiao Tung University, Hsinchu, Taiwan
^†California PATH, Institute of Transportation Studies, University of California at Berkeley, Berkeley, USA
Email: [email protected], [email protected], [email protected], [email protected], [email protected]
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================
Wireless indoor localization has attracted a significant amount of attention in recent years. Using received signal strength (RSS) obtained from WiFi access points (APs) to establish a fingerprinting database is a widely utilized method in indoor localization. However, the time-variant problem for indoor positioning systems is not well investigated in the existing literature. Compared to conventional static fingerprinting, a dynamically-reconstructed database can adapt to a highly-changing environment, which achieves sustainability of localization accuracy. To deal with the time-varying issue, we propose a skeleton-assisted learning-based clustering localization (SALC) system, including ROMAC, CODE, and CsLE. The SALC scheme jointly considers similarities from the skeleton-based shortest path (SSP) and the time-varying RSS measurements across the reference points (RPs). ROMAC clusters RPs into different feature sets and therefore selects suitable monitor points (MPs) for enhancing location estimation. Moreover, the CODE algorithm aims at establishing an adaptive fingerprint database to alleviate the time-varying problem. Finally, CsLE is adopted to acquire the target position by leveraging the benefits of clustering information and estimated signal variations in order to rescale the weights of the weighted k-nearest neighbors (WkNN) method. Both simulation and experimental results demonstrate that the proposed SALC system can effectively reconstruct the fingerprint database with an enhanced location estimation accuracy, which outperforms the other existing schemes in the open literature.
Wireless indoor localization, clustering, time-varying, machine learning, neural networks.
§ INTRODUCTION
For decades, the emerging location-based services (LBSs) have been promoted by telecom operators, and they rely significantly on acquiring the position of user equipment (UE) or target devices <cit.>. There exist abundant techniques to be adopted for LBS, such as the global positioning system (GPS) <cit.>, passive infrared (PIR) sensors <cit.>, and WiFi <cit.>. LBS can be adopted in a variety of contexts including indoor/outdoor localization <cit.> and human presence detection <cit.>. Nowadays, as the life-oriented demands in public areas soar, indoor LBS capable of locating a particular person or monitoring the people flow has received considerable attention. However, GPS is not suitable for indoor LBS since its signals suffer from severe environmental degradation, including scattering and blockage, which leads to unpredictably low positioning accuracy. Therefore, a short-range signal source such as WiFi becomes a potential candidate to be utilized in a complex indoor environment.
WiFi-based localization systems are widely applied in indoor positioning based on WiFi access points (APs) and portable devices using received signal strength (RSS) as information inference. The RSS is the signal information related to the path-loss distance between the transmitter and receiver, which can be readily obtained from WiFi APs. Fingerprinting <cit.> is a widely-adopted positioning algorithm based on information of APs, which typically contains both offline measurement and online estimation phases. In the offline phase, the information is measured and collected at pre-defined locations, so-called reference points (RPs), to establish a database consisting of RSS from APs and geometric locations of RPs. During the online phase, real-time RSS will be received and matched to those from RPs in the offline-established database to estimate the target position with the aid of signal similarity features. In <cit.>, the authors utilize complex channel information to overcome problems such as data loss, noise and interference in the fingerprint database and laborious offline training. The authors in <cit.> have proposed a solution for alleviating frequent data collection and improving privacy in two specifically defined scenarios. Hence, fingerprinting possesses low computational complexity and is capable of reflecting the multipath effects of non-line-of-sight in indoor environments <cit.>. Moreover, the weighted k-nearest neighbor (WkNN) algorithm <cit.> is employed to locate the desired device during the online phase, which is calculated based on the k largest weights among all RPs. Note that the weight implies the difference between the real-time user's RSS and the offline measured one in the database. Consequently, it becomes important to investigate the factors disturbing RSS values, which can otherwise result in acquiring a faraway, incorrect target location with similar RSS.
However, the RSS will also be affected by time variation and human blockage. The authors in <cit.> consider the relationship between the RSS of monitor points (MPs) and of RPs collected in the offline phase to construct an artificial neural network for position estimation. During the online phase, the real-time RSS values at RPs are predicted based on the collected data at MPs. In general, the intention of adopting MPs is to observe the environmental changes in specific areas by continuously collecting signal information. The MPs detect the variation of RSS so that they can provide immediate information to reconstruct a more precise real-time radio map. Moreover, the function of an MP can be embedded into smartphones or mobile devices, which is considered a relatively low-cost way to monitor the RSS. Nevertheless, inappropriately deployed locations of MPs cause insufficient information for radio map establishment. Therefore, it is essential to design feasible schemes to cluster the RPs into several groups and to select the cluster heads as MPs based on available signal sources and map information <cit.>. The authors in <cit.> introduce the affinity propagation clustering algorithm, which can deal with the RP clustering problem. The affinity propagation clustering process exchanges similarity by utilizing the responsibility and availability messages among the nodes in each iteration. Note that the responsibility message quantifies how well-suited the node is to serve as the exemplar for the other nodes, while the availability message represents how appropriate it is for the node to select the other node as its exemplar. After the clustering process, the nodes will be divided into several clusters and choose their own cluster head as the exemplar. The advantage of affinity propagation clustering is that the process only requires the similarity among data and does not require pre-defining the number of clusters in most cases. The number of clusters can also be constrained by adjusting specific parameters. Hence, we can choose appropriate MPs from the RPs based on affinity propagation clustering by leveraging a designed similarity function with the aid of useful information.
In this paper, to properly determine the locations of MPs, the proposed ROMAC algorithm is designed based on affinity propagation clustering. With the aid of ML-enhanced techniques, CODE is proposed to reconstruct the fingerprint database to solve the time-variation problem without recollecting the RSS information of RPs. Moreover, we propose the CsLE algorithm to further improve the position estimation accuracy by computing adaptive weights based on the RSS signal variance and cluster information. Unlike conventional WkNN, CsLE can localize the user accurately without selecting farther RPs as candidates during the online phase. The contributions of this paper are summarized below.
* We propose the skeleton-assisted learning-based clustering localization (SALC) system that optimizes RP clustering and MP selection in ROMAC, fingerprinting database reconstruction in CODE, and user positioning in CsLE under time-varying indoor environments.
* ROMAC utilizes affinity propagation to cluster RPs, with the selected MPs serving as cluster heads. We jointly consider RSS differences, the geometric relationship of RPs, and the time-variation effect. Note that the clustering information from ROMAC is utilized in both CODE and CsLE for database reconstruction and accurate positioning.
* The proposed CODE algorithm aims to solve the time-variation issue by utilizing linear regression and neural network techniques to generate an adaptive database for real-time radio maps. CsLE aims for user positioning based on ROMAC and CODE, and considers the RSS variance from APs. It avoids noisy RSS signals and infeasible faraway RPs as candidate locations.
* Simulation results show that the proposed ROMAC appropriately clusters the RPs as well as chooses the MP cluster heads. CODE can efficiently reconstruct a real-time radio map to overcome time variation, while CsLE reaches a higher localization estimation accuracy than existing methods. Moreover, real-time experiments are conducted to validate the effectiveness of our proposed SALC system, with a higher positioning accuracy.
The remainder of this paper is organized as follows. Section <ref> reviews the related works. Section <ref> presents the system architecture and the flowchart of the SALC system. The proposed ROMAC, CODE, and CsLE schemes are elaborated in Section <ref>. Section <ref> discusses the simulation and experimental results in detail, and conclusions are drawn in Section <ref>.
§ RELATED WORK
Indoor localization has been an active research topic for decades; conventional RSS-based fingerprinting for indoor positioning is studied in <cit.>. There exist critical bottlenecks restricting its large-scale implementation, including the time-consuming and labor-intensive data collection process and severe wireless environmental influence, e.g., RSS suffers from dynamic environmental changes such as temperature, shadowing, and obstacles. In <cit.> and <cit.>, the authors have provided a comprehensive survey of existing fingerprinting methods and dynamic update techniques for radio maps. However, they did not consider the concept of MP deployment, which is capable of efficiently updating the radio map by selecting the appropriate MPs. As a result, online RSS measurements may significantly deviate from the fingerprint database established in the offline phase. Moreover, the RSS is also affected by time variation and human blockage, which are not jointly considered in conventional schemes. Furthermore, a static fingerprint database may be unreliable, which requires repeated data collection to maintain a satisfactory positioning accuracy. The works of <cit.> have conceived different methods to solve the above-mentioned problems. For example, the authors in <cit.> estimate the RSS at non-site-surveyed positions and utilize support vector regression (SVR) to improve the resolution of the radio map. Existing researches calibrate the database mainly based on either distance-based path-loss models or interpolation methods. However, the paper of <cit.> has shown that it is difficult to adopt signal loss models in complex and time-varying environments.
Some existing works intend to modify the classical WkNN to improve localization accuracy by considering different factors <cit.>. <cit.> has proposed a feature-scaled WkNN, depending on the observation of different RSS values and distinguishability in geometrical distances. However, solely taking either RSS or variance <cit.> into consideration is potentially insufficient to precisely estimate the user position in sophisticated indoor wireless environments. Therefore, the authors in <cit.> have proposed a new weighted algorithm based on geometric distance of the RPs, and authors in <cit.> also proposed a restricted WkNN by considering indoor moving constraints to reduce spatial ambiguity. However, the RSS will also be affected by time-variation and human blockage, which is not jointly considered in the conventional schemes. Consequently, it becomes important to investigate the factors disturbing RSS values, which can result in acquiring faraway incorrect target locations with similar RSS.
Under such issues of complex indoor environments and the nonlinearity of radio map caused by time-varying effect, the state-of-the-art machine learning (ML) technique is capable of intelligently estimating user position and of dynamically establishing fingerprint database in an effective and efficient manner. The works in <cit.> utilize different linear regression methods to calibrate the received signals in order to alleviate the variation of RSS. The works <cit.> have proposed linear regression-enhanced methods to update the online radio map based on the observation of real-time received signals. In <cit.>, the authors have applied transfer learning to realize adaptive database construction with the aid of the arbitrarily deployed MPs. The authors in <cit.> have utilized the federated learning to address dynamic and heterogeneous data streams in indoor localization. In <cit.>, time-varying effect is taken into account with the aid of teacher-student learning. However, it requires laborious data collection as well as site-specific training. In <cit.>, they both require a much more complex neural network and training mechanism as well as high-dimensional data, which may lead to time-consuming process. Moreover, the existing works did not jointly consider the time-variation and human blockage effects in the online phase. We propose the SALC to address the above-mentioned problem, which will be introduced in the following section in detail.
§ SYSTEM ARCHITECTURE AND PROBLEM FORMULATION
<ref> illustrates the WiFi network scenario of indoor localization. In the proposed SALC system, we jointly consider RP clustering, MP deployment, adaptive fingerprinting database reconstruction, and user position prediction based on the RPs' time-varying RSS and geometrical information, which have not been considered in existing literature. The network is deployed with N_ap APs, N_rp RPs and N_mp MPs, where N_mp will be determined in the proposed SALC scheme in the next section. During the offline phase of fingerprinting, the measured RSS α_n,l^RP(t) is received from the l^th AP on the n^th RP at the t^th time instant, where the location of the corresponding RP is also recorded. The offline database containing collected RSS measurements from all RPs is represented as
α_l^RP(t) = [ α_1,l^RP(t), … , α_n,l^RP(t), … , α_N_rp,l^RP(t) ]^⊤,
where l ∈ [1,N_ap] and n ∈ [1,N_rp] are the indexes of APs and RPs, respectively, and ⊤ is defined as transpose operation. Note that α_n,l^RP(t) is estimated based on the path-loss model specified in <cit.>, which is expressed as
α_n,l^RP(t) = P_t - 20log(f)+P_dlog(d_n,l)-28,
where P_t is the transmit power, f is the operating frequency, P_d is the path-loss coefficient depending on indoor environments, and d_n,l is the distance between the n^th RP and the l^th AP. Note that α_n,l^RP(t) is in the unit of dB. The locations of MPs will be determined among RPs as cluster heads considering the similarity among RPs with respect to time, RSS, and geometric distance, which aim for monitoring the time-varying RSS at time t in its own cluster. The measurement of MP's RSS can be given by
α_l^MP(t) = [ α_1,l^MP(t), … , α_m,l^MP(t), … , α_N_mp,l^MP(t) ]^⊤,
where m ∈ [1,N_mp] is the index of MPs and α_m,l^MP(t) is estimated based on the path-loss model the same as that in (<ref>). In the online phase, the adaptive fingerprinting database will be generated by applying both real-time received RSS from MPs and offline established RSS database. Therefore, the user's position can be estimated by matching its real-time RSS α^U_l(t) to that in the generated fingerprinting database.
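For illustration, the offline database generation under this path-loss model can be sketched as follows; the AP/RP coordinates, transmit power, frequency, and path-loss coefficient are placeholder values, not those of the deployment considered later.

```python
import numpy as np

def rss_from_pathloss(p_t: float, f: float, p_d: float, d: np.ndarray) -> np.ndarray:
    """RSS predicted by the path-loss model alpha = P_t - 20 log10(f) + P_d log10(d) - 28."""
    return p_t - 20.0 * np.log10(f) + p_d * np.log10(d) - 28.0

# Offline database: one row per RP, one column per AP (illustrative geometry, in metres).
rp_xy = np.array([[1.0, 1.0], [4.0, 2.0], [7.0, 5.0]])   # RP positions
ap_xy = np.array([[0.0, 0.0], [8.0, 0.0], [4.0, 6.0]])   # AP positions
d_rp_ap = np.linalg.norm(rp_xy[:, None, :] - ap_xy[None, :, :], axis=-1)
alpha_rp = rss_from_pathloss(p_t=20.0, f=2400.0, p_d=-30.0, d=d_rp_ap)
```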
The flowchart of the proposed SALC system is shown in <ref>, which is composed of three sub-algorithms, including ROMAC, CODE, and CsLE. The main target of the SALC system is to deal with the time-variation issue in the conventional fingerprinting database, which results in incorrect radio map matching and consequently incurs a lower location estimation accuracy. By monitoring the environmental changes with the aid of MPs, the distribution of α^RP_n,l(t) can be well-estimated to establish the adaptive RP database as
p(α^RP_n,l(t_o))=p(α^RP_n,l(t)|α^MP_m,l(t_o)),
where t_o indicates the measurement time step in online phase. We need to obtain the RP's RSS value α^RP_n,l(t) that maximizes the distribution p(α^RP_n,l(t_o)) on each RP. Therefore, based on (<ref>), we can acquire more precise RSS on each RP which is formulated as
α^RP_l(t_o) = M(α^MP_l(t_o)) = argmax_α^RP_n,l(t) p(α^RP_n,l(t)|α^MP_m,l(t_o)),
where M ( α^MP_l(t_o) ): ℝ^N_mp→ℝ^N_rp indicates the mapping function from MPs to RPs. We can observe from (<ref>) that it leads to a non-linear optimization problem due to the sophisticated indoor environment with ample multi-paths and blockages. The proposed ROMAC in the upper-left of <ref> is adopted to select the MPs in (<ref>) to provide instant information of α^MP_m,l(t_o). Furthermore, the proposed CODE in the lower-left of <ref> is designed to generate the distribution of α^RP_n,l(t_o) by employing deep neural networks in order to solve the non-linear estimation problem in (<ref>). We can then obtain the estimated user position x̂(t_o)=[x̂(t_o), ŷ(t_o)]^⊤ according to the received RSS α^U(t_o) at its real position x=[x, y]^⊤, which is expressed as
x̂(t_o) = argmax_∀x p(x|α^U(t_o)),
where α^U(t_o)=[α^U_1(t_o),…,α^U_l(t_o),…,α^U_N_ap(t_o)]. The localization problem (<ref>) is equivalent to the following formula as
p(x|α^U(t_o)) = p(α^U(t_o)|x)p(x)/p(α^U(t_o)),
where the posterior distribution p(x|α^U(t_o)) can be derived based on Bayes' theorem <cit.>. The probability p(x) follows the uniform distribution, which therefore can be ignored due to its proportionality property. Moreover, the distribution of RSS α^U(t_o) is attainable because RSS is collected from the mobile device. Accordingly, maximizing likelihood p(α^U(t_o)|x) is equivalent to maximizing the radio map mapping function R(x): ℝ^N_rp→ℝ^2 which can be represented as
R(x) = argmax_α^U(t) p(α^U(t)|x) ≈∑_n=1^N_rpα^RP_n,l(t)δ(x-x_n),
where δ(·) is a delta function with value equal to 1 if x=x_n, and x_n is the location of the n^th RP. The radio map R(x) is approximated from the summation of updated parameter α^RP_n,l(t) from (<ref>).
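As a baseline for the matching step, a plain WkNN estimator over the reconstructed radio map can be sketched as follows; the proposed CsLE further rescales these weights with cluster and variance information, which is beyond this sketch.

```python
import numpy as np

def wknn_estimate(alpha_user: np.ndarray, radio_map: np.ndarray,
                  rp_xy: np.ndarray, k: int = 3) -> np.ndarray:
    """Baseline WkNN matching against the (reconstructed) radio map:
    weights are inverse RSS distances to the k closest RPs."""
    d = np.linalg.norm(radio_map - alpha_user[None, :], axis=1)   # RSS distance per RP
    idx = np.argsort(d)[:k]                                       # k nearest RPs
    w = 1.0 / (d[idx] + 1e-6)                                     # avoid division by zero
    return (w[:, None] * rp_xy[idx]).sum(axis=0) / w.sum()        # weighted RP centroid
```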
§ PROPOSED SALC SYSTEM
The proposed SALC system provides location estimation with the employment of fingerprinting and the received RSS values from MPs. However, the data collection will suffer from the non-linear time-variation issue during both offline and online phases. To deal with this issue, it becomes crucial to design a non-linear MP-aided mapping function in order to reconstruct the real-time radio map. Therefore, we propose three sub-algorithms, including ROMAC, CODE, and CsLE, in the SALC system to solve the above-mentioned issues, i.e., RP clustering, MP selection, fingerprinting database reconstruction, and user positioning. ROMAC is mainly designed to divide the RPs into clusters and select the corresponding MPs, whereas CODE is adopted to generate the distribution of α^RP_l(t) in order to solve the non-linear problem in (<ref>). The proposed CsLE scheme aims for estimating the user's position according to the adaptive database constructed by CODE and the clustering information obtained from ROMAC.
§.§ ROMAC
The proposed ROMAC scheme is designed based on an unsupervised learning approach, which can classify the unlabeled data into different groups based on attainable RSS features. As mentioned in Section <ref>, the existing methods for the MP deployment have not jointly considered the signal strength, map information and time-varying effect. In this subsection, we will introduce how jointly considers all important factors, including RSS measurements, skeleton-based shortest path (SSP), and time-variation characteristics to conduct the clustering process. Furthermore, deployment of MPs is also determined by selecting the cluster head. is designed based on the affinity propagation <cit.>, which only requires the similarity feature among RPs, without the need to pre-define the number of clusters. The self-defined similarity consists of three key factors including the amplitude difference of RSS among RPs, layout of RPs, and time-variance of RSS. The amplitude difference of RSS represents potential signal fading and blockages. The indoor layout provides the position knowledge for the SSP, whilst the time-variance of RSS reflects the time-varying effect of signals. Based on the observed received signal α_l^RP(t), the difference of RSS between the i^th and j^th RPs is defined as
d_RSS(i,j) = 1/N∑_k=1^N|α_i,l^RP(t_k)-α_j,l^RP(t_k) |,
where N is the number of time samples in the considered interval, t_k is the time index for k ∈ [1, N], and |x| denotes the absolute value of x. Notice that the difference between the i^th and j^th RPs' RSS is related to their Euclidean distance, where a smaller value of d_RSS(i,j) represents a higher similarity level between RPs.
<ref> shows an exemplified layout of an indoor environment, where the white area is walkable area and the black lines are the walls. The red points are RPs to be estimated and blue lines and vertices are formed by the SSP scheme adopted from <cit.> to acquire the spatial skeleton, which provides a compact map of the shortest paths among RPs. The vertices of SSP-skeletons are calculated based on generalized Voronoi diagram (GVD) technique <cit.> as an enhancement of conventional Voronoi diagram in order to partition a plane into several cell-like regions. The output of GVD contains edges and vertices as demonstrated in blue lines and points in <ref>, respectively. We define D as an SSP matrix which is given by D = [d_v,w], ∀ v,w∈ [1,N_v], where d_v,w is the shortest path from the v^th to w^th vertices, and N_v is the total number of vertices. Therefore, as shown in <ref>, the spatial distance of RPs can be derived from the shortest path between the i^th and the j^th RPs as
d_SSP(i,j) = d(g_i,v_i) + d_i,j + d(g_j,v_j),
where i,j ∈ [1,N_rp] are the indexes of RPs, v_i and v_j are respectively the nearest vertex of the i^th and the j^th RP. g_i and g_j are the positions of i^th and j^th RPs, respectively. d(g_i,v_i) denotes the Euclidean distance between i^th RP and vertex v_i, whilst d_i,j denotes the shortest path between the i^th and j^th vertices. Therefore, d_SSP(i,j) reflects the spatial relationship between the i^th and j^th RPs in an SSP-aided layout. Note that smaller value of d_SSP(i,j) indicates higher spatial relationship.
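Assuming the skeleton vertices and edges have already been extracted from the floor plan (e.g., via the GVD construction above) and stored as a weighted graph, d_SSP can be evaluated with an off-the-shelf shortest-path routine; the graph container and helper names below are illustrative.

```python
import numpy as np
import networkx as nx

def ssp_distance(g_i, g_j, skeleton: nx.Graph, vertex_xy: dict) -> float:
    """d_SSP(i, j) = d(g_i, v_i) + d_{i,j} + d(g_j, v_j), where `skeleton` holds the
    GVD vertices/edges with Euclidean edge weights and `vertex_xy` maps vertex -> (x, y)."""
    def nearest_vertex(p):
        return min(vertex_xy,
                   key=lambda v: np.linalg.norm(np.asarray(p) - np.asarray(vertex_xy[v])))
    v_i, v_j = nearest_vertex(g_i), nearest_vertex(g_j)
    d_ij = nx.shortest_path_length(skeleton, v_i, v_j, weight="weight")
    return (np.linalg.norm(np.asarray(g_i) - np.asarray(vertex_xy[v_i])) + d_ij
            + np.linalg.norm(np.asarray(g_j) - np.asarray(vertex_xy[v_j])))
```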
Moreover, the long-term difference of RSS time-varying between empty and crowded areas is considered. The difference between the i^th and the j^th RP is derived from
δ_i,j=|∑_l=1^N_ap∑_k=1^N[α_i,l^RP(t_k)-α_i,l^RP(t_e)]-[α_j,l^RP(t_k)-α_j,l^RP(t_e)]|^-1,
where α_i,l^RP(t_e) is the RSS measured at the i^th RP from the l^th AP, which is considered as a reference RSS obtained at the time instant t_e, e.g., RSS acquired from an empty area. The parameter δ_i,j reflects the distinct difference of the time-varying effect between the i^th and j^th RPs. Notice that a higher value of δ_i,j indicates that the RPs are less susceptible to the time-variation effect. For example, consider the case that δ_1,2 = 1/(11-1)=0.1 with [α_1,l^RP(t_k)-α_1,l^RP(t_e)]=11 and [α_2,l^RP(t_k)-α_2,l^RP(t_e)]=1; while δ_3,4 = 1/(2-1)=1 with [α_3,l^RP(t_k)-α_3,l^RP(t_e)]=2 and [α_4,l^RP(t_k)-α_4,l^RP(t_e)]=1. It can be intuitively observed that the area around RPs 1 and 2 with δ_1,2 =0.1 is more susceptible to the time-varying effect, e.g., a crowded room, compared to that for RPs 3 and 4 with δ_3,4 = 1, e.g., an empty room. The RPs that are highly influenced by time variation, i.e., with smaller δ_i,j, will be treated with larger similarity, since the proposed MPs are designed to resist the time-varying effect.
According to RSS, SSP and the time-varying effect in (<ref>), (<ref>) and (<ref>), respectively, the joint similarity s_i,j between the i^th and the j^th RPs can be formulated, which is also shown at top-left of <ref>, as
s_i,j = - ω·[d_RSS(i,j),d_SSP(i,j),δ_i,j]^⊤,
where ω = [ω_RSS, ω_SSP,ω_δ] represents the important weights of RSS, SSP and time-varying effect. Notice that we impose the negative sign on the factors in (<ref>) to reflect smaller values of those three factors resulting in larger joint similarity. We denote S_j as the joint similarity between the j^th RP and the others, which is defined as
S_j = [s_1,j,… s_i,j, …, s_N_rp,j],
where ∀ i j, and i,j ∈ [1, N_rp]. Furthermore, we define the preference to represent the self-similarity of the j^th RP as
s_j,j = M_d(S_j),
where M_d(·) is the median function taken over all elements of its argument. The preference indicates the probability of a specific RP becoming a cluster exemplar, i.e., being selected as the corresponding MP. Accordingly, a lower preference value of an RP indicates that it behaves similarly to the other RPs, which means that it possesses a lower chance to be selected as a cluster exemplar.
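A compact sketch of assembling the similarity matrix from the three factors, including the median-based preference on the diagonal, is given below; the weight vector ω is a design choice that would be tuned per deployment.

```python
import numpy as np

def joint_similarity(d_rss: np.ndarray, d_ssp: np.ndarray, delta: np.ndarray,
                     w=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Build the similarity matrix s_{i,j} = -(w · [d_RSS, d_SSP, delta]) and set
    the preference s_{j,j} to the median of the off-diagonal similarities of RP j."""
    w_rss, w_ssp, w_delta = w
    s = -(w_rss * d_rss + w_ssp * d_ssp + w_delta * delta)
    n = s.shape[0]
    for j in range(n):
        off_diag = np.delete(s[:, j], j)      # similarities of all other RPs w.r.t. RP j
        s[j, j] = np.median(off_diag)         # preference = self-similarity
    return s
```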
Based on the SSP-aided map and similarity definition, the proposed considers two types of messages among RPs including responsibility message r_i,j and availability message a_i,j ∀ i,j ∈ [1,N_rp]. The message will be exchanged iteratively in order to derive the prioritized responsibility and availability messages. The responsibility r_i,j is sent from the i^th RP to the j^th RP, which is defined as
r_i,j = s_i,j - max_j' ≠ j{a_i,j'+s_i,j'},
where max{·} function gives the maximum value among input elements. The availability a_i,j is then sent from the j^th RP to the i^th RP represented by
a_i,j =
min{0,r_j,j+∑_i' ≠ i,jmax{0,r_i',j}}, ∀ i≠ j,
∑_i' ≠ jmax{0,r_i',j}, ∀ i=j,
where max{·,·} will choose a larger element as the output and min{·,·} will choose a smaller one. (<ref>) and (<ref>) show that higher r_i,j representing that the j^th RP is more appropriate to serve as the candidate exemplar for the i^th RP, whereas higher a_i,j indicates that the i^th RP has higher tendency to select the j^th RP as its exemplar. The exemplar will be determined after both responsibility r_i,j and availability a_i,j matrix are updated. Consequently, the j^th RP with the maximum value of (r_i,j+a_i,j) will be selected as the cluster exemplar MP. The set of exemplars Ψ can be derived as
Ψ = {μ | μ = argmax_∀ i,j ∈ [1,N_rp]{r_i,j+a_i,j}}.
Notice that the number of MPs can therefore be determined as N_mp = rank(Ψ) according to the proposed ROMAC scheme. Furthermore, for the i^th RP, its exemplar MP is selected from the exemplar set Ψ as
E(i) = argmax_j ∈Ψ{r_i,j+a_i,j}.
Alternatively, the cluster of RPs whose exemplar MP is the m^th one is defined as the set
C(m)= {i|E(i)=μ},
where i ∈ [1,N_rp] is the index of an RP in the m^th cluster. Note that the parameter μ denotes the selected exemplar MP's actual RP index, whilst m is defined as the re-ordered index of μ for ease of representation in the following design. For example, consider the case that Ψ={μ | μ = 6,14,22,34}; the first cluster with m=1 becomes C(1)={i|E(i)=6}, while the second cluster with m=2 is C(2)={i|E(i)=14}. The iterations will be executed until the cluster set C(m) becomes unchanged. As illustrated in <ref>, both the MP set Ψ in (<ref>) and the cluster set C(m) in (<ref>) will be utilized as the inputs of the following CODE scheme. The concrete procedure of ROMAC is provided in Algorithm <ref>.
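The message-passing loop itself can be sketched as follows; the damping factor and iteration count are implementation choices rather than part of the update rules, and in practice the same clustering can also be obtained from sklearn.cluster.AffinityPropagation with a precomputed similarity matrix.

```python
import numpy as np

def affinity_propagation(s: np.ndarray, n_iter: int = 200, damping: float = 0.5):
    """Minimal sketch: exchange responsibilities r and availabilities a as in the
    update rules above until the exemplar (MP) set stabilises."""
    n = s.shape[0]
    r = np.zeros((n, n))
    a = np.zeros((n, n))
    for _ in range(n_iter):
        # Responsibility: r_ij = s_ij - max_{j' != j} (a_ij' + s_ij')
        tmp = a + s
        idx = np.argmax(tmp, axis=1)
        first = tmp[np.arange(n), idx]
        tmp[np.arange(n), idx] = -np.inf
        second = tmp.max(axis=1)
        max_excl = np.broadcast_to(first[:, None], (n, n)).copy()
        max_excl[np.arange(n), idx] = second
        r = damping * r + (1.0 - damping) * (s - max_excl)
        # Availability: a_ij = min(0, r_jj + sum_{i' not in {i,j}} max(0, r_i'j)),
        #               a_jj = sum_{i' != j} max(0, r_i'j)
        rp = np.maximum(r, 0.0)
        np.fill_diagonal(rp, r.diagonal())
        col = rp.sum(axis=0)
        a_new = np.minimum(0.0, col[None, :] - rp)
        np.fill_diagonal(a_new, col - rp.diagonal())
        a = damping * a + (1.0 - damping) * a_new
    crit = r + a
    exemplars = np.flatnonzero(np.diag(crit) > 0)               # MP set
    if exemplars.size == 0:                                     # degenerate fallback
        exemplars = np.array([int(np.argmax(np.diag(crit)))])
    labels = exemplars[np.argmax(crit[:, exemplars], axis=1)]   # exemplar E(i) of each RP
    labels[exemplars] = exemplars
    return exemplars, labels
```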
§.§ CODE
In the proposed CODE scheme, both the linear regression (CODE-LR) and neural network (CODE-NN) schemes are designed in <ref>. CODE addresses the expired fingerprinting database issue caused by time-varying effects. The radio map information can be predicted by adopting either regression or neural network models acting as the database generator. The proposed CODE-LR scheme acquires the distribution in (<ref>) considering the linearity property of signal strength, i.e., RSS is approximately inversely proportional to the distance between the transmitter and receiver. To further account for nonlinear effects caused by indoor signal fading or moving objects, the conceived CODE-NN scheme improves the accuracy of database prediction by extracting latent information from a deep neural network model.
§.§.§ CODE-LR
The proposed CODE-LR algorithm is designed based on SVR <cit.> to conduct online database construction. We intend to develop a predictive modeling technique investigating the relationship between the dependent and independent variables, which represent the measured RSS at the RPs and MPs, respectively. The regression models of the RPs will be trained considering the long-term RSS information in the offline database construction phase. During the online phase, the coefficients of the regression models are extracted to establish the fingerprint database for the online matching process. Note that the overhead of time-consuming offline database construction can therefore be reduced with the assistance of online adjustment to maintain a satisfactory positioning accuracy of the proposed SALC system.
In a time-varying environment, the RSS of each RP varies individually as the environment changes. However, with the aid of the cluster information C(m) and the deployed MPs Ψ obtained from ROMAC, we are capable of observing the real-time RSS values. We notice that the RPs in a certain cluster behave similarly to their corresponding cluster exemplar MP, where the similarity pattern can be acquired via the proposed ROMAC. Considering an RP n ∈ C(m) whose cluster head is MP m ∈Ψ, the estimated RSS α̂^RP_n,l(t_k) from the l^th AP can be calculated by
α̂_n,l^RP(t_k) = c_n,l^Rα_m,l^MP(t_k) + b_n,l^R,
where c_n,l^R and b_n,l^R are the coefficient and bias of the CODE-LR regression, respectively. The loss function is modeled as
J_n,l = 1/2∑_k=1^N(α̂_n,l^RP(t_k) - α_n,l^RP(t_k))^2.
To iteratively update the weights of regression, the stochastic gradient descent is adopted as
b_n,l^R = b_n,l^R - η∂ J_n,l/∂ b_n,l^R
= b_n,l^R - η[α̂_n,l^RP(t_k) - α_n,l^RP(t_k)],
c_n,l^R = c_n,l^R - η∂ J_n,l/∂ c_n,l^R
= c_n,l^R - η[α_m,l^MP(t_k)·(α̂^RP_n,l(t_k) - α_n,l^RP(t_k))],
where η is the learning rate. The iterations are executed until the coefficients remain unchanged. Notice that we build a regression model for every RP, and the complete coefficient and bias sets can be represented as c^R={c_n,l^R |∀ n ∈ [1,N_rp],∀ l ∈ [1,N_ap] } and b^R={ b_n,l^R |∀ n ∈ [1,N_rp],∀ l ∈ [1,N_ap] }. These two parameter sets will be obtained from the model training process of CODE-LR, as shown in the bottom-left part of <ref>, and saved in the offline model parameter database.
Furthermore, during the online phase, the online database generation will compute real-time RSS for RPs α̂_n,l^RP(t_o) as shown in the bottom-right part of <ref> as
α̂_n,l^RP(t_o) = c_n,l^Rα_m,l^MP(t_o) + b_n,l^R,
where t_o denotes the time instant at online phase. c_n,l^R and b_n,l^R are respectively acquired from c^R and b^R in model parameter database at the offline stage. Consequently, the RSS values from all RPs α̂_n,l^RP(t_o) are computed and stored in the adaptive RP database in order to reconstruct the real-time radio map, which will be utilized for location estimation.
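A minimal per-(RP, AP) sketch of the offline SGD fitting and the online reconstruction step is given below; the learning rate, epoch count, and initial coefficients are illustrative.

```python
import numpy as np

def train_code_lr(alpha_mp: np.ndarray, alpha_rp: np.ndarray,
                  lr: float = 1e-4, n_epochs: int = 200):
    """Fit alpha_RP ~ c * alpha_MP + b for one (RP, AP) pair with the SGD updates
    above; alpha_mp / alpha_rp are the offline RSS time series at the cluster-head
    MP and at the RP, respectively."""
    c, b = 1.0, 0.0
    for _ in range(n_epochs):
        for x, y in zip(alpha_mp, alpha_rp):
            err = (c * x + b) - y        # alpha_hat - alpha
            b -= lr * err
            c -= lr * err * x
    return c, b

def online_rp_rss(alpha_mp_now: float, c: float, b: float) -> float:
    """Online phase: reconstruct the RP's RSS from the MP's real-time reading."""
    return c * alpha_mp_now + b
```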
§.§.§ CODE-NN
The linear relationship between the RSS values of MPs and RPs considered in CODE-LR may be impractical in realistic systems due to complicated indoor wireless environments. Therefore, the CODE-NN scheme is proposed to perform nonlinear training and mapping. The training process is divided into pre-training and fine-tuning phases. In the pre-training phase, the loss function and back-propagation will be computed with a cluster-averaged target in order to take the cluster information into account and to represent various similarity levels between RPs. In the fine-tuning phase, the pre-trained parameters will be loaded as initial parameters and associated with the unaveraged target function in order to guarantee feasible prediction results.
First of all, data preprocessing, as shown in the corresponding block of <ref>, is applied in order to account for the time-varying effects. For an RP n ∈ C(m) and its cluster-head MP m ∈Ψ in the same cluster, we subtract the RSS from the l^th AP in the initial environment, α_n,l^RP(t_e), from that at time t_k, i.e., α_n,l^RP(t_k), to obtain the time-variant difference as
δ_n,l^RP(t_k) = α_n,l^RP(t_k) - α_n,l^RP(t_e).
Similarly, the difference of time-varying RSS obtained at the m^th MP δ^MP_m,l(t_k) can be defined as
δ_m,l^MP(t_k) = α_m,l^MP(t_k) - α_m,l^MP(t_e).
<ref> shows the network architecture of CODE-NN, including the pre-training and fine-tuning phases. The network is constructed with input and output layers of dimensions N_mp and N_rp, respectively. Five fully connected neural network (FCN) layers are chosen as the hidden layers with the corresponding numbers of network nodes. Notice that the RSS differences δ^MP_m,l(t_k) acquired from all N_mp MPs serve as the inputs to the FCN so that the learning mechanism operates in an integrated fashion. The model output δ̂^RP_n,l(t_k) can be computed by forward propagation through the hidden layers, shown as the green part in <ref>, which is represented as
δ̂^RP_n,l=f^(5)(∑_m=1^D_5… f^(1)(∑_m=1^D_1 w^(1)_n,mδ^MP_m,l+b^(1)_n)+b^(5)_n),
where f^(h)(·) is the activation function and D_h is the number of nodes of the h^th hidden layer for h=1 to h=5. The dimension of hidden layer's weight w^(h)_n,m is D_h× D_h-1 for h=2 to 5 and D_1× N_mp for h=1, whilst that for the bias b^(h)_n is D_h×1 for all h.
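The five-hidden-layer FCN can be sketched, for instance, with PyTorch. The layer widths follow the {64, 256, 512, 128, 64} setting quoted later in the simulation section; the class and variable names and the explicit output layer are illustrative assumptions, not the authors' code.

```python
import torch.nn as nn

class CodeNN(nn.Module):
    """Maps the N_mp MP RSS differences to the N_rp RP RSS differences
    through five fully connected hidden layers with ReLU activations
    (a sketch of the architecture described above)."""

    def __init__(self, n_mp, n_rp, widths=(64, 256, 512, 128, 64)):
        super().__init__()
        layers, d_in = [], n_mp
        for d_out in widths:                      # hidden layers h = 1..5
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
            d_in = d_out
        layers.append(nn.Linear(d_in, n_rp))      # output layer, one node per RP
        self.net = nn.Sequential(*layers)

    def forward(self, delta_mp):                  # delta_mp: (batch, n_mp)
        return self.net(delta_mp)                 # predicted delta_rp: (batch, n_rp)
```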
In the pre-training phase, the goal is to provide initial parameter of every node, which can help the model find the solution rapidly, i.e., to reduce the iterations of training process. After the data preprocessing in <ref>, the RSS difference δ^RP_n,l(t_k) at the k^th time instant from (<ref>) are averaged within each cluster since the RPs in the same cluster have similar trend in time-variation. With the aid of cluster information Ψ and C(m), the averaged target can be represented as
δ̅^RP_n,l(t_k) = 1/N_m∑_n'∈ C(m)δ_n',l^RP(t_k),
where N_m is the number of RPs in the m^th cluster and n' runs over the RPs in C(m). Compared to utilizing δ_n,l^RP(t_k), choosing the cluster-averaged δ̅^RP_n,l(t_k) in (<ref>) as the target in the pre-training phase can reduce the computational complexity of the loss function. Therefore, the loss function is designed between the target δ̅^RP_n,l(t_k) and the output δ̂^RP_n,l(θ_p^τ;t_k) as
L_γ(δ̅^RP_n,l(t_k),δ̂^RP_n,l(θ_p^τ;t_k))
= 1/2[δ̅^RP_n,l(t_k)-δ̂^RP_n,l(θ_p^τ;t_k)]^2,  if |δ̅^RP_n,l(t_k)-δ̂^RP_n,l(θ_p^τ;t_k)| ≤γ,
= γ|δ̅^RP_n,l(t_k)-δ̂^RP_n,l(θ_p^τ;t_k)| - 1/2γ^2,  otherwise,
where δ̂^RP_n,l(θ_p^τ;t_k) denotes the model output predicted by the parameter at the τ^th training iteration during the pre-training phase. We select Huber loss <cit.> as the loss function in order to eliminate the effects of outliers by setting the threshold γ. The gradient descent method is utilized to search the optimum of the loss function during back propagation as shown in <ref>. The gradient can be updated iteratively as
θ_p^τ+1=θ_p^τ-η·∇ L_γ(δ̅^RP_n,l(t_k),δ̂^RP_n,l(θ_p^τ;t_k)),
where θ_p^τ is the parameter including the weight and bias during the pre-training phase, η is learning rate, and ∇ L_γ(·,·) is the first-order derivative of the loss function.
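For reference, the piecewise Huber objective used in pre-training can be written out directly. This NumPy sketch only evaluates the loss (in practice the framework's autograd or the analytic gradient would drive the update), and the default threshold γ = 1 matches the value adopted later in the simulations.

```python
import numpy as np

def huber_loss(target, pred, gamma=1.0):
    """Huber loss: quadratic for residuals within +/-gamma, linear beyond,
    so outlier RSS fluctuations do not dominate the gradient."""
    r = np.asarray(target) - np.asarray(pred)
    quad = 0.5 * r ** 2
    lin = gamma * np.abs(r) - 0.5 * gamma ** 2
    return np.where(np.abs(r) <= gamma, quad, lin)
```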
After the pre-training phase, the following fine-tuning phase is performed, as shown in both Figs. <ref> and <ref>. The parameter θ_p is saved and treated as the initial parameter for the fine-tuning process, i.e., θ_f^0 = θ_p^T, where θ_f^0 and θ_p^T represent the initial network parameter during the fine-tuning phase and the network parameter updated at the last iteration T of the pre-training phase, respectively. Notice that the architecture of the neural network and the number of nodes in the fine-tuning phase are designed to be the same as those in the pre-training phase, where the loss function in fine-tuning can be acquired by replacing the averaged target δ̅^RP_n,l(t_k) in (<ref>) with δ_n,l^RP(t_k) for each RP. Note that we also update the weights and biases of the neural network via the gradient descent method as in (<ref>) by substituting θ_p^τ with θ_f^τ. With the initial parameters acquired from the pre-training phase, the gradient descent can converge faster to find the optimum solution.
After completion of pre-training and fine-tuning phases, the parameter θ_f will be saved into the model parameter database to reconstruct the real-time radio map α̂^RP_n,l(t_o) at the online phase. As shown in the lower-right of <ref>, the time-variation δ̂^RP_n,l(t_o) is predicted based on the online monitored MP's RSS difference δ^MP_m,l(t_o) and the trained model parameter θ_f via (<ref>). Once the network completes the calculation in online phase and outputs the prediction of δ̂^RP_n,l(t_o), the element stored in the adaptive RP database in the online phase can be represented as
α̂_n,l^RP(t_o) = δ̂^RP_n,l(t_o)+α_n,l^RP(t_e).
Note that α̂_n,l^RP(t_o) represents the real-time adaptive radio map, which will be used to estimate the user position by the following algorithm. Different from the linear mapping of CODE-LR, CODE-NN can handle nonlinear radio signal propagation effects, such as human signal blocking, through model training and adaptation.
§.§ CsLE
In order to accurately estimate the user position in the online phase, the proposed CsLE scheme takes into account the signal variance caused by time-varying effects. As shown in the top-right part of <ref>, CsLE is implemented based on the real-time RSS α_l^U(t_o) collected from the user, the cluster information with RP n ∈ C(m) acquired from ROMAC, and the adaptive RSS value α̂_n,l^RP(t_o) of the n^th RP obtained via CODE. The modified Euclidean distance (MED) of the online received RSS values from the l^th AP between the n^th RP and the user is derived as
d_n,l(t_o) = ∑_m=0^M[α̂_n,l^RP(t_o)-α_l^U(t_o)]/σ_m(α̂_n,l^RP(t_o)),
where σ_m(·) is the weight scaling function that gives real-time estimated standard deviation of RSS from the n^th RP within m^th cluster. Higher value of σ_m(α̂_n,l^RP(t_o)) indicates that the n^th RP's RSS from the l^th AP is less reliable among all RPs within C(m). Consequently, the corresponding weight w_n,l(t_o) can be chosen as the inverse of estimated MED of each RP n as
w_n,l(t_o) = 1/d_n,l(t_o).
The total set of weights can be defined in a sorted set as w_l={w_n,l|w_n,l > w_n',l, n<n' ∀ n, n' ∈ C(m)}, whilst we select the indexes with the first k largest weights to be w_k,l = {w_1,l,…,w_n,l,…,w_k,l | n ∈w_l }. Therefore, the estimated user position x̂(t_o) = [x̂(t_o), ŷ(t_o)] acquired at time instant t_o can be computed by
x̂(t_o) = ∑_l∈ N_ap∑_w_n,l∈w_k,l w_n,l·x_n/(N_ap·∑_w_n,l∈w_k,l w_n,l),
where x_n is the geometric location of the n^th RP. The proposed CsLE scheme incorporates both the cluster information from ROMAC and the RSS relationship between the user and the RPs. By leveraging the cluster information, RPs having similar RSS values but located farther away are excluded from the top-k weighting elements, which enhances the accuracy of location estimation.
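A sketch of the weighted estimate, under one reading of the estimator above (absolute RSS differences, per-AP normalization of the top-k weights, cluster bookkeeping omitted); the array names and tie-breaking details are illustrative assumptions.

```python
import numpy as np

def csle_estimate(rss_user, radio_map, sigma, rp_xy, k=3):
    """Cluster-scaled weighted k-NN position estimate (sketch).

    rss_user  : (n_ap,)       online RSS measured by the user
    radio_map : (n_rp, n_ap)  adaptive RP fingerprints from CODE
    sigma     : (n_rp, n_ap)  cluster-based RSS scale (std. dev.) per RP/AP
    rp_xy     : (n_rp, 2)     RP coordinates
    """
    med = np.abs(radio_map - rss_user[None, :]) / sigma   # modified distance
    weights = 1.0 / np.maximum(med, 1e-9)                 # inverse-distance weights
    n_ap = rss_user.size
    est = np.zeros(2)
    for l in range(n_ap):
        top = np.argsort(weights[:, l])[-k:]              # k largest weights
        w = weights[top, l]
        est += (w[:, None] * rp_xy[top]).sum(axis=0) / w.sum()
    return est / n_ap                                     # average over APs
```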
§ PERFORMANCE EVALUATION
§.§ Simulation Results
We first evaluate the performance of the proposed SALC system, including the ROMAC, CODE, and CsLE schemes, via simulations. We employ Wireless InSite, a widely-adopted simulation software, to emulate ray-tracing based indoor wireless propagation. We consider a two-room scenario with a room size of 8 × 10 m^2 each, as shown in <ref>, where three APs (marked as large green squares) are deployed at the room corners operating at the frequency of 2.4 GHz. There are 56 RPs (marked as small red squares) evenly distributed with an inter-RP distance of 1.2 m<ref>[1], whilst 89 random test points (TPs) are set in both rooms. (Footnote [1]: In our proposed system, the RPs are placed in a uniform grid pattern. We empirically find in several experimental trials that the RP locations do not substantially affect the performance of the proposed scheme. Optimizing the RP deployment can potentially enhance the performance, as shown in some existing works; however, optimal RP deployment requires a much more complex scheme, which is out of the scope of this paper and is left as future work.) As depicted in <ref>, two different cases are considered: (a) both rooms are empty, and (b) the left part of the top room is crowded with people and the bottom room is vacant. We sample 10 time slots to generate different channel conditions for each RP, and the weights in (<ref>) are chosen as ω = [1/3, 1/3, 1/3]. Furthermore, the numbers of nodes in the hidden layers of the FCN are designed as { 64, 256, 512, 128, 64 }, as shown in <ref>. The learning rate η and threshold γ in CODE-NN are set to 0.1 and 1, respectively. The rectified linear unit (ReLU) is selected as the activation function f^(h)(·) in (<ref>). The number of nearest neighbors is set to k=3 in the CsLE scheme. The volumes of pre-training and training data are 1120 and 2240 samples, which correspond to 20 and 40 samples per RP, respectively. Table <ref> summarizes the parameter settings in the simulations.
Table <ref> elaborates the computational complexity of CODE-LR and CODE-NN compared to the existing method of CSE <cit.>. CSE has the highest complexity order of 𝒪(N^3 × N_rp× N_ap×κ), where N indicates the data size and κ stands for the kernel size. Note that N^3 comes from the cross-comparison of input in support vector regression mechanism, whereas κ will depend on what kind of kernel is adopted for cross-comparison. The proposed CODE-LR scheme has the lowest computational complexity order of 𝒪(N_rp× N_ap), which is only proportional to the network deployment size N_ap as well as measuring points N_rp. Since the input feature of CODE-LR requires only RSS from the MP, the dimension of input feature becomes N=1, which therefore has a lower complexity order than CSE. On the other hand, CODE-NN possesses a moderate computational complexity order of 𝒪(N × N_mp×∏_i=1^N_LK_i^2× N_rp× N_ap), where additional complexity comes from neural network layers N_L and the corresponding neurons denoted by K_i for the i-th layer.
The generated SSP-skeleton is shown in <ref>, where the black lines are skeletons and the black dots are vertices. Note that the red crosses represent the RPs to be tested. In <ref>, the clustering result shows that the RPs are divided into 6 clusters when employing only the RSS difference d_RSS(i,j) in (<ref>) among RPs. However, considering only the RSS difference leads to the two clusters in the bottom room overlapping each other, with some RPs belonging to a cluster located farther away. <ref> shows the clustering result when only adopting the map information of each RP, i.e., using the shortest path d_SSP(i,j) in (<ref>), where the generated clusters do not overlap with each other due to the characteristic of the SSP. Nonetheless, this clustering result is unable to reflect wireless signal propagation. <ref> demonstrates the clustering result when utilizing the time-varying RSS difference δ_i,j in (<ref>). It reveals that taking only the time-variation into account generates just 2 clusters, causing them to overlap and even cross the two-room partition. By implementing the proposed ROMAC scheme, which considers all factors in (<ref>), 6 clusters are automatically generated from all RPs, as shown in <ref>, where the cluster heads are chosen as the locations of the MPs for the corresponding clusters. The benefit of considering all three factors is that the clusters do not largely overlap with one another.
<ref> shows the performance evaluation under different parameters of the ROMAC scheme. <ref> demonstrates the resulting number of clusters and the localization error under different values of the self-similarity preference s_j,j. We evaluate the proposed CODE-LR and CODE-NN schemes in terms of localization error with the corresponding number of clusters from ROMAC. It can be observed that a smaller preference value generates fewer clusters. However, a small number of clusters cannot reconstruct the database efficiently, which leads to a higher localization error in both schemes. The lowest error is reached at the preference value of s_j,j=M_d(S_j)=-1.18 in (<ref>), which is chosen as our preference value when the number of MPs is not limited. <ref> compares the cumulative distribution function (CDF) of localization error in the crowded environment using different similarity measures, i.e., the difference of RSS amplitude, the SSP for map information, the time-variation of RSS, and the combined factors adopted in ROMAC to choose MPs among the clustered RPs. Note that the databases in these four cases are reconstructed adaptively, whilst the curve named original database indicates the utilization of the fingerprint database collected in the empty environment. It can be seen that the proposed combined-factor clustering with the adaptive database achieves the lowest localization error, outperforming the other methods, which suffer from time-varying signal blockages and reflections. Again, this can be further emphasized with the aid of Fig. <ref>. As illustrated in Figs. <ref> and <ref>, using only d_SSP for clustering results in a separation based on geometric relationships, which neglects other crucial factors such as d_RSS capturing the impact of path loss caused by indoor environments. On the other hand, δ_i,j takes into account dynamic signal strength fluctuations caused by the presence of people, as shown in Fig. <ref>. Disregarding these factors can lead to significant deviations in the estimated positions. To elaborate further, the simulations presented in this study provide a simplified scenario for evaluating the performance of the proposed clustering algorithm; disregarding these critical factors can have even more severe consequences in real-world experiments, where interference and channel variations further affect the performance of positioning systems. Therefore, it is essential to consider these factors when designing and implementing positioning systems in practical scenarios.
<ref> shows the CDF of localization error using different learning rates η in CODE-NN to reconstruct the online database. The lines from top to bottom represent η= {0.001, 0.01, 0.1, 1 } and the original fingerprint database, respectively. The result shows that the model has the largest error when η = 1, since a large learning rate can cause the loss function to diverge, whilst the database adopting η=0.001 is inefficient, since model over-fitting is induced during data training. Moreover, the database reconstructed under η=0.1 allows the loss function to converge properly and avoids the over-fitting issue, which provides better performance. Hence, we select η=0.1 as the learning rate in our scheme in the following simulations. <ref> shows the CDF of localization error when adopting different thresholds γ in the loss function of the pre-training in (<ref>) and fine-tuning phases to re-establish the database. The curves include γ= {0.01, 0.1, 1, 10 } and the original database. The localization error shows that the loss function is unable to filter outliers when γ is set to γ=10, which makes the reconstructed database unusable. However, the loss function treats every output as an outlier when the threshold is set to γ=0.01, which leads to erroneous back-propagation of parameters. The CDF of localization error when γ=1 outperforms the others, since this setting successfully filters out the outliers. Therefore, we set the threshold to γ=1 in our proposed scheme.
<ref> shows the predicted RSS errors at different RPs when adopting the original fingerprint database, CODE-LR, and CODE-NN. In all three subplots, the two-dimensional coordinates [x,y] are utilized to represent the RP locations described in <ref>. The predicted RSS error is calculated as ε_n,l(t_o) = |α̂^RP_n,l(t_o) -α^RP_GT(t_o)|, where α^RP_GT(t_o) represents the ground truth RSS at t_o, and l=1 is adopted by using the RSS from the AP located at the upper-left corner of <ref>. It can be observed that both the CODE-LR and CODE-NN methods can reduce the RSS error of the original database by reconstructing the adaptive RP database. The largest error can be seen at the RP location [x,y]=[2,7] in <ref>, which indicates that the area crowded with people causes higher RSS errors, since the original database collects RSS under an empty scenario. The result in <ref> shows the regression method in CODE-LR. It can reduce most of the RSS errors but has difficulty suppressing the peak error due to the linear operation of the regression. <ref> illustrates that the proposed CODE-NN can reconstruct the radio map with very low RSS errors thanks to its nonlinear mapping in deep neural networks, which outperforms CODE-LR. In addition to the propagation decay and cluster information used in CODE-LR, CODE-NN considers the time-varying effects in different environments, which achieves the lowest errors among the schemes.
<ref> shows the localization errors obtained by adopting the CsLE scheme in the SALC system under different databases and k values as indicated in (<ref>). Note that the original and true databases indicate using the RSS collected from the empty and crowded environments, respectively, whilst the crowded one is referred to as the true database, since it is the realistic case to be dealt with. The result illustrates that using the original database yields the largest location error under different k values, while the databases reconstructed by CODE-LR and CODE-NN can significantly reduce the error. Additionally, using the CODE-reconstructed databases is close to adopting the ground truth database, which means we can successfully reconstruct the radio map. It is also shown in the figure that the localization error decreases with larger k when k≤3, whereas there is no benefit for k larger than 3 in all four cases. The reason is that a larger k takes RPs with lower weights into consideration, which are irrelevant to the user's location. Meanwhile, a smaller k may cause the chosen RPs to contain insufficient information, which leads to an inaccurate predicted location. Accordingly, we select k=3 in CsLE in the following simulations and experiments.
<ref> shows the performance comparison between the proposed CsLE and the conventional WkNN scheme under k=3 with four different types of database. It can be seen that the location errors estimated based on the proposed CODE-LR and CODE-NN databases are lower than that from the original database, which suffers from both time-variation and noise. On the other hand, a comparably smaller error is obtained from the true crowded database, which is mostly caused by the noise in the RSS and the RP distances. By taking the time-varying effect into consideration, CODE effectively reconstructs the online database and achieves indoor positioning performance similar to that from the true database. Meanwhile, <ref> also reveals that CsLE outperforms WkNN with all four types of database. For estimating the user's location, WkNN may choose RPs with similar RSS values even if they are located far away from the user location, whilst our proposed CsLE filters those outlier RPs by adopting the cluster-based feature scaling weight in (<ref>).
<ref> illustrates the performance comparison of the CDF of localization errors among CsLE with the original and true databases, SALC with CODE-NN and CODE-LR, and the existing schemes from <cit.> and <cit.>. Note that RWKNN adopts the original database and CSE reconstructs the database via conventional WkNN. We can observe that the RWKNN method shows the worst performance, with a localization error of around 2 m at a CDF of 0.5, since it does not consider time-varying effects. Although the original database still encounters the time-varying problem, the CsLE scheme can effectively mitigate the problems of both signal fluctuation and selecting farther RPs as neighbor nodes. Additionally, the localization error decreases as the fingerprinting database is updated. Furthermore, our proposed SALC scheme achieves the lowest localization error of approximately 1 m at a CDF of 0.5 by dynamically generating the fingerprinting database in time-varying environments.
§.§ Experiment Results
Experiments have been conducted to verify the effectiveness of the SALC system in realistic environments. <ref> shows the testing field of the experiments, including both the classroom and the corridor. We consider break and in-class time to represent the empty and crowded cases, respectively. The size of the experimental scene<ref>[2] is 9.65 × 10.65 m^2, where 42 RPs are distributed with an inter-RP distance of 1.2 m for database collection, and 26 TPs are determined to evaluate the positioning accuracy. (Footnote [2]: If a new scene has a different layout, it will be necessary to redeploy the RPs to new locations. This process involves re-clustering the RPs and re-selecting the MPs. Consequently, data from different scenarios in the new scene will need to be collected to adapt the ROMAC and CODE algorithms. As a future extension, transfer learning and domain adaptation approaches can be leveraged to address the challenges of retraining the system for different layouts.) We use the ASUS Zenfone mobile device to collect RSS at each RP from the 3 APs, which are ASUS RT-AC66U routers operating at 2.4 GHz. We use the same model of mobile device to serve as MPs during the online phase, which means that there is no additional functional requirement or overhead for deploying MPs in our experiments. The server employs these RSS values to generate an updated radio map and to estimate the user location, which is then transmitted back to the mobile device. Since most of the training and computing are conducted at the server side, the mobile device collecting data and receiving positioning results performs negligible computation during the process. We collect 100 samples at each RP in both break and in-class time, which takes around 1 hour in each case. Note that the crowded true database is not feasible to collect in practical scenarios, and we establish it mainly to serve as the ground truth for performance comparison. The amounts of pre-training and training data are 4200 and 8400 samples, respectively.
The other system parameters are chosen to be the same as those in simulations.
<ref> illustrates the layout and the deployment of APs (black triangles), RPs (red crosses), and TPs (blue points), whilst <ref> shows the result of adopting the proposed ROMAC algorithm. Considering the hardware limitations in a practical scenario, we adjust the preference value to s_j,j=-5 in (<ref>) such that the resulting number of clusters is limited to 3. It can be seen from <ref> that all RPs in the corridor are in the same cluster since the SSP information in (<ref>) is taken into account in ROMAC. The RPs in the testing classroom are divided into two clusters due to human blockage effects, which reflect the time-varying RSS considered in ROMAC.
Figs. <ref> and <ref> show the positioning results of the proposed SALC system in a real-time environment. The cases of using the empty and crowded databases indicate that the RSS data are collected during break time and in-class time, respectively. <ref> shows the results for the noise-free environment, where the mean error of using the crowded database is the largest (2.83 m) among the databases, since the RSS from the in-class-time database suffers from human blocking effects. Note that the empty database, with a localization error of 2.16 m, is treated as the ground truth database during break time. It can be observed that the errors from the two proposed CODE schemes are reduced to 2.31 and 2.33 m compared to that acquired from the crowded database during in-class time. Furthermore, <ref> illustrates the experimental results under the noisy environment, where the mean localization error of using the empty database is the largest (2.68 m), since that database was not collected with signal-blockage features. Note that the crowded database is treated as the ground truth in this case, resulting in a positioning error of 2.11 m. It can be seen that the proposed CODE schemes achieve errors of 2.25 m and 2.11 m, the latter matching the smallest error from the ground truth crowded database.
<ref> shows the overall performance comparison of the proposed SALC system with the existing RWKNN <cit.> and CSE <cit.> methods, where the mean localization errors are averaged over test data from both the noise-free and noisy environments. The bars from left to right are RWKNN utilizing the empty and crowded databases, traditional WkNN with the database constructed by CSE, the proposed CsLE with the empty and crowded databases, and CsLE adopting CODE-LR and CODE-NN to reconstruct the databases. It can be seen that CsLE outperforms conventional RWKNN under both the empty and crowded databases. Note that the localization errors are higher with the crowded than with the empty database for both schemes. The main reason is that RSS collected in the crowded case can possess high signal variations due to human movement in realistic scenarios, which makes establishing a reliable crowded database infeasible. Furthermore, the proposed CODE-LR and CODE-NN methods can improve the localization performance by adaptively adjusting and reconstructing the databases for different environments. Notice that the CSE method constructs the database only from current information and gives the worst performance due to the complicated multipath effects. Meanwhile, CODE-NN performs better than CODE-LR since it considers the nonlinear features of the time-varying effects between the noise-free and noisy environments.
§ CONCLUSION
In this paper, we have designed the SALC system for indoor positioning, including the ROMAC, CODE, and CsLE sub-algorithms. ROMAC is designed to simultaneously solve the RP clustering and MP deployment in order to monitor the time-varying problem and reconstruct the adaptive fingerprinting database. CODE establishes the adaptive online database by employing linear regression and neural network techniques based on the cluster information from the ROMAC scheme. At last, CsLE predicts the user position by matching the user's real-time RSS with the adaptive database based on the cluster information and the predicted signal variation. Although we can reconstruct the radio map precisely, there still exist real-time uncertainties in the experiments, such as multipath, noise, and interference during in-class and break time, which limit the performance from theory to implementation. In future work, we will consider more complicated environmental scenarios for further performance enhancement. Nevertheless, the merits of the proposed SALC system can still be observed from both simulations and experiments, improving the performance of fingerprinting-based localization under practical time-varying environments.
|
http://arxiv.org/abs/2307.10059v1 | 20230714090400 | Searching for additional Higgs bosons at ATLAS | [
"Anna Kaczmarska"
] | hep-ex | [
"hep-ex",
"hep-ph"
] |
Searching for additional Higgs bosons at ATLAS
Anna Kaczmarska
on behalf of the ATLAS Collaboration
Institute of Nuclear Physics PAN, Cracow, Poland
Extending the Higgs sector by introducing additional scalar fields to account for the electroweak symmetry breaking can provide solutions to some of the questions the Standard Model fails to answer.
Introducing additional scalar fields leads to extra Higgs-like particles, which can be either neutral or charged.
These proceedings present some recent direct searches for additional Higgs bosons, using proton–proton collision data at 13 TeV collected
by the ATLAS experiment in Run 2 of the LHC.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
Copyright [2023] CERN for the benefit of the ATLAS Collaboration. CC-BY-4.0 license.
§ INTRODUCTION
Since the discovery of the Higgs boson by the CMS and ATLAS Collaborations in 2012, its properties have been measured to increasing precision.
So far, an excellent agreement with the predictions for a Standard Model (SM) Higgs boson is observed.
However, the SM, while highly successful, is not considered to be a complete theory as it is not capable of explaining
some of the phenomena seen in nature.
Extending the Higgs sector by introducing additional scalar fields to account for the electroweak symmetry breaking can provide solutions to some of the questions the SM fails to answer.
Introducing additional scalar fields leads to extra Higgs-like particles, which can be either neutral or charged.
These proceedings give examples of some recent direct searches for additional Higgs bosons, using proton–proton collision data at 13 TeV collected
by the ATLAS experiment <cit.> in Run 2 of the LHC.
§ HEAVY NEUTRAL HIGGS BOSON SEARCHES
tt̅H/A → tt̅ tt̅ in the multilepton final state
This search for a new heavy scalar or pseudo-scalar Higgs boson (H/A) produced in association with a pair of top quarks,
with the Higgs boson decaying into a pair of top quarks (H/A → t t̅) <cit.> is motivated by type-II
two-Higgs-doublet models (2HDM) <cit.>.
The tt̅ H/A production mode provides a promising channel as inclusive searches for H/A → t t̅
are challenging due to destructive interference with the SM top pair production.
The analysis targets a final state with exactly two leptons with same-sign electric
charges or at least three leptons.
A boosted decision trees classifier is trained to distinguish the signal from the SM background.
No significant excess of events over the SM prediction is observed and thus upper limits are placed on
the tt̅ H/A production cross-section times the branching ratio of H/A → t t̅ as a function
of m_H/A, Figure <ref> (Left).
Heavy scalar decays in final states with multiple leptons and b-jets
The presented analysis targets a search for heavy scalars with flavour-violating decays in final states with multiple
leptons and b-jets <cit.>.
It is motivated by general 2HDM without Z2 symmetry, where the heavy Higgs bosons feature flavour changing neutral Higgs couplings.
Only couplings involving top quarks and two other up-type quarks (ρ_tt, ρ_tc, and ρ_tu) are considered.
The final states of interest are same-sign top quark pair, three top quarks, or four top quarks.
A deep neural network is trained to discriminate the signal from the backgrounds.
A mild excess is observed over the SM expectation corresponding to a local significance of 2.81σ
for a signal with m_H = 1000 GeV and ρ_tt = 0.32, ρ_tc = 0.05, and ρ_tu = 0.85.
Exclusion limits at 95% confidence are set on the mass and couplings of the heavy Higgs boson.
An observed significance for m_H = 1000 GeV as a function of the three couplings is shown in Figure <ref> (Right).
Flavour-changing neutral-current t → qX (q=u,c) → qbb
One of the simplest extensions to the SM is the Froggatt-Nielsen mechanism <cit.>, which introduces a non-SM Higgs field, X, with flavour charge,
the so-called flavon.
The presented analysis <cit.> is a generic search for top quark pair production where one of the top quarks decays to a light scalar particle X,
with X → b b̅, and an up-type quark (u or c).
Events are categorised according to the multiplicity of jets and b-jets, and a neural network is used to discriminate between signal and background processes.
No significant excess above the expected SM background is found and the 95% CL upper limits on B(t → u/cX) × B(X → b b̅)
are obtained as presented in Figure <ref>.
A local excess of 1.8σ is seen in the t → uX channel at m_X = 40 GeV.
Also, a roughly 2σ excess can be seen in the t → cX observed limit over almost the entire range of m_X.
This excess is not compatible with the presence of a scalar particle X, which would show up as a narrower, resonance-like excess.
§ CHARGED HIGGS BOSON SEARCHES
Light H^±→ cb produced in t → H^±b decays
The search focuses on top quark pair production, where one top quark decays into a leptonically decaying W boson and
a b-quark, and the other top quark may decay into a H^± boson and a b-quark <cit.>.
The H^± boson decays into a b- and a c-quark are considered with m_H^±= 60-160 GeV.
The final state consists of a single lepton, high multiplicity of jets and three or more b-jets.
A mass parametrised neural network is used to separate signal from background.
In the absence of a significant excess of data events above the SM expectation, exclusion limits at 95% CL
on the product of branching fractions B(t → H^±b) × B(H^±→ cb) are set in function of
m_H^± as presented in Figure <ref> (Left).
A 3σ local (2.5σ global) broad excess is observed for m_H^±= 130 GeV.
The excess is consistent with the H^± resolution degraded by the ambiguity in choosing the correct b-jet
to reconstruct H^± mass.
H^±±→ l^± l^± decays
Various beyond-SM theories, for example left-right symmetric models (LRSMs) <cit.> and the Zee–Babu neutrino mass model <cit.>, predict doubly charged bosons.
predict doubly charged bosons.
At the LHC, they would be mainly produced via Drell–Yan production.
The presented search for H^±± <cit.> focuses on a small vacuum expectation value of the Higgs triplet, where only leptonic decays
of H^±± are relevant.
The analysis searches for same-charge lepton pairs in final states with two, three or four leptons.
The discriminant variable used in the final fit is invariant mass of the leading lepton pair.
In absence of a significant deviation from expectations, 95% CL limits are derived.
They are presented in Figure <ref> (Right).
§ EXOTIC DECAYS OF HIGGS BOSON (125 GEV) SEARCHES
Dark photons from Higgs boson decays via ZH production
Dark photons are searched for in the decay of Higgs bosons H →γγ_d produced through
the Z(→ l^+ l^-)H production mode <cit.>.
In presented analysis the vector portal is considered where the interaction results from the kinetic mixing between
one dark and one visible Abelian gauge boson.
Both massless and light dark photons γ_d (up to 40 GeV) are considered.
The final state of interest consists of two same-flavour, opposite-charge light leptons, an isolated photon and
missing transverse momentum from undetected γ_d.
A boosted decision trees classifier is used to separate signal from background.
As no excess is observed with respect to the SM prediction, an observed
upper limit on the branching ratio BR(H →γγ_d) of 2.28% is set at
95% CL for massless γ_d.
§ CONCLUDING REMARKS
Searches for additional Higgs bosons are strongly motivated by theory.
The ATLAS Collaboration has performed a wide range of such searches, covering a large variety
of different production and decay modes.
A full review of these searches is beyond the scope of these proceedings and the reader is encouraged
to visit the ATLAS public results webpage <cit.>.
So far, no significant hint of physics beyond the SM has been observed, and many interesting models and regions of phase space remain unexplored.
However there are many interesting ongoing searches besides the ones covered in this article.
The LHC Run 3 datasets will allow us to revisit some of the interesting excesses observed in the Run 2
analyses and hopefully to discover new physics.
Funding information
This work is supported in part by the Polish Ministry of Education and Science project no. 2022/WK/08.
9
atlas
ATLAS Collaboration, JINST 3 (2008) S08003.
htt
ATLAS Collaboration, arXiv:2211.01136 [hep-ex], submitted to JHEP.
2hdm
G. C. Branco et al., Phys. Rept. 516 (2012) 1, arXiv: 1106.0034 [hep-ph].
hmultil
ATLAS Collaboration, ATLAS-CONF-2022-039, https://cds.cern.ch/record/2815674/files/ATLAS-CONF-2022-039.pdf.
froggatt
C. D. Froggatt and H. B. Nielsen, Nucl. Phys. B. 147 (1979) 277.
flavon
ATLAS Collaboration, arXiv:2301.03902 [hep-ex], submitted to JHEP.
lighthplus
ATLAS Collaboration, arXiv:2302.11739 [hep-ex], submitted to JHEP.
3hdm1
A. G. Akeroyd, S. Moretti, K. Yagyu and E. Yildirim, Int. J. Mod. Phys. A 32 (2017) 1750145, arXiv: 1605.05881 [hep-ph].
3hdm2
A. G. Akeroyd, S. Moretti and M. Song, Phys. Rev. D 98 (2018) 115024, arXiv: 1810.05403 [hep-ph].
lrsm
G. Senjanovic and R. N. Mohapatra, Phys. Rev. D 12 (1975) 1502.
zee
A. Zee, Phys. Lett. B 161 (1985) 141.
K. S. Babu, Phys. Lett. B 203 (1988) 132.
h++
ATLAS Collaboration, arXiv:2211.07505 [hep-ex], submitted to Eur. Phys. J. C.
darkpho
ATLAS Collaboration, arXiv:2212.09649 [hep-ex], submitted to JHEP.
atlaswebpage
https://twiki.cern.ch/twiki/bin/view/AtlasPublic
|
http://arxiv.org/abs/2307.04853v1 | 20230710185012 | Spatially-Resolved Recent Star Formation History in NGC 6946 | [
"Debby Tran",
"Benjamin Williams",
"Emily Levesque",
"Margaret Lazzarini",
"Julianne Dalcanton",
"Andrew Dolphin",
"Brad Koplitz",
"Adam Smercina",
"O. Grace Telford"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Debby Tran [ORCID 0000-0002-6440-1087]
Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195, USA
Benjamin Williams [ORCID 0000-0002-7502-0597]
Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195, USA
Emily Levesque [ORCID 0000-0003-2184-1581]
Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195, USA
Margaret Lazzarini [ORCID 0000-0002-0786-7307]
Division of Physics, Mathematics, and Astronomy, California Institute of Technology, 1200 E California Boulevard, Pasadena, CA 91125, USA
Julianne Dalcanton [ORCID 0000-0002-1264-2006]
Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
Andrew Dolphin [ORCID 0000-0001-8416-4093]
Raytheon Technologies, 1151 E. Hermans Road, Tucson, AZ 85756, USA
University of Arizona, Steward Observatory, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
Brad Koplitz [ORCID 0000-0001-5530-2872]
School of Earth & Space Exploration, Arizona State University, 781 Terrace Mall, Tempe, AZ 85287, USA
Adam Smercina [ORCID 0000-0003-2599-7524]
Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195, USA
O. Grace Telford [ORCID 0000-0003-4122-7749]
Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854, USA
The nearby face-on star-forming spiral galaxy NGC 6946 is known as the Fireworks Galaxy due to its hosting an unusually large number of supernovae. We analyze its resolved near-ultraviolet (NUV) stellar photometry measured from images taken with the Hubble Space Telescope's (HST) Wide Field Camera 3 (WFC3) in the F275W and F336W filters. We model the color-magnitude diagrams (CMDs) of the UV photometry to derive the spatially-resolved star formation history (SFH) of NGC 6946 over the last 25 Myr. From this analysis, we produce maps of the spatial distribution of young stellar populations and measure the total recent star formation rate (SFR) of nearly the entire young stellar disk. We find a global SFR(age≤25 Myr) = 13.17 (+0.91/-0.79) M_⊙/yr. Over this period, the SFR is initially very high (23.39 (+2.43/-2.11) M_⊙/yr between 16-25 Myr ago), then monotonically decreases to a recent SFR of 5.31 (+0.19/-0.17) M_⊙/yr in the last 10 Myr. This decrease in the global star formation rate over the last 25 Myr is consistent with measurements made with other SFR indicators. We discuss in detail two of the most active regions of the galaxy, which we find are responsible for 3% and 5% of the total star formation over the past 6.3 Myr.
§ INTRODUCTION
Star formation rate (SFR) is one of the defining characteristics in determining the current evolutionary state of a galaxy. The SFR strongly affects evolution through metal production <cit.>, gas consumption <cit.>, cold gas content <cit.>, and feedback in the galaxy <cit.>. Thus, SFR is a key property in tests of galaxy evolution models <cit.>. Because of its significance, many methods of measuring star formation rate with observational data have been developed, such as measuring UV emission from young (≲10 Myr) massive stars <cit.>, Hα emission from the youngest (≲5 Myr) massive stars <cit.>, and estimating SFR from the rate of core-collapse supernova (ccSN) <cit.>, which probes SFR at timescales at which stars supernova (30-100 Myr ago). These methods probe a range of timescales, making direct comparisons between SFR measured with different indicators challenging. Star formation histories (SFHs) avoid this problem by providing the SFR over time, allowing us to compare and calibrate SFR measurements obtained with different methods, while revealing how galaxies change over time.
Even more powerful are spatially-resolved star formation histories, which provide both temporal and spatial information. With these, we can trace local mechanisms that could be triggering star formation. For nearby galaxies with resolved stellar photometry, we can construct and fit observed color-magnitude diagrams (CMDs) to infer a star formation history for a specific region, assuming a specific initial mass function (IMF), stellar evolutionary model, binary fraction, and distribution of dust. By tiling together SFHs from multiple regions, we can construct a spatially-resolved star formation history for a galaxy. This kind of work has been done in the Small and Large Magellenic Clouds <cit.>, M31 <cit.>, M33 <cit.>, and M81 <cit.>.
In this paper, we apply this technique to NGC 6946, which has been widely studied due to its active star formation (<cit.> classify it as a circumnuclear starburst) and the high frequency of supernovae in the past century <cit.>. Among these studies, there have been inconsistent measurements of the galaxy's global star formation rate, ranging widely from 3-12 M_⊙/ yr, due to the diverse methods of measuring star formation rate and wide range of different distances used. Throughout this paper, we use a distance of 7.83 ± 0.29 Mpc <cit.> and an inclination of 32.8 <cit.>. <cit.> has explored the accuracy of these various diagnostics for a sample of regions in NGC 6946, finding discrepancies of up to factors of 5. To better constrain the recent star formation history across the entire galaxy, we have carried out a NUV HST survey to obtain photometry of the young massive stellar population of NGC 6946. This dataset provides the most detailed and complete probe to date of the global, localized, and episodic star formation in NGC 6946.
In Section <ref>, we present the HST observations, alignment of the data, photometry, artificial star tests for measuring photometric uncertainties, gridding schema, and method for measuring star formation rates. In Section <ref>, we present the recent star formation rates, the reliability of the SFRs at the youngest and oldest time bins, total stellar mass formed over the past 25 Myr, and foreground and differential extinction of each cell. In Section <ref>, we discuss two highly star-forming regions of interest, the decline in global star formation rate, and the correlation between stellar density and age. In Section <ref>, we summarize our methods and findings.
§ OBSERVATIONS AND DATA ANALYSIS
Observations for this program (GO-15877; PI <cit.>) were obtained between May 11 2020 and November 21 2021 using HST's WFC3 ultraviolet-visible (UVIS) channel in the F275W and F336W filters. Details of the observations are found in Table <ref>. NGC 6946 was imaged in a 4x4 grid excluding the northernmost and southernmost regions (Figure <ref>). This covers all of the UV-bright regions and the locations of observed core collapse supernovae. Neighboring pointings overlap at the edges to ensure there are no gaps in the catalog due to poorer photometric quality at the edges. Each pointing in both filters was dithered with small offsets to control for hot pixels and cosmic rays. Unfortunately, even with the careful selection of observing strategy, there are two small gaps of approximately 10x1 arcsec and 30x1 arcsec centered at 20:34:39.40 +60:06:80 and 20:34:25.00 +60:08:80, respectively. These gaps are due to adjusting the rotation of two pointings to obtain a sufficient number of guide stars. Upon comparison with existing optical data, there do not appear to be dense star clusters in these two gaps.
Table 1: Details of Fields Observed
Field Name R.A. (hh:mm:ss.sss) Dec (dd:mm:ss.s) Filter Exposure Time (s) Number of Exposures Date (YYYYMMDD) Roll Angle (PA_V3)
NGC6946-2 20:35:06.007 +60:12:48.93 F275W 1432 2 20201106 257.0
NGC6946-2 20:35:06.007 +60:12:48.93 F275W 1414 2 20201106 257.0
NGC6946-3 20:34:38.218 +60:12:59.17 F275W 1432 2 20201109 256.5
NGC6946-3 20:34:38.218 +60:12:59.17 F275W 1414 2 20201109 256.5
NGC6946-4 20:35:19.114 +60:10:49.51 F275W 1432 2 20201110 255.7
NGC6946-4 20:35:19.114 +60:10:49.51 F275W 1414 2 20201110 255.7
NGC6946-5 20:34:51.355 +60:10:59.92 F275W 1432 2 20201103 257.0
NGC6946-5 20:34:51.355 +60:10:59.92 F275W 1414 2 20201103 257.0
NGC6946-6 20:34:23.591 +60:11:09.97 F275W 1432 2 20201103 257.0
NGC6946-6 20:34:23.591 +60:11:09.97 F275W 1414 2 20201103 257.0
NGC6946-7 20:35:32.194 +60:08:50.01 F275W 1432 2 20201112 257.0
NGC6946-7 20:35:32.194 +60:08:50.01 F275W 1414 2 20201111 257.0
NGC6946-8 20:35:04.464 +60:09:00.59 F275W 1432 2 20201112 253.5
NGC6946-8 20:35:04.464 +60:09:00.59 F275W 1414 2 20201112 253.5
NGC6946-9 20:34:36.730 +60:09:10.81 F275W 1371 2 20210504 75.5
NGC6946-9 20:34:36.730 +60:09:10.81 F275W 1390 2 20210504 75.5
NGC6946-10 20:34:09.061 +60:09:21.20 F275W 1432 2 20200515 73.0
NGC6946-10 20:34:09.061 +60:09:21.20 F275W 1414 2 20200515 73.0
NGC6946-11 20:35:17.548 +60:07:01.19 F275W 1432 2 20201112 253.0
NGC6946-11 20:35:17.548 +60:07:01.19 F275W 1414 2 20201112 253.0
NGC6946-12 20:34:49.842 +60:07:11.57 F275W 1432 2 20201113 253.0
NGC6946-12 20:34:49.842 +60:07:11.57 F275W 1414 2 20201113 253.0
NGC6946-13 20:34:22.203 +60:07:22.13 F275W 1432 2 20200511 73.0
NGC6946-13 20:34:22.203 +60:07:22.13 F275W 1414 2 20200511 73.0
NGC6946-14 20:35:02.928 +60:05:12.26 F275W 1432 2 20201107 257.0
NGC6946-14 20:35:02.928 +60:05:12.26 F275W 1414 2 20201107 257.0
NGC6946-15 20:34:35.318 +60:05:22.98 F275W 1362 2 20211114 257.0
NGC6946-15 20:34:35.318 +60:05:22.98 F275W 1361 2 20211115 257.0
NGC6946-2 20:35:06.007 +60:12:48.93 F336W 880 3 20201106 257.0
NGC6946-3 20:34:38.218 +60:12:59.17 F336W 880 3 20201109 256.5
NGC6946-4 20:35:19.114 +60:10:49.51 F336W 880 3 20201110 255.7
NGC6946-5 20:34:51.355 +60:10:59.92 F336W 880 3 20201103 257.0
NGC6946-6 20:34:23.591 +60:11:09.97 F336W 880 3 20201103 257.0
NGC6946-7 20:35:32.194 +60:08:50.01 F336W 880 3 20201111 257.0
NGC6946-8 20:35:04.464 +60:09:00.59 F336W 880 3 20201112 253.5
NGC6946-9 20:34:36.730 +60:09:10.81 F336W 865 3 20210504 75.5
NGC6946-10 20:34:09.061 +60:09:21.20 F336W 880 3 20200515 73.0
NGC6946-11 20:35:17.548 +60:07:01.19 F336W 880 3 20201112 253.0
NGC6946-12 20:34:49.842 +60:07:11.57 F336W 880 3 20201113 253.0
NGC6946-13 20:34:22.203 +60:07:22.13 F336W 880 3 20200511 73.0
NGC6946-14 20:35:02.928 +60:05:12.26 F336W 880 3 20201106 257.0
NGC6946-15 20:34:35.318 +60:05:22.98 F336W 870 3 20211114 257.0
Config Mode - WFC3/UVIS Imaging
§.§ Source Detection and Photometry
HST WFC3 NUV photometry was measured using DOLPHOT <cit.>, a stellar photometry package using point spread function (PSF) fitting, described in detail in <cit.>. We generated separate catalogs for each of the 14 overlapping pointings using the same DOLPHOT parameters as in <cit.>. We then combined all of the measurements into a single catalog, described in Section <ref>. In this catalog, we identified sources with reliable, high-quality photometry using the metrics sharpness^2 < 0.2; crowding < 0.7; signal-to-noise ratio (SNR) > 4 in both F275W and F336W; and F275W-F336W color > -1.3, since anything blueward of this color is unphysical for young massive stars (see Figure <ref> for comparison with Padova isochrones). For the analysis in this paper, we used ∼ 81,000 sources that passed the aforementioned quality cuts. The brightest single stars in the Padova log(age)=6.6 isochrone <cit.>, the youngest age we could fit, have an F336W magnitude of 20, so sources with brighter magnitudes are likely blends. These likely blends, which are noted in the catalog, were included in our analysis because we are interested in the high-crowding regions.
§.§ Astrometry and Foreground Stars
Using the high quality photometry, we cross-match our stellar catalog to Gaia Data Release 2 (Gaia DR2; <cit.>; <cit.>). We shifted each frame by the median of the residuals of the sources matched between our catalog and Gaia DR2. The residuals have mean magnitudes on milliarcsecond scales in both right ascension and declination, which is roughly a hundred times smaller than a WFC3 UVIS pixel. The values of the overlapping sources were then averaged.
After finding Gaia matches, we removed likely foreground stars from our analysis. First, we utilized the matched Gaia sources to remove anything with a measured proper motion, as it is a likely foreground star. Then, we applied the following F275W-F336W color cuts to remove brighter, redder sources that are likely foreground stars: F336W < 21, color > 0.7; F336W < 22, color > 1; F336W < 22.5, color > 2; F336W < 23, color > 2.5. The photometric catalog used in this paper can be found as a High Level Science Product in MAST (the Mikulski Archive for Space Telescopes) via doi:10.17909/gveq-8820 (http://dx.doi.org/10.17909/gveq-8820).
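The bright/red foreground rejection can be expressed as a simple piecewise cut. This is a sketch with assumed NumPy array inputs, not the catalog pipeline itself.

```python
import numpy as np

def likely_foreground(f336w, color):
    """Flag bright, red sources as probable Milky Way foreground stars,
    following the F336W / (F275W-F336W) cuts listed above."""
    f336w, color = np.asarray(f336w), np.asarray(color)
    return (((f336w < 21.0) & (color > 0.7)) |
            ((f336w < 22.0) & (color > 1.0)) |
            ((f336w < 22.5) & (color > 2.0)) |
            ((f336w < 23.0) & (color > 2.5)))
```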
§.§ Spatial Mapping
To recover the spatially-resolved SFH, we first divide our full photometric catalog into a custom grid pattern (Figure <ref>), allowing us to recover the SFH in each grid cell independently. We choose a grid pattern such that the size of the cell is based on the stellar density of the cell, which helps equalize the number of stars per cell. This gridding schema ensures the denser regions are divided into finer spatial bins, or cells, taking advantage of the large number of stars available for age constraints. Conversely, the less dense regions are divided into coarser cells to ensure each cell has enough stars to measure reliable ages (see Section <ref> for details).
To generate the cell vertices, we implemented a quadtree algorithm, which operates as follows. First, it counts the number of stars in the specified region. If the number of stars in the region is higher than a certain threshold (100 stars in this study), then it subdivides the region into four equal parts. This iterates until it hits a minimum cell size, which is roughly 3x3 arcsec (∼ 100 pc x 100 pc), chosen because that is the approximate size of clusters in NGC 6946.
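The gridding can be sketched as a standard recursive quadtree; the thresholds follow the text (100 stars, roughly 3 arcsec minimum cell size), while the function itself is an illustrative assumption that ignores details such as the exact stopping tolerance and the bookkeeping used for the real grid.

```python
def quadtree_cells(stars, x0, y0, x1, y1, max_stars=100, min_size=3.0):
    """Recursively split a rectangular region into four equal quadrants
    until each cell holds <= max_stars stars or reaches the minimum size
    (here in arcsec), mirroring the gridding scheme described above.

    stars : iterable of (x, y) positions in the same units as the bounds.
    Returns a list of (x0, y0, x1, y1) cell bounds.
    """
    inside = [(x, y) for x, y in stars if x0 <= x < x1 and y0 <= y < y1]
    too_small = (x1 - x0) <= min_size or (y1 - y0) <= min_size
    if len(inside) <= max_stars or too_small:
        return [(x0, y0, x1, y1)]
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    cells = []
    for qx0, qx1 in ((x0, xm), (xm, x1)):
        for qy0, qy1 in ((y0, ym), (ym, y1)):
            cells += quadtree_cells(inside, qx0, qy0, qx1, qy1,
                                    max_stars, min_size)
    return cells
```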
§.§ Artificial Star Tests and Completeness
We use artificial star tests (ASTs) to measure the effects of noise, crowding, and bias on the photometry. We injected artificial stars into regions of the galaxy of different stellar densities to assess how well artificial stars of different colors and magnitudes are recovered as a function of stellar density. The input and recovered colors and magnitudes are included as a parameter in the derivation of the SFHs (Section <ref>) to account for these biases.
Because the impacts of crowding are largely density-dependent, we must ensure that our ASTs are fully sampling the wide range in the environment. For each cell, we calculate a stellar density by taking the number of stars that pass the quality cut described in Section <ref> above 25 mag in the F336W filter in the cell and dividing it by size of cell in arcsec^2. We attempted several ways of binning the cells by density, but ultimately separated them into a low density regime (cells with densities less than 11.5 stars/arcsec^2) and a high density regime (cells with densities greater than 11.5 stars/arcsec^2). We illustrate the differences in the depth of the observed data in the low and high density regimes in Figure <ref>.
We further bin these cells by density to generate at least 20,000 artificial stars per density bin, ensuring that we have a sufficiently fine grid of artificial stars of different colors and magnitudes. These artificial stars were then run through DOLPHOT and flagged as recovered or unrecovered. Artificial stars are defined as recovered if they pass the same quality cuts we apply to our dataset, described in Section <ref>. For each density bin, we divided the recovered and unrecovered stars into bins of width 0.2 mag. We then convolved the ratio of recovered to unrecovered stars with a boxcar function to smooth this ratio and interpolated the magnitudes at which 50% of the stars are recovered, i.e., the 50% completeness limits. The F275W and F336W 50% completeness limits define the magnitude ranges that we fit with models to obtain star formation history measurements, as described in detail in Section <ref>.
We took the mean of the 50% completeness limits in the low density regime to smooth over some of the stochasticity (± 0.1 mag variation in both filters) and determined a mean 50% completeness of 26.02 and 25.88 in the F275W and F336W filters, respectively (Figure <ref>). With so few cells in the high density regime, it was computationally feasible to run these artificial star tests for the individual high density cells to determine each cell's individual completeness limit.
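The 50% completeness estimate can be sketched as below, using the recovered fraction per 0.2 mag bin, boxcar smoothing, and linear interpolation; the smoothing width, the use of the recovered fraction rather than the recovered-to-unrecovered ratio, and the edge handling are assumptions, not the paper's exact choices.

```python
import numpy as np

def completeness_limit(mag_in, recovered, bin_width=0.2, box=5, level=0.5):
    """Estimate the magnitude at which `level` (e.g. 50%) of the injected
    artificial stars are recovered (a sketch of the procedure above).

    mag_in    : input magnitudes of the injected artificial stars
    recovered : boolean array, True where the star passed the quality cuts
    """
    mag_in = np.asarray(mag_in)
    recovered = np.asarray(recovered, dtype=bool)
    edges = np.arange(mag_in.min(), mag_in.max() + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    n_all, _ = np.histogram(mag_in, bins=edges)
    n_rec, _ = np.histogram(mag_in[recovered], bins=edges)
    frac = np.where(n_all > 0, n_rec / np.maximum(n_all, 1), 0.0)
    frac_smooth = np.convolve(frac, np.ones(box) / box, mode="same")  # boxcar
    above = np.where(frac_smooth >= level)[0]
    if above.size == 0:
        return centers[0]
    i = above[-1]                       # faintest bin still above the level
    if i == len(centers) - 1:
        return centers[-1]
    m0, m1 = centers[i], centers[i + 1]
    f0, f1 = frac_smooth[i], frac_smooth[i + 1]
    return m0 + (level - f0) * (m1 - m0) / (f1 - f0)  # linear interpolation
```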
§.§ Derivation of the SFHs
We used the CMD-fitting code, MATCH <cit.>, to derive the star formation history of each cell. For each cell, MATCH creates Hess-diagrams or binned CMDs of stars in the cell. MATCH then takes user-defined ranges in age, metallicity, distance, extinction, IMF, and binary fraction to create individual synthetic CMDs for each possible combination of parameters. The individual CMDs generated from given parameters are linearly combined to form composite CMDs, which are compared to the observed CMDs. The best-fit composite synthetic CMDs are then used to infer what ages and metallicities make up the observed cell and its resulting star formation history.
We choose a Kroupa IMF <cit.>, a binary fraction of 0.35, and the Padova stellar evolutionary models <cit.>. We use the distance of 7.83 ± 0.29 Mpc to be consistent with <cit.>, who used the same CMD-fitting technique. Because of the short timescale, we fix the metallicities to be between log(Z) = -0.5 and 0.1 and fix the most recent time bins to have near-solar metallicities. The youngest age we could fit was log(age)=6.6. As seen in Figure <ref>, the NUV data barely graze the log(age)=7.5 isochrone for the low density bin and the log(age)=7.4 isochrone for the high density bin. However, for completeness, we fit up to ages of log(age)=7.5. For a more detailed discussion on the reliability of the age range fit, see Sections <ref> and <ref>. A summary of these parameters is provided in Table <ref>.
Table 2: MATCH Fitting Parameters
Parameter Value
IMF Model Kroupa
Evolutionary Models Padua2006
Distance 7.83 ± 0.29 Mpc
Distance Modulus 29.47 ± 0.079
A_V 0.8-2.2, steps of 0.1
log(Z) -0.5 - 0.1, steps of 0.1
Binary fraction 0.35
F336W step size 0.1
F275W-F336W step size 0.05
CMD smoothing param 3
F275W-F336W -1.3 - 3.3
Ages (log(yr)) 6.6-7.4 for ρ<11.5
6.6-7.5 for ρ≥ 11.5
Age step size 0.1
ρ = stellar density in stars/arcsec^2
First, we determine the best fit values of foreground extinction (A_V) and differential, or circumstellar, extinction (dA_V) of each cell by running SFH calculations over a coarse grid of A_V and dA_V with A_V between 0.8 to 2.2 with the parameters in Table <ref>. Finding the highest likelihood value of A_V and dA_V, we then redo the same SFH calculations over a finer grid of values in 0.05 increments to find the best fit A_V and dA_V, described in detail in <cit.>. Second, we adopt the best fit A_V and dA_V and rerun the SFH calculations to determine the best fit star formation rate and metallicity per time bin in each cell. Third, we measure the uncertainties of the star formation rates and metallicities by running a hybrid Monte Carlo algorithm, described in detail in Section <ref>. We then combine the SFH of each cell to create a spatially-resolved map of NGC 6946's star formation history, presented in Section <ref>.
§.§.§ Uncertainties
There are a few systematic uncertainties associated with this analysis. First, the choice of binary fraction could have a systematic impact on our results. For consistency with other work deriving the recent SFH of galaxies <cit.>, we adopt a binary fraction of 0.35, knowing that massive stars have a binary fraction greater than 0.7 <cit.>. <cit.> showed that uncertainties introduced by choice of binary fraction are small compared to the uncertainties due to dust. Second, the choice of stellar evolutionary model has a systematic impact on our results. <cit.> showed that the SFH measured using the Padova versus MIST models differed at ages less than 20 Myr. However, results from individual cells fit with both models agreed within less than 1%. Third, the choice of IMF could impact our results. For consistency, we used the Kroupa IMF, which has been widely used for measuring star formation rate in NGC 6946.
To characterize the random uncertainties, we used a hybrid Monte Carlo (MC) algorithm <cit.> implemented within MATCH. These uncertainties scale with the number of stars in each cell, where more stars in a cell result in lower random uncertainties. From each cell's CMD, we generate 10,000 possible SFHs. We then calculate the 1-sigma error from the 68th-percentile range of the samples for the cell.
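Extracting asymmetric 1-sigma errors from the MC samples amounts to taking a 68% interval about the best fit; a minimal sketch, assuming the common central-interval (16th/84th percentile) convention, which may differ in detail from the paper's exact definition:

```python
import numpy as np

def sfr_uncertainty(samples, best_fit):
    """Asymmetric (lower, upper) errors spanning the central 68% of the
    MC SFR samples about the best-fit value."""
    lo, hi = np.percentile(samples, [16.0, 84.0])
    return best_fit - lo, hi - best_fit
```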
§ RESULTS
In Table <ref>, we present the best fit star formation rates per time bin, the number of stars in the cell (N), the area in arcsec^2, the mean age of the population, A_V, and dA_V, along with the cell indices and vertices. The mean age of the population in each cell is included only for convenient reference. Some cells can have a bimodal or trimodal age distribution, so please use this mean age with caution. Some numbers in this table have been rounded to save space, but the full machine readable table for all 2658 cells contains the measurements with full precision.
§.§ Star Formation Rate and Mass Maps
In Figure <ref>, we present maps of the spatially-resolved star formation rate for NGC 6946 in linear time bins, and include a color image from the unWISE catalog <cit.> in the W1 and W2 filters to illustrate that the star formation at the youngest ages is mostly recovered despite the dust and that we are not missing much embedded star formation. For every time bin, labeled in the upper left corner of each panel, we create maps with the best fit star formation rate setting the intensity value of each pixel. These rates are then converted to star formation rate intensity by dividing by the area of the cell, corrected for the inclination of 32.8 degrees <cit.>. In Figure <ref>, we present the spatially-resolved star formation history for NGC 6946 with log time bins for higher time resolution at younger ages.
Table 3. Sample of SFRs over Time

Cell geometry (cell vertices in degrees; N = number of stars; Area in arcsec^2):

i    RA-NE      Dec-NE     RA-NW      Dec-NW     RA-SE      Dec-SE     RA-SW      Dec-SW     N   Area
8    308.51220  60.150921  308.48380  60.150921  308.48380  60.136787  308.51220  60.136787  17  2588.9
9    308.51220  60.165054  308.48380  60.165054  308.48380  60.150921  308.51220  60.150921   6  2588.9
12   308.59740  60.263989  308.48380  60.263989  308.48380  60.207455  308.59740  60.207455  18  41422.6
14   308.54060  60.136787  308.51220  60.136787  308.51220  60.122654  308.54060  60.122654   7  2588.9
15   308.54060  60.150921  308.51220  60.150921  308.51220  60.136787  308.54060  60.136787  60  2588.9
16   308.54060  60.165054  308.51220  60.165054  308.51220  60.150921  308.54060  60.150921  54  2588.9
17   308.54060  60.179188  308.51220  60.179188  308.51220  60.165054  308.54060  60.165054  40  2588.9
18   308.56900  60.108520  308.54060  60.108520  308.54060  60.094387  308.56900  60.094387  41  2588.9
19   308.55480  60.115587  308.54060  60.115587  308.54060  60.108520  308.55480  60.108520   3  647.2
20   308.55480  60.122654  308.54060  60.122654  308.54060  60.115587  308.55480  60.115587   6  647.2

Star formation rates per log(age/yr) bin (in 1e-3 M_⊙/yr, quoted as best fit with +upper/-lower uncertainties), best-fit extinctions, and mean age (Myr):

i    SFR 0-6.7           SFR 6.7-6.8         SFR 6.8-6.9         SFR 6.9-7.0         SFR 7.0-7.1         SFR 7.1-7.2         SFR 7.2-7.3         SFR 7.3-7.4         A_V   dA_V  Age
8    0.00 (+0.14/-0.00)  0.00 (+0.57/-0.00)  0.00 (+0.64/-0.00)  0.06 (+0.62/-0.06)  1.19 (+0.12/-1.04)  0.00 (+0.64/-0.00)  0.00 (+0.58/-0.00)  0.00 (+0.90/-0.00)  0.80  0.00  11.1
9    0.00 (+0.10/-0.00)  0.00 (+0.41/-0.00)  0.00 (+0.36/-0.00)  0.00 (+0.38/-0.00)  0.00 (+0.43/-0.00)  0.00 (+0.42/-0.00)  1.04 (+0.26/-0.83)  0.00 (+0.86/-0.00)  0.95  0.05  17.8
12   0.00 (+0.22/-0.00)  0.00 (+1.11/-0.00)  0.00 (+1.22/-0.00)  2.01 (+0.00/-1.98)  0.00 (+1.22/-0.00)  0.00 (+1.20/-0.00)  0.00 (+2.76/-0.00)  5.80 (+0.21/-5.80)  0.95  0.30  18.9
14   0.23 (+0.10/-0.20)  0.00 (+0.78/-0.00)  0.00 (+0.64/-0.00)  0.00 (+0.78/-0.00)  0.00 (+0.82/-0.00)  0.00 (+0.09/-0.00)  0.00 (+1.88/-0.00)  0.00 (+4.20/-0.00)  1.35  0.00  4.4
15   4.29 (+0.81/-1.76)  0.00 (+7.40/-0.00)  0.00 (+5.96/-0.00)  10.0 (+0.07/-9.62)  0.00 (+8.68/-0.00)  0.00 (+13.9/-0.00)  17.5 (+3.98/-17.5)  0.00 (+31.6/-0.00)  1.20  1.05  13.2
16   1.95 (+0.27/-1.05)  0.00 (+3.11/-0.00)  0.00 (+3.32/-0.00)  2.42 (+1.80/-2.42)  0.00 (+2.81/-0.00)  3.37 (+2.82/-2.83)  0.00 (+5.23/-0.00)  11.9 (+10.1/-8.37)  1.40  0.00  17.5
17   1.15 (+0.69/-0.67)  0.00 (+4.72/-0.00)  7.30 (+2.64/-4.74)  0.00 (+5.01/-0.00)  0.00 (+6.28/-0.00)  13.4 (+0.00/-11.7)  0.00 (+15.6/-0.00)  0.00 (+46.4/-0.00)  1.50  0.05  11.3
18   0.00 (+0.91/-0.00)  0.00 (+14.1/-0.00)  57.3 (+0.21/-27.7)  0.00 (+16.6/-0.00)  0.00 (+13.3/-0.00)  0.00 (+11.0/-0.00)  0.00 (+16.7/-0.00)  0.00 (+37.0/-0.00)  1.50  1.50  7.1
19   0.12 (+0.11/-0.12)  0.00 (+0.76/-0.00)  0.00 (+0.67/-0.00)  0.00 (+0.62/-0.00)  0.00 (+0.53/-0.00)  0.00 (+0.82/-0.00)  1.24 (+2.02/-0.92)  0.00 (+2.81/-0.00)  1.40  0.00  16.6
20   0.00 (+0.24/-0.00)  0.00 (+1.21/-0.00)  2.25 (+0.52/-1.83)  0.00 (+1.29/-0.00)  0.00 (+1.17/-0.00)  0.00 (+1.23/-0.00)  0.00 (+1.96/-0.00)  0.00 (+4.79/-0.00)  1.45  0.00  7.1

Note: Some of the values have been rounded to save space. The machine readable table provided will have the full precision. Area listed is not corrected for inclination. i = cell index; N = number of stars.
We integrate the time bins to calculate the total mass formed over the last 25 Myr. In the left panel of Figure <ref>, we show the resulting mass surface density map. A majority of the recently formed mass is in the spiral arms, though there is a significant population of young stars outside of the spiral arms. The spatial distribution of the mass formed traces the resolved UV photometry of the galaxy fairly well. Our data and mass map look far more extended, particularly in the northwest and southeast arms, than the GALEX color image (right panel of Figure <ref>) due to the increased sensitivity of our data (Figure <ref>). Our methods seem to be more sensitive to older star formation than that of GALEX (which would probe <10Myr), as these features appear most prominently in the 16-20 Myr time bins of Figures <ref> and <ref>.
§.§ Extinction Maps
We recover a fairly uniform foreground extinction with a mean of 1.4 and a standard deviation of 0.3 (Figure <ref>, left panel). This is slightly higher than the extinction of A_V=0.938 from <cit.>. We present the differential extinction map in the right panel of Figure <ref>. The areas of high differential extinction are in the spiral arms and appear to be very clumpy. Approximately 17% of the cells have high measured differential extinction (dA_V>1), which could mean that we cannot detect some of the older stars in those grids. For more detail, see Section <ref>. The measured differential extinction shows no correlation with the ages measured in Section <ref>.
§.§ Global SFH
To derive a global star formation history for the galaxy, at each time bin, we integrate the SFR over all cells. We calculate the uncertainties due to the number of stars in the cells by adding the uncertainties of each spatial bin in quadrature. Then we bootstrap the uncertainties across spatial bins by sampling the number of cells 10,000 times with replacement to account for uncertainties due to binning the stars into cells. We then add the uncertainties obtained via bootstrapping to the random uncertainties in quadrature. We present these global star formation rates in Table <ref> and plot them in Figure <ref>.
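A minimal numerical sketch of this error budget for a single time bin is given below; the per-cell SFRs and their random errors are synthetic placeholders rather than the actual measurements.

import numpy as np

rng = np.random.default_rng(1)
n_cells = 2658
cell_sfr = rng.exponential(5e-3, n_cells)   # synthetic per-cell SFRs [M_sun/yr]
cell_err = 0.3 * cell_sfr                   # synthetic per-cell random uncertainties

global_sfr = cell_sfr.sum()
random_err = np.sqrt(np.sum(cell_err**2))   # random errors added in quadrature

# Bootstrap over the spatial cells (sampling cells with replacement 10,000 times).
draws = rng.choice(cell_sfr, size=(10000, n_cells), replace=True).sum(axis=1)
boot_err = draws.std()

total_err = np.sqrt(random_err**2 + boot_err**2)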
To obtain the star formation rates in the past 10 Myr, we integrated the star formation rates over time to obtain the total mass formed in the past 10 Myr, then divided that total mass by 10 Myr. We calculated the uncertainties by adding the uncertainties of each time bin in quadrature. We find the global star formation rate over the past 10 Myr to be roughly constant at 5.31^{+0.88}_{-0.78} M_⊙/yr, shown in Figure <ref>. We did the same for the global star formation rate 16-25 Myr ago, obtaining an SFR of 23.38^{+4.65}_{-4.25} M_⊙/yr. The SFR 16-25 Myr ago was roughly five times larger than the current (≤ 10 Myr) star formation rate, with a monotonically decreasing SFR in the 6 Myr in between the two epochs.
§.§ Reliability of the Younger Time Bins
As with SFR measurement techniques that rely on UV data, dust is a big challenge to measuring young star formation. Stars are born from giant molecular clouds and are obscured by dust until a massive star forms, ionizes its birth cloud, and clears out the material <cit.>. This makes young stars incredibly challenging to observe. Our technique requires that the young stars are observable to measure star formation. The uncertainties due to dust for SFRs older than 8 Myr decrease significantly. Young stellar clusters emerge from the giant molecular clouds on timescales of 8 Myr, with very few stars remaining embedded after that <cit.>. Additionally, upon visual inspection of WISE and GALEX images (Figures <ref> and <ref>, respectively), the star formation in the 0-5 Myr time bin seems well recovered. This suggests that it is unlikely that a significant fraction of the star formation is being missed, though additional measurements of the SFR using infrared data are necessary to better constrain the impacts of dust.
We check the impact of the blends on the reliability of the SFR in the young time bins. Our choice to include the 16 sources flagged as blends in 14 cells had been made to include as many young stars as possible in our MATCH fits. We ran MATCH again on two of these cells removing the blends from the observed CMD. We find that there is no impact on the measured SFH of these two cells. However, without the blends, the uncertainty of the SFR in the youngest time bin decreased by two orders of magnitude. The inclusion of the blends in our CMD-fitting gives us more conservative uncertainties on the SFRs of the younger time bins.
Additionally, we compare the SFR measurements to those from the literature. The current SFR is consistent with SFRs measured via methods probing the youngest stars. Previous measurements of the SFR within the last 5 Myr from Hα measurements find an SFR of ∼ 4 M_⊙/yr (no reported uncertainties, <cit.>) and 5.7 ± 1.7 M_⊙/yr <cit.>, which both agree with our measurement of 4.93^{+0.22}_{-0.23} M_⊙/yr (Table 4, row 1) within uncertainties. <cit.> measured a ≲ 5 Myr SFR of ≃ 7.1 M_⊙/yr with Hα and 24 μm observations, with no uncertainties reported. This higher star formation rate is more consistent with our SFR in the 5-6.3 Myr time bin of 7.21^{+0.58}_{-0.52} M_⊙/yr (Table 4, row 2). Measurements of the SFR obtained with far-ultraviolet (FUV) data tend to probe timescales over the past 10 Myr. <cit.> measures a FUV SFR of 9.1 ± 2.7 M_⊙/yr, which is more consistent with our measurements of the SFRs 10-16 Myr ago, 7.33^{+0.70}_{-0.66} and 12.81^{+1.04}_{-0.95} M_⊙/yr (Table 4, rows 5-6).
§.§ Reliability of Older Time Bins
Another challenge of utilizing UV data to measure star formation rates is that UV primarily probes young star formation, as seen in Figure <ref> where the log(age)=7.4 isochrone lies very close to the completeness limit of our data. This limitation is reflected in the high uncertainties (at least an order of magnitude higher than in the rest of the time bins) in the star formation rate in the oldest time bin in Table <ref>. Thus we exclude the log(age)=7.4-7.5 time bin from our analysis in the paper. In addition, we perform several tests to check the reliability of the SFR in the two oldest time bins (log(age)=7.2-7.3 and log(age)=7.3-7.4) by simulating model CMDs from the resulting SFRs.
We perform tests to check the impact of our chosen completeness limits on our measured star formation rates in the two oldest time bins. First, we check that the model CMDs created accurately model the observed CMDs for a selection of cells at varying stellar densities. We create these model CMDs by using the same parameters (i.e. completeness limits, binary fraction) we used to fit the star formation histories, as well as the output best fit metallicities and star formation rates from our results. We then check that the model has enough stars in the oldest two time bins log(age)=7.2-7.3 and log(age)=7.3-7.4 to allow for good measurements.
We also test for density-dependent effects which might arise due to the brighter completeness limit in the highest density regimes. We want to ensure that the measured SFRs of the oldest two time bins are consistent with the number of stars in their observed CMDs and reflect an accurate measurement. We check this by performing two tests. First, to measure the minimum expected percentage of older stars, we simulate constant star formation histories at different constant SFRs to measure the percentage of older (log(age)=7.2-7.4) stars out of the total (log(age)=6.6-7.4) at varying stellar densities. To check the impact of the input SFR on the percentage of older stars, we choose an SFR of 13.17 M_⊙/yr (the average global SFR), 0.1 M_⊙/yr (the highest measured SFR of a cell), and 0.005 M_⊙/yr (an average SFR value in a cell). We present the resulting systematics from these tests below (Table <ref>). There is some stochasticity in the percentage of modeled older stars for a constant SFH at SFR=0.005 M_⊙/yr, likely due to the small numbers of stars in the model. Second, we measure the percentage of observed stars that fell in the two oldest time bins for varying stellar densities in a selection of spatial cells that have a spike in SFR in the older time bins. For the highest stellar densities (≥ 12 stars/arcsec^2), the log(age)=7.2-7.3 and 7.3-7.4 time bins contain 20 ± 10% of the total number of stars, whereas the lower stellar density bins have 26 ± 10% of the total number of stars. Fortunately, there are only 5 cells that have these high densities, which is not significant enough to impact the high measured global SFR in the older time bins. The lower density bins have a sufficient percentage of stars above the minimum expected percentage to have accurate SFRs measured in the oldest time bins, appearing to be less impacted by the completeness limit concerns than the 5 cells in the high density regime.
Finally, we compare the SFR measured in the oldest time bins to that obtained via supernova rates, which probe older star formation, from 30 Myr (assuming single star evolution) to 100 Myr (assuming binary star evolution; <cit.>) and require an SFR of at least 12.1 ± 3.7 M_⊙/yr <cit.>. We convert the measured SFR, ψ(t), of the two oldest time bins to an estimated core-collapse supernova rate, R(t)_CC. We utilize the formalism from <cit.>, using the same assumptions (all stars in the suitable mass range m_l^CC-m_u^CC explode as supernovae, and the number fraction of stars that explode in that time range and the number of stars per unit mass of the stellar generation do not vary with time, since the time range we are looking at is very short) and the resulting equations (Equations <ref> and <ref>) from <cit.>.
R(t)_CC = K_CC × ψ(t) ,
where
K_CC = ∫_{m_l^CC}^{m_u^CC} ϕ(m) dm / ∫_{m_l}^{m_u} m ϕ(m) dm .
For consistency, we choose the IMF, ϕ(m) ∝ m^-α_i, to be the Kroupa IMF (α_0 = +0.3, 0.01 ≤ m/M_⊙ < 0.08; α_1 = +1.3, 0.08 ≤ m/M_⊙ < 0.50; α_2 = +2.3, 0.50 ≤ m/M_⊙), <cit.>). We utilize the minimum progenitor masses to supernova for log(age)=7.2-7.3 to be 11.73 M_⊙ and log(age)=7.3-7.4 for 10.28 M_⊙ from the Padova stellar evolutionary models <cit.> used in the calculation of the SFH.
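To make the conversion concrete, the sketch below evaluates K_CC for the Kroupa IMF by direct numerical integration. The full IMF mass range (0.01-120 M_⊙) and the upper core-collapse progenitor mass (120 M_⊙) are assumptions made for this illustration; only the 11.73 M_⊙ lower limit for the log(age)=7.2-7.3 bin is taken from the text.

from scipy.integrate import quad

def kroupa(m):
    # Unnormalized Kroupa IMF, made continuous across the 0.08 and 0.5 M_sun breaks.
    c1 = 0.08 ** (-0.3 + 1.3)
    c2 = c1 * 0.5 ** (-1.3 + 2.3)
    if m < 0.08:
        return m ** -0.3
    if m < 0.5:
        return c1 * m ** -1.3
    return c2 * m ** -2.3

m_l, m_u = 0.01, 120.0            # assumed full IMF mass range [M_sun]
m_cc_lo, m_cc_hi = 11.73, 120.0   # assumed progenitor range for the 16-20 Myr bin

num, _ = quad(kroupa, m_cc_lo, m_cc_hi)
den, _ = quad(lambda m: m * kroupa(m), m_l, m_u, points=[0.08, 0.5])
k_cc = num / den                  # ~6e-3 SN per M_sun formed with these assumptions
print(k_cc * 22.84)               # ~0.14 SN/yr for the 16-20 Myr SFR, close to the rate derived below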
For log(age)=7.2-7.3, we derive a core-collapse supernova rate of 0.15^{+0.02}_{-0.02} SN/yr. For log(age)=7.3-7.4, we derive a ccSN rate of 0.19^{+0.03}_{-0.02} SN/yr, with uncertainties only propagated from the SFR in the two time bins. This does not account for the systematic uncertainties due to the IMF and the estimated progenitor mass. We estimate the uncertainty due to the choice of IMF by comparing the ccSN rates obtained with the high-mass IMF from <cit.>. The ccSN rate obtained for log(age)=7.2-7.3 is 0.09^{+0.01}_{-0.01} SN/yr and 0.12^{+0.02}_{-0.02} SN/yr for log(age)=7.3-7.4, which is more aligned with the observed supernova rate of 0.1 SN/yr <cit.>. A 6% change in the slope of the IMF changes the ccSN rate by 60%, dominating the uncertainty of this calculation. We estimate the uncertainty due to progenitor mass by selecting progenitor masses from two additional stellar evolutionary models, PARSEC <cit.> and Geneva <cit.>, and taking the standard deviation of the ccSN rates obtained with the three different models. The uncertainty due to the choice of progenitor mass is 0.7% and 4% for log(age)=7.2-7.3 and log(age)=7.3-7.4, respectively.
§ DISCUSSION
In Section <ref>, we analyze two regions with high recent star formation and present their star formation histories. Finally, in Section <ref>, we examine the relationship between stellar density and age.
Table 5. Percentage of Modeled Older Stars for Simulated Constant SFHs at SFRs of 13.17 M_⊙/yr (the average global SFR), 0.1 M_⊙/yr (the highest measured SFR of a cell), and 0.005 M_⊙/yr (an average SFR value in a cell)

Density Bin          SFR = 13.17 M_⊙/yr   SFR = 0.1 M_⊙/yr   SFR = 0.005 M_⊙/yr
[stars/arcsec^2]
0-2                  18%                   18%                25%
2-4                  18%                   19%                15%
4-6                  18%                   18%                14%
6-8                  17%                   18%                15%
8-10                 17%                   17%                20%
10-11                17%                   16%                10%
11-12                18%                   17%                22%
12-13                15%                   11%                3%
13-18                14%                   11%                19%
Table 4. Global SFR over Time

Time Bin [Myr]    Time Bin [log(yr)]    SFR [M_⊙/yr]
4-5               6.6-6.7               4.93^{+0.22}_{-0.23}
5-6.3             6.7-6.8               7.21^{+0.58}_{-0.52}
6.3-8             6.8-6.9               4.37^{+0.42}_{-0.36}
8-10              6.9-7.0               5.80^{+0.45}_{-0.40}
10-12.5           7.0-7.1               7.33^{+0.70}_{-0.66}
12.5-16           7.1-7.2               12.81^{+1.04}_{-0.95}
16-20             7.2-7.3               22.84^{+2.67}_{-3.18}
20-25             7.3-7.4               23.82^{+3.81}_{-2.81}
§.§ Local Star Formation History
We analyze two regions with high star-forming activity, the Hodge Complex <cit.> roughly centered at 20:34:34.80 +60:08:18.60 and the HII region at the tip of the northeast spiral arm roughly centered at 20:35:22.57 +60:10:14.70. The cells of the regions of interest used in this analysis are flagged in the machine readable table.
To obtain the star formation histories of these regions, we sum the star formation rates of each cell in the region per time bin. We then add the random uncertainties in quadrature. The locations of the regions and their star formation histories are presented in Figure <ref>.
The Hodge Complex is a super star cluster containing multiple young star clusters and has been extensively studied due to its high concentration of star formation <cit.>. We present the star formation history of this region in the lower left corner of Figure <ref>. This region appears to have constant star formation over the past 6.3-25 Myr, with a peak in star formation around 5-6.3 Myr ago and a drop in star formation in the most recent 5 Myr. Interestingly, despite being a mere 1295 square arcseconds, which is 0.05% of the total size of our coverage area, this tiny region contains 3% of the total mass formed in the past 6.3 Myr and 1.8% of the total mass formed in the past 25 Myr. This region has had a more recent star formation episode than seen for the globally decreasing star formation rate.
Another region of interest is the HII region at the tip of the northeast spiral arm. Unlike the Hodge Complex, the star formation history of this region roughly follows the star formation history of the galaxy. Similar to the Hodge Complex, this small region of the galaxy contains a significant portion of the recent star formation in the galaxy. Despite it being 0.05% of our total coverage (roughly 1334 square arcseconds), this region contains 5.6% of the total mass of NGC 6946 formed in the last 6.3 Myr and 3.9% over the last 25 Myr. There is a large peak in older star formation 20-25 Myr ago relative to the flatter SFH in the past 20 Myr. Since the global SFH has a more gradual decrease in SFR over time, we check that this peak in SFR is not due to a systematic related to completeness. We check the SFRs of the locations with the highest stellar density and find that only 1.5% of the SFR in the oldest time bin is attributed to those high-density regions, making an insignificant contribution to the high SFR.
Ultimately, these regions of interest account for only a small portion of the young star formation in NGC 6946. This points to star formation across the whole galaxy contributing to the peak in the global SFH.
§.§ Density versus Age
Initially, we measured the characteristic age of the population in each cell by randomly sampling their star formation histories 50,000 times. We then obtained the 16th, 50th, and 84th percentile time bins. Due to the double-peaked distribution of many of the star formation histories, we found no correlation between stellar density and characteristic age for this population.
Thus, instead of looking at the characteristic age of the population, we find the youngest age bin in which we detect star formation in each cell, using the following criteria: the SFR of the age bin is at least 3% of the total star formation rate, the age bin contains at least 1% of the total mass formed, and the lower bound SFR uncertainty of that time bin must be greater than or equal to zero (a minimal sketch of this selection is given below). In Figure <ref>, we see that the cells with the youngest detected populations are denser, with the stellar density remaining between 0-2.5 stars/arcsec^2 after 12 Myr. A possible explanation is that stars have migrated from their initial sites of formation over 12 Myr to populate the field. This is consistent with the timeline of stars emerging from their giant molecular clouds by roughly 8 Myr (see Section <ref>) and then populating the field over time. More work will need to be done to confirm that stellar migration is being observed. We plan to model this in our future work.
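A minimal sketch of this selection for a single cell is given below; the input arrays are placeholders, and the third criterion is implemented under one reading of the wording above (the best-fit SFR minus its lower uncertainty must be non-negative).

import numpy as np

def youngest_detected_bin(sfr, sfr_lower_err, bin_width_yr):
    # Bins are assumed ordered from youngest to oldest.
    mass = sfr * bin_width_yr
    for i in range(len(sfr)):
        if (sfr[i] >= 0.03 * sfr.sum()              # at least 3% of the total SFR
                and mass[i] >= 0.01 * mass.sum()    # at least 1% of the total mass formed
                and sfr[i] - sfr_lower_err[i] >= 0.0):
            return i
    return None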
§ CONCLUSIONS
In this paper, we presented the spatially-resolved star formation history of NGC 6946 in the last 25 Myr, measured using resolved NUV stellar photometry. We implemented a quadtree algorithm to devise a spatial grid, and we measured the SFH independently for each cell using CMD-fitting. We summarize our main findings below.
* We measure the global SFR over the last 25 Myr to be 13.16^{+0.91}_{-0.79} M_⊙/yr.
* 16-25 Myr ago, the SFR was 23.38^{+2.43}_{-2.11} M_⊙/yr. The SFR then monotonically decreases between 10-16 Myr, reaching a steady recent SFR in the past 10 Myr of 5.31^{+0.18}_{-0.17} M_⊙/yr.
* We present the star formation histories of the Hodge Complex and the HII region at the tip of the northeast spiral arm. Both contain a higher amount of recent star formation than expected for regions of their size. The Hodge Complex shows more recent star formation relative to the declining global star formation.
§ ACKNOWLEDGEMENTS
We thank the reviewer for their helpful comments that improved the paper. This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. These observations are associated with program GO-15877.
HST(WFC3/UVIS)
astropy <cit.>, DOLPHOT <cit.>, GeoPandas <cit.>, <cit.>, Matplotlib <cit.>, NumPy <cit.>, Pandas <cit.>.
|
http://arxiv.org/abs/2307.05312v1 | 20230711145937 | On the thermoelastic coupling of anisotropic laminates | [
"Paolo Vannucci"
] | math.AP | [
"math.AP",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.04957v1 | 20230711012009 | Reinforcement Learning with Non-Cumulative Objective | [
"Wei Cui",
"Wei Yu"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.NI",
"math.OC",
"stat.ML"
] |
Reinforcement Learning with Non-Cumulative Objective
Wei Cui, Student Member, IEEE, and Wei Yu, Fellow, IEEE
Manuscript submitted on November 10, 2022, revised on August 12, 2023. This work is supported by Natural Sciences and Engineering Research Council (NSERC) of Canada via the Canada Research Chairs Program.
The authors are with The
Edward S. Rogers Sr. Department of Electrical and Computer Engineering,
University of Toronto, Toronto, ON M5S 3G4, Canada
(e-mails: {cuiwei2, weiyu}@ece.utoronto.ca).
October 2023
In reinforcement learning, the objective is almost always defined as a cumulative function over the rewards along the process. However, there are many optimal control and reinforcement learning problems in various application fields, especially in communications and networking, where the objectives are not naturally expressed as summations of the rewards. In this paper, we recognize the prevalence of non-cumulative objectives in various problems, and propose a modification to existing algorithms for optimizing such objectives. Specifically, we dive into the fundamental building block for many optimal control and reinforcement learning algorithms: the Bellman optimality equation. To optimize a non-cumulative objective, we replace the original summation operation in the Bellman update rule with a generalized operation corresponding to the objective. Furthermore, we provide sufficient conditions on the form of the generalized operation as well as assumptions on the Markov decision process under which the globally optimal convergence of the generalized Bellman updates can be guaranteed. We demonstrate the idea experimentally with the bottleneck objective, i.e., the objectives determined by the minimum reward along the process, on classical optimal control and reinforcement learning tasks, as well as on two network routing problems on maximizing the flow rates.
Reinforcement Learning, Optimal Control, Markov Decision Process, Wireless Network, Routing.
§ INTRODUCTION
In reinforcement learning (RL), an agent performs a sequence of actions to optimize a certain objective, over an environment modeled as a Markov decision process (MDP) <cit.>. The objective value is determined by the collection of intermediate rewards the agent receives until the MDP is terminated (or an absorbing state is reached). In most of the literature, the objective is defined as the summation of these intermediate rewards, which corresponds to the summation operation in the Bellman optimality equation <cit.> when computing the value function. Such cumulative objectives indeed capture the ultimate goals for many problems, such as Atari games <cit.>, stock trading <cit.>, advertisement placements <cit.>, and so on. Nonetheless, there are many problems with objectives that do not translate to summations of rewards.
Specifically, in the field of wireless communications, there are many system optimization problems that can be formulated and decomposed into sequences of optimization decisions, whose global objectives cannot be readily expressed as summations of rewards from individual optimization decisions. Examples of such problems include but are not limited to max-min optimizations in routing and resource allocation <cit.>, harmonic mean maximization for traffic engineering <cit.> and for transmission system optimization <cit.>, the proportional fairness optimizations for wireless communications <cit.>, and so on. In this paper, we recognize the prevalence of problems with non-cumulative objectives, and propose modifications to many existing optimal control and RL algorithms for optimizing such objectives[The code for this paper is available at: https://github.com/].
In the optimal control or reinforcement learning literature, one class of problems with non-cumulative objectives is the class of problems where only terminal states matter, such as the game of Go <cit.> or Chess <cit.>. Researchers managed to cast the objectives into summations of rewards, by assigning every reward a zero value except for the terminal reward. Problems seeking fast task completions form another class of examples, such as maze navigation or the mountain-car control task <cit.>. Researchers cast the objectives as cumulative rewards by assigning a penalty for each action the agent takes before reaching the destination <cit.>. There is also research on objectives that are not easily cast into summations, such as the average-reward objective <cit.>. To optimize the average reward, besides computing the summation of rewards, the number of steps is either tracked explicitly <cit.>, or taken to the limit at infinity (for cyclic non-terminating MDPs) <cit.>. Regardless, the summation operation in the Bellman optimality equation remains in these proposed algorithms. There have been two works <cit.> exploring maximum-reward objectives, with applications to financial derivatives and medicine design. These works recognize the possibility of modifying the Bellman optimality equation; however, their scopes are restricted to the maximum-reward objective formulation, instead of generalizing to a larger class of objective functions or proposing universal conditions for convergence. Furthermore, for MDPs whose state transition is a stochastic function of the input, the convergence to the global optimal policy cannot be guaranteed for the approach in <cit.> and <cit.>.
In this paper, we generalize the optimal control and reinforcement learning objectives to a variety of functions over the intermediate rewards. To optimize the generalized objectives, we exploit the flexibility in the Bellman optimality equation and modify it accordingly to the generalized objective functions. Specifically, we replace the summation operation in the Bellman optimality equation by new operations catering to the non-cumulative objective functions. Through this approach, we can readily adapt the existing optimal control or reinforcement learning algorithms to optimizing non-cumulative objectives, without needing to re-engineer a new set of artificial rewards just to cast the objectives into a summation of rewards. Furthermore, we provide the theoretical analysis on the generalized Bellman updates, and propose sufficient conditions on the form of the new operation as well as the assumptions on the MDP under which the global optimality of the converged value function and the corresponding greedy policy can be guaranteed.
By expanding the possibilities of the objective functions, we are now able to solve problems with objectives that are intrinsically non-cumulative. For experiments, we focus on the bottleneck objective: the objective as the minimum reward of all intermediate rewards. To optimize bottleneck objectives, we replace the summation operation in the Bellman optimality equation by the minimization operation, and apply the generalized Bellman update rule to learn the value function. In numerical simulations, we first re-formulate two classical reinforcement learning problems: the CartPole problem <cit.> and the Atari game Breakout, with bottleneck objectives. Through optimizing these problems with the proposed generalized Bellman updates, we obtain competitive performances by policies with different strategies from the classical solutions.
We further experiment on two network communication applications with bottleneck objectives: the problem of finding the single-path maximum flow on a directed graph as an optimal control task, as well as joint routing and spectrum access over a wireless ad hoc network as a reinforcement learning problem. The proposed approach achieves excellent performances on both problems that are otherwise difficult to solve using the conventional formulation and learning algorithms. Specifically, for the wireless ad hoc network problem, a prior work <cit.> has explored the Monte-Carlo estimation approach for learning the value function. In contrast, the proposed generalized update rule allows for the adaptation of the highly efficient temporal difference learning technique <cit.> to the generalized objective formulation, which results in noticeably faster and more stable learning progress. Furthermore, as the wireless ad hoc network problem is essentially a multi-agent reinforcement learning (MARL) problem, the results obtained also suggest that the proposed approach is readily compatible and effective under the multi-agent reinforcement learning setting.
The rest of the paper is organized as follows. In Section <ref>, we introduce the general problem description on optimizing non-cumulative objectives, as well as several examples where non-cumulative objectives are applicable. In Section <ref>, we formally propose the method of the generalized Bellman update rules, and provide theoretical convergence and optimality analysis. We provide the detailed problem formulations on several example applications, and elaborate on how the proposed generalizations can be applied to optimizing such specific applications in Section <ref>, followed by the numerical simulations and analysis of the results in Section <ref>. Lastly, we draw conclusions in Section <ref>.
§ GENERALIZED OPTIMAL CONTROL & REINFORCEMENT LEARNING FORMULATION
§.§ Conventional Formulation
Let 𝒮 and 𝒜 denote the state space and the action space of an MDP. At time step t, the agent observes a state s_t∈𝒮, executes an action a_t∈𝒜, and receives a reward r_t∈ℛ while transiting to the next state s_t+1∈𝒮. We use {p_R_t|S_t,A_t(r_t|s_t,a_t)}_t=1,2… and {p_S_t+1|S_t,A_t(s_t+1|s_t, a_t)}_t=1,2… to denote the reward distribution and the state transition distribution of the MDP, but often omit the subscripts for notational simplicity, e.g., as in {p(r_t|s_t,a_t)}_t=1,2… and {p(s_t+1|s_t,a_t)}_t=1,2…. In most of the literature, the objective is defined as the summation of all intermediate rewards the agent received along the process:
u = r_1+γ r_2+γ^2 r_3+… ,
where γ∈(0,1) is the discount factor to encourage the agent focuses more on rewards closer in time. The study of control (when both the reward distribution p(r_t|s_t,a_t) and the state transition distribution p(s_t+1|s_t,a_t) are known) or reinforcement learning (when neither p(r_t|s_t,a_t) nor p(s_t+1|s_t,a_t) is known) is to find a policy π for the agent to select actions based on states as a_t∼π(s_t),∀ t, such that u in <ref> is optimized.
Corresponding to <ref>, the value function is defined as the future cumulative rewards the agent expect to receive under a specific policy. Let 𝒱={(s,a) | s∈𝒮, a∈𝒜} denote the set of all possible state-action pairs. The value function Q^π∈ℛ^|𝒱| is a vector containing the future cumulative rewards expected starting from each (s,a) tuple, with:
Q^π(s_t, a_t)
= 𝔼_{ {p(r_t'|s_t',a_t')}, {p(s_t'+1|s_t',a_t')}, {a_t'+1∼π(s_t'+1)} }_{t'=t,t+1,…}[ r_t + γ r_t+1 + γ^2 r_t+2 + … | s_t, a_t ]
= 𝔼_{ p(r_t|s_t,a_t), p(s_t+1|s_t,a_t), a_t+1∼π(s_t+1) }[ r_t + γ Q^π(s_t+1, a_t+1) | s_t, a_t ] .
As shown in <cit.>, for stationary single-agent fully-observable MDPs, there exists a deterministic global-optimal policy π^*, with its value function denoted as Q^*, with the following relationship:
π^*(s_t)=argmax_aQ(s_t,a)
Essentially, π^*(s_t) is a deterministic distribution with all its probability density on the single action that maximizes Q(s_t,a_t). Therefore, π^* is commonly referred to as a greedy policy.
For optimal control, Q^* can be computed by <ref> with π being the global optimal greedy policy π^*, leading to the Bellman optimality equation:
Q^*(s_t, a_t) = 𝔼_{ p(r_t|s_t,a_t), p(s_t+1|s_t,a_t) }[ r_t + γ max_a_t+1 Q^*(s_t+1, a_t+1) | s_t, a_t ] .
Meanwhile, for reinforcement learning, Q^* is learned through iterative updates of sample-based approximations to <ref>, known as the Bellman update:
Q(s_t, a_t) ← r_t+γmax_a_t+1Q(s_t+1, a_t+1) ,
where the superscript on Q is dropped, since during these updates, Q does not necessarily correspond to the value function of any policy. We note that in Bellman updates, as shown by <ref>, the updated estimations for Q are obtained through bootstrapping from the current estimations. This learning technique is commonly known as temporal difference learning <cit.>, which enjoys low estimation variance and high learning efficiency. <ref> is used directly in the value-based algorithms such as SARSA <cit.>, Q-learning <cit.>, with the process commonly referred to as value iteration; and policy-based algorithms such as the class of Actor-Critic methods <cit.>.
§.§ Generalized Non-Cumulative Objectives
While it is proper to express the objective as <ref> in many scenarios, there exist applications where the objective u is intrinsically some other function over the intermediate rewards. In this paper, to generalize the class of objectives that can be optimized, we formulate the objectives as general functions over intermediate rewards:
u=f(r_1, r_2,r_3,…) .
Examples for such objectives can be seen from a wide variety of problems, which include, but are not limited to, the following classes of problems:
* The bottleneck of the intermediate rewards along the process, which fits into the large class of max-min optimization problems <cit.>. Among these max-min optimizations, the network routing problems are perhaps the most standout examples.
* The largest reward among the intermediate rewards along the process <cit.>.
* The harmonic mean of the intermediate rewards along the process, such as the average traveling velocity, electrical resistance in circuits, density of mixture. It has also been used in wireless communications as a measure of fairness among users <cit.>.
Among various non-cumulative objectives, the objective of the bottleneck reward is particularly prevalent. An important class of problems with bottleneck objectives are the network routing problems. Consider a data flow in a communication network consisting of multiple links, the highest rate the flow supports is the rate of the bottleneck link (i.e. the link with the lowest rate). Correspondingly, network routing problems are best formulated by the bottleneck objective. We describe such problems in detail in Section <ref>.
§ LEARNING ALGORITHMS WITH GENERALIZED BELLMAN UPDATES
This section aims to generalize optimal control and reinforcement learning to MDPs with non-cumulative objectives as <ref> by modifying the operation within the Bellman updates in <ref>. We present sufficient conditions on the modified operation as well as assumptions on the underlying MDPs such that the Bellman updates still maintain the global optimal convergence property. Furthermore, we provide examples of frequently-encountered non-cumulative objectives with corresponding operations that satisfy the conditions for convergence.
§.§ Bellman Update with Generalized Operations
Observing <ref>, the update target of the new iteration consists of three fundamental elements:
* Intermediate reward r_t,
* Value function at next state-action pair Q(s_t+1, a_t+1),
* Summation operation to combine <ref> and <ref>.
In this paper, we explore substitutions of <ref> in the Bellman optimality equation and its update rule by an alternative computational operation, which we refer to as the generalized Bellman update operation, denoted by g(·,·). The operation takes <ref> and <ref> as its two arguments. As a result, we generalize <ref> to the following form:
Q(s_t, a_t) ← g(r_t, γmax_a_t+1Q(s_t+1, a_t+1)).
Through this generalized Bellman update operation, we are able to adapt the highly efficient temporal difference learning technique, as well as many popular reinforcement learning algorithms that based on it (e.g. SARSA, Q-learning, Actor-Critic), to optimizing the non-cumulative objectives, with minimal changes to these algorithms.
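As a minimal illustration (not the exact implementation released with this paper), the tabular sketch below replaces the usual target r + γ max Q with g(r, γ max Q) for a user-supplied operation g; the environment interface and the integer state/action encoding are assumptions.

import numpy as np

def generalized_q_update(Q, s, a, r, s_next, done, g, gamma=0.95, lr=0.1):
    # Temporal-difference step with a generalized Bellman operation g.
    # At termination there is no future value, so the target reduces to the reward itself.
    target = r if done else g(r, gamma * np.max(Q[s_next]))
    Q[s, a] += lr * (target - Q[s, a])
    return Q

# The conventional objective and the bottleneck objective used later differ only in the choice of g:
g_sum = lambda r, v: r + v       # cumulative objective (standard Q-learning)
g_min = lambda r, v: min(r, v)   # bottleneck objective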
To determine which generalized objective functions f(⋯) as per <ref> can be optimized with such generalized Bellman updates, the first criterion is that the objective function needs to have optimal substructure <cit.>, as a fundamental requirement of dynamic programming. Furthermore, for learning based algorithms with value function approximators (such as neural networks), it is desirable to have the value function Q with fixed-dimension outputs from state to state (under most scenarios, the value function is a scalar function). This corresponds to the requirement that the objective function should be computable by iteratively updating a fixed number of statistics over its arguments (i.e. the intermediate rewards). When the two requirements are satisfied, we can deduce the proper operation g(·,·) from the objective function f(⋯) on a case-by-case basis.
§.§ Conditions for Convergence
To facilitate the theoretical analysis, we denote each step of the value iteration by the function mapping F^π:ℛ^|𝒱|→ℛ^|𝒱|. The superscript π indicates that the policy π is used for action selection in the one-step look ahead target computation. Correspondingly, we have:
(F^π Q)(s_t, a_t) = 𝔼_{ p(r_t|s_t,a_t), p(s_t+1|s_t,a_t), a_t+1∼π(s_t+1) }[ g(r_t, γ Q(s_t+1, a_t+1)) ],
where the expectation is understood as conditioned on (s_t, a_t). When the deterministic greedy policy derived from the current Q is used, the value iteration is denoted by F^* as follows:
(F^*Q)(s_t, a_t) = 𝔼_{ p(r_t|s_t,a_t), p(s_t+1|s_t,a_t) }[ g(r_t, γ max_a_t+1 Q(s_t+1, a_t+1)) ].
The original Bellman updates in <ref> enjoy convergence to the global optimal value function, as shown in <cit.>. To generalize this convergence property for the generalized updates as <ref>, we present a sufficient condition on g(·,·) for ensuring convergence to a unique value function in the following theorem.
On a single-agent fully observable MDP, the series of value functions obtained from iteratively applying the generalized Bellman update rule Q← F^*Q as in <ref> is guaranteed to converge to a unique convergence point in ℛ^|𝒱| from any arbitrary starting point, if g(·,·):ℛ×ℛ→ℛ satisfies the following condition:
|g(a,b)-g(a,c)|≤|b-c| ∀ a,b,c∈ℛ
The mathematical proof of this theorem is presented in Appendix <ref>.
Note that in <ref>, we do not claim that the greedy policy resulted from the converged value function is the global optimal policy. Besides an additional condition we need on the operation g(·,·) (to be introduced in the next subsection), the main reason is that after generalizing the Bellman update operation to g(·,·), the value function learned through the value iteration process is no longer guaranteed to be the true expectation of the objective value as defined in <ref>, when the state transition functions and reward functions are stochastic. We elaborate on this observation in the following subsection.
§.§ Suboptimality with Stochastic Transitions and Rewards
With the generalized operation g(·,·), we express the objective in (<ref>) with g(·,·) as:
u=f(r_1, r_2, r_3, …) = g(r_1, γ g(r_2, γ g(r_3, …))) .
We have shown a condition on g(·,·) for convergence in <ref> for obtaining Q^*. Nonetheless, we observe that Q^* does not necessarily recover the true expectation of u when stochastic state transitions and rewards are considered. To illustrate this, consider an episode starting from the state s_1. Under the greedy policy π^* derived from Q^*, we take the expectation over p(r_t|s_t,a_t) and p(s_t+1|s_t,a_t) on <ref>, which leads to:
𝔼_{ a_t=π^*(s_t), p(r_t|s_t,a_t), p(s_t+1|s_t,a_t) }_{t=1,2,…}[ u(r_1, r_2, r_3, …) ]
= 𝔼_{ a_t=π^*(s_t), p(r_t|s_t,a_t), p(s_t+1|s_t,a_t) }_{t=1,2,…}[ g(r_1, γ g(r_2, γ g(r_3, …))) ] .
Meanwhile, starting from t=1, with the converged Q^* obtained from the generalized Bellman updates, we have:
𝔼_{ a_1=π^*(s_1) }[ Q^*(s_1, a_1) ]
= 𝔼_{ a_1=π^*(s_1), p(r_1|s_1,a_1), p(s_2|s_1,a_1), a_2=π^*(s_2) }[ g(r_1, γ Q^*(s_2, a_2)) ]
= 𝔼_{ a_1=π^*(s_1), p(r_1|s_1,a_1), p(s_2|s_1,a_1), a_2=π^*(s_2) }[ g(r_1, γ 𝔼_{ p(r_2|s_2,a_2), p(s_3|s_2,a_2), a_3=π^*(s_3) }[ g(r_2, γ Q^*(s_3, a_3)) ]) ] .
Comparing <ref> and <ref>, for 𝔼_a_1=π^*(s_1)[Q^*(s_1, a_1)] to be equal to the expectation of u under π^*, p(r_t|s_t,a_t), and p(s_t+1|s_t,a_t), we require g(·,·) to be exchangeable with 𝔼_π^*[·], 𝔼_p(r_t|s_t,a_t)[·], and 𝔼_p(s_t+1|s_t,a_t)[·]. With π^* being the deterministic greedy policy as in <ref>, the operation 𝔼_π^*[·] can always be exchanged with g(·,·). However, if p(r_t|s_t,a_t) or p(s_t+1|s_t,a_t) is stochastic, 𝔼_p(r_t|s_t,a_t)[·] or 𝔼_p(s_t+1|s_t,a_t)[·] is not necessarily exchangeable with g(·,·). In this case, <ref> and <ref> can potentially evaluate to different values, and therefore π^* derived from Q^* may be suboptimal.
Under this observation, in order to obtain a global optimality guarantee on the greedy policy π^*, we constrain the scope to deterministic MDPs. Furthermore, we introduce an additional condition on the generalized operation g(·,·) in order to establish global optimality, as formally stated in the following theorem:
Given a non-cumulative objective function u and its corresponding generalized Bellman update operation satisfying the condition <ref> from <ref>, let Q^* denote the convergence point of the value iteration (from iteratively applying the generalized Bellman update rule as in <ref>). For an MDP with deterministic p(r_t|s_t,a_t) and p(s_t+1|s_t,a_t), the greedy policy π^* derived from Q^* is guaranteed to be the global optimal policy, if g(·,·) satisfies the following additional condition:
b≥ c implies g(a,b)≥ g(a,c) ∀ a,b,c∈ℛ
The mathematical proof of this theorem is provided in Appendix <ref>.
We note that the assumptions on MDPs in <ref> are satisfied by a large class of optimal control and reinforcement learning problems: e.g., board games including Go and Chess, a subset of Atari games, the class of network routing problems (such as the problems to be studied in Section <ref>), and so on.
To summarize, given any general MDP, we may generalize its objective function and apply the generalized Bellman update as in <ref> to try to learn its value function. If the generalized update operation satisfies the condition as in <ref> in <ref>, the value iteration is guaranteed to converge to a unique convergence point. Furthermore, if the underlying MPD satisfies the assumptions in <ref>, and the update operation satisfies the condition <ref>, the convergence point is the optimal value function and the greedy policy π^* derived from the value function is guaranteed to be the global optimal policy.
§.§ Examples of Generalized Objectives and Bellman Update Operations
We introduce several widely applicable objectives, and present the corresponding modified Bellman update operations. In the appendices, we provide the proofs that these operations satisfy the conditions in <ref> and <ref>.
§.§.§ Bottleneck Reward Objective
The objective u is the minimum (i.e. bottleneck) intermediate reward in the process:
u(r_1, r_2, r_3, …) = min(r_1,r_2,r_3,…).
The corresponding modified Bellman update operation is:
g(r_t, γ max_a_t+1 Q(s_t+1, a_t+1)) = min(r_t, γ max_a_t+1 Q(s_t+1, a_t+1)),
where the discount factor γ is useful for encouraging the agent to postpone the occurrences of negative rewards that often correspond to undesired or failure outcomes.
The proof that the bottleneck update operation satisfies both conditions in <ref> and <ref> is presented in Appendix <ref>.
§.§.§ Maximum Reward Objective
The objective u is the maximum intermediate rewards within the process:
u(r_1, r_2, r_3, …) = max(r_1,r_2,r_3,…).
The corresponding modified Bellman update operation is:
g(r_t, γ max_a_t+1 Q(s_t+1, a_t+1)) = max(r_t, γ max_a_t+1 Q(s_t+1, a_t+1)).
The proof that the maximum update operation <ref> satisfies both conditions in <ref> and <ref> follows the same logic as the proof for the bottleneck update operation shown in Appendix <ref>.
§.§.§ Harmonic Mean Reward Objective
Assuming all intermediate rewards are positive, and that the process is always terminated after a fixed number of steps, the objective u is the harmonic mean of all intermediate rewards within the process:
u(r_1, r_2,…,r_T) = 1/( 1/r_1 + 1/r_2 + 1/r_3 + … + 1/r_T ) ,
where we omit the constant reward count. Examples of such applications with harmonic mean objectives include:
* Optimize average traveling speed over a trip consisting of a fixed number of intervals.
* Minimize resistance in a circuit with a fixed number of resistors in parallel connection.
* Optimize mixture density (e.g. alloys) with a fixed number of selections on equal-weight components.
Although maximizing <ref> is technically equivalent to minimizing the summation of the inverses of the rewards, we present it as an example of a non-cumulative objective function with a modified Bellman update operation (as shown below) that satisfies the proposed convergence conditions.
The corresponding modified Bellman update operation is:
g(r_t, γ max_a_t+1 Q(s_t+1, a_t+1)) = 1/( 1/r_t + 1/(γ max_a_t+1 Q(s_t+1, a_t+1)) ) .
The proof that the harmonic mean update operation satisfies both conditions in <ref> and in <ref> is presented in Appendix <ref>.
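As a quick numerical sanity check (not a substitute for the proofs referenced above), the snippet below samples random positive arguments and verifies that the three operations introduced in this subsection satisfy the non-expansiveness and monotonicity conditions stated in the two theorems of the previous section.

import numpy as np

ops = {
    "min":      lambda a, b: np.minimum(a, b),
    "max":      lambda a, b: np.maximum(a, b),
    "harmonic": lambda a, b: 1.0 / (1.0 / a + 1.0 / b),  # assumes positive arguments
}

rng = np.random.default_rng(0)
a, b, c = rng.uniform(0.1, 10.0, size=(3, 100000))       # positive random samples
for name, g in ops.items():
    non_expansive = np.all(np.abs(g(a, b) - g(a, c)) <= np.abs(b - c) + 1e-12)
    monotone = np.all(g(a, np.maximum(b, c)) >= g(a, np.minimum(b, c)) - 1e-12)
    print(name, non_expansive, monotone)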
§ APPLICATIONS OF GENERALIZED REINFORCEMENT LEARNING
§.§ Classical Reinforcement Learning Problems with Bottleneck Objectives
We first re-examine classical reinforcement learning problems, formulated with the bottleneck objectives as introduced in <ref>. In many classical optimal control and reinforcement learning applications, the agent's success is largely based on its ability to avoid failure or defeat. This is particularly the case when the MDPs lack significant intermediate milestones or checkpoints, such as the CartPole problem and the Atari game Breakout. Instead of regarding such tasks as collecting as many rewards as possible, the agent can interpret the tasks with the equally valid strategy of avoiding the worst outcome (corresponding to the lowest reward) as much as possible.
Conventionally, both tasks are formulated with the cumulative objective, each with an incremental rewarding scheme. In the CartPole task, a positive reward is assigned to the agent for every timestep it maintains the pole in the upright position; while in Atari, a positive reward is assigned each time the agent breaks a brick with the bouncing ball.
To formulate the task with the bottleneck objective for such classical tasks, we assign a negative reward to the agent when an undesired or failure event occurs after executing a certain action. For the other actions that do not directly lead to the failure events, we simply assign a zero intermediate reward. In the CartPole task, the agent aims to control the cart to vertically balance the pole. When the pole falls outside a pre-defined angle range, a negative reward is assigned to the agent. Similarly, for the Atari game Breakout, the agent controls the movement of a paddle to catch and reflect a bouncing ball upwards to destroy layers of bricks located above. Each time the agent fails to catch the falling ball with the paddle, it is assigned a negative reward. With the discount factor γ applied on rewards over time steps, the later the negative rewards occur, the higher the bottleneck objective is.
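For the CartPole case, where the single failure event coincides with episode termination, this reward scheme can be sketched as a thin wrapper around a generic environment exposing reset() and step(action) -> (observation, reward, done, info); the interface is an assumption rather than a specific Gym version, and the environment's original reward is simply discarded.

class BottleneckRewardWrapper:
    """Assign reward 0 on every step and -1 only when the failure event ends the episode."""

    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        reward = -1.0 if done else 0.0
        return obs, reward, done, info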
By optimizing the bottleneck objective, the agent is able to learn alternative strategies to these classical problems: For CartPole, the strategy is to prevent the pole from falling for as long as possible. For Breakout, the strategy is to keep the ball in play for a maximized duration through controlling the paddle to constantly catch and reflect the ball, which translates to, although not always most efficiently, maximizing the bricks destroyed and thus achieving competitive game scores.
§.§ Single-Path Maximum-Flow Routing with Bottleneck Objective on a Graph
§.§.§ Problem Setup
Consider a communication network modeled as a directed graph G=(𝒩, ℰ), where the set of nodes 𝒩 corresponds to the transmission nodes, and the set of edges ℰ corresponds to the communication links between the nodes.
A single-path data flow is routed through the network, from a fixed source node n_s∈𝒩 towards a fixed destination node n_t∈𝒩. Each directed edge e^n_i→ n_j∈ℰ from n_i∈𝒩 to n_j∈𝒩 represents the transmission link from n_i to n_j, and is assigned with a link rate capacity r(e^n_i→ n_j)=r_n_i→ n_j. We set r_n_i→ n_j=0 when there is no link from n_i to n_j in the network. The optimal routing problem is that of finding an ordered sequence of relay nodes as transmission hops, to form the route such that the bottleneck rate is maximized:
maximize_{n_1,n_2,…,n_m} min(r_n_s→ n_1, r_n_1→ n_2, …, r_n_m-1→ n_m, r_n_m→ n_t),
where {n_i}_i∈{1… m} denote the m relay nodes (with the number m adjustable) forming the route of the flow.
§.§.§ Generalized Optimal Control Solution
To find the single-path maximum flow within a given network represented by a directed graph as described above, we formulate the routing process as an MDP: the agent moves along the frontier node of the route, and makes sequential decisions on the selection of the node for each hop, until the destination node is reached. For the state space 𝒮, each state is uniquely identified by the frontier node the agent resides on. Specifically, we use s^n_i∈𝒮 to denote the state that the current frontier node of the partially established route is node n_i. For the action space 𝒜, we use a^n_i→n_j∈𝒜 to denote the action to move from node n_i to node n_j. Lastly, as specified in the problem setup, r_n_i→n_j corresponds to the reward for the action a^n_i→ n_j, which is the link rate capacity of the link from n_i to n_j.
To optimize the objective (<ref>), for each state and action pair (s^n_i, a^n_i→ n_j), the generalized update is as follows:
Q(s^n_i, a^n_i→ n_j) ← min(r_n_i→ n_j, γ max_n_k Q(s^n_j, a^n_j→ n_k)).
From the converged Q^*, we obtain the global optimal greedy policy π^* (guaranteed by the results in <ref> and <ref>), following which produces the flow route supporting the global maximal flow rate.
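The following self-contained sketch runs the min-based value iteration of <ref> on a toy directed graph; the node names and link capacities are invented purely for illustration.

capacity = {                          # capacity[u][v] = rate of the directed link u -> v
    "s": {"a": 5.0, "b": 3.0},
    "a": {"b": 4.0, "t": 2.0},
    "b": {"t": 6.0},
    "t": {},
}
dest, gamma = "t", 1.0                # discounting is optional on a finite acyclic graph

Q = {u: {v: 0.0 for v in nbrs} for u, nbrs in capacity.items()}
for _ in range(len(capacity)):        # enough synchronous sweeps to converge here
    for u, nbrs in capacity.items():
        for v, rate in nbrs.items():
            if v == dest:
                Q[u][v] = rate        # terminal hop: no future value to bootstrap from
            else:
                Q[u][v] = min(rate, gamma * max(Q[v].values(), default=0.0))

node, route = "s", ["s"]              # greedy route extraction from the converged Q
while node != dest:
    node = max(Q[node], key=Q[node].get)
    route.append(node)
print(route, max(Q["s"].values()))    # ['s', 'a', 'b', 't'], bottleneck rate 4.0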
§.§ Wireless Ad hoc Network Routing and Spectrum Access with Bottleneck Objective
§.§.§ Problem Setup
Consider the physical-layer routing problem as discussed in <cit.>. In a wireless ad hoc network with a set of transmission nodes 𝒩, a set of data flows 𝒦 is to be established, each consisting of multiple hops and having its own pair of source and destination nodes. A set of frequency bands ℬ is available for transmission, each with a bandwidth of W. We focus on two optimization tasks for these data flows: routing and spectrum access. The task of routing is to select an ordered list of intermediary relay nodes from 𝒩 to form the route for each data flow. The task of spectrum access is to select a frequency band from ℬ for the transmission of each hop in the route of each flow. We represent the route for flow k∈𝒦 as an ordered list denoted by
𝐧^(k):
𝐧^(k)=(n^(k)_0, n^(k)_1, n^(k)_2… n^(k)_m, n^(k)_m+1) ,
where n^(k)_0 and
n^(k)_m+1 represent the fixed source and destination node for flow k, and {n^(k)_i}_i∈{1… m} represent the m relay nodes (with the number m adjustable) forming the route of flow k. We represent the spectrum access solution for flow k as an ordered list denoted by 𝐛^(k), containing the selected frequency band of each hop:
𝐛^(k)=(b^(k)_1, b^(k)_2, b^(k)_3… b^(k)_m+1) ,
where b^(k)_i ∈ℬ denotes the frequency band selected for the i-th hop in the route of flow k, with i∈{1… m+1}. As the global topology of the ad hoc network is not available as inputs, the agents need to learn to infer the network topology during the routing process.
Consider a link from node n_i to node n_j over frequency band b. Let
h_(n_i→ n_j,b)∈𝒞 denote its channel coefficient. The maximum transmission rate of this link is based on
the signal to interference plus noise ratio (SINR) as follows:
SINR_(n_i→ n_j,b) = x_n_i,b |h_(n_i→ n_j,b)|^2 p / ( ∑_{n_l∈𝒩, n_l≠ n_i,n_j} x_n_l,b |h_(n_l→ n_j,b)|^2 p + σ^2 ) ,
r_(n_i→ n_j,b) = Wlog(1+SINR_(n_i→ n_j,b)) .
where p and σ^2 denote the
transmit power of each node and the background noise power on each frequency band. The binary control variable x_n_i,b indicates whether the node n_i is transmitting on the band b or idle. The objective for each flow u^(k) is the transmission rate it supports, which is the bottleneck link rate:
u^(k) = r_min^(k) = min_{i=0,1,2,…,m} r_(n^(k)_i→ n^(k)_i+1, b^(k)_i+1) .
The global objective u over all data flows is then defined as the average of the bottleneck rates over all data flows:
u = ∑_k∈𝒦u^(k)/|𝒦|
§.§.§ Generalized Reinforcement Learning Solution
For the physical layer routing and spectrum access problem, we assign one agent per data flow, with each agent moving along the frontier node of its flow and making hop-by-hop decisions. With multiple data flows to be jointly optimized, this problem is essentially a multi-agent reinforcement learning problem, with higher complexity than the maximum flow routing problem on a graph as in Section <ref>. By optimizing this problem with the bottleneck objective formulation and the generalized Bellman updates, we demonstrate that the proposed approach is competitive and highly effective in the setting of multi-agent reinforcement learning.
For better parameter efficiency, we only train one set of parameters shared among all agents. We assume the wireless network is only partially observable to each agent, meaning Q^* is no longer guaranteed to be global optimal. Nonetheless, as shown in later simulations, the corresponding π^* is still competitive. We adopt the MDP formulation as in <cit.>. At each step, each agent gathers 4 pieces of information on frequency band b for each of its c closest neighboring nodes: the distance to the neighbor; the distance from the neighbor to the flow destination; the angle between these two directions; and the signal interference on the neighbor on band b. With this information, the agent forms the state s on band b with s∈ℝ^4c,∀ s∈𝒮. For the action space 𝒜, the agent has c+1 actions on band b: one action for connecting with each of the c nodes via b, and one action for reprobing (if none of the c nodes is suitable). We use s^(n_i,b) to denote the state in which the frontier node of the partially established flow is node n_i and the transmission to the next hop uses the band b. We use a^(n_i→ n_j,b) to denote the agent's action to establish the link from node n_i to node n_j using band b, which is assigned a reward equal to the rate of this link r_(n_i→ n_j,b). During training, these link-rate rewards are computed after the routes are formed.
As the bottleneck rate is not expressible as summations, <cit.> uses the Monte-Carlo method <cit.> for estimating the value function. The key improvement we propose over <cit.> is to utilize the modified Bellman update rule for training the agents in the off-policy fashion, providing higher data efficiency, faster convergence, and better performances. Using <ref>, the generalized updates for training each agent are:
Q( s^(n_i,b), a^(n_i→ n_j,b)) ←min(r_(n_i→ n_j,b), γmax_n_k,b'Q( s^(n_j,b'), a^(n_j→ n_k,b'))).
After predicting Q values for all frequency bands, the agent selects the action with the single highest Q value among all bands to establish the new link, which specifies not only the optimal node as the next hop, but also the optimal frequency band for transmission to that node.
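As a rough sketch of how this update can be used with function approximation, the snippet below computes the regression targets for a batch of transitions under both the generalized (Q-Min) and the conventional (Q-Sum) rule; the array shapes and the terminal-state handling are assumptions made for illustration, not the exact training code of the paper.

import numpy as np

def q_min_targets(rewards, next_q_values, dones, gamma=1.0):
    # rewards: [batch] link rates; next_q_values: [batch, actions] from the target network,
    # concatenated over all frequency bands; dones: [batch] True if the next node is the destination
    bootstrap = gamma * next_q_values.max(axis=1)
    targets = np.minimum(rewards, bootstrap)       # generalized (bottleneck) Bellman target
    return np.where(dones, rewards, targets)       # terminal hop: target is the link rate itself

def q_sum_targets(rewards, next_q_values, dones, gamma=1.0):
    bootstrap = gamma * next_q_values.max(axis=1)  # conventional cumulative target, for comparison
    return np.where(dones, rewards, rewards + bootstrap)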
§ SIMULATIONS
We experiment on the optimal control and reinforcement learning problems and compare solutions from the conventional Bellman updates and from our proposed generalized Bellman updates. We use following terms to refer to each algorithm:
* Q-Min: Optimal control solution or RL policy based on the value function obtained from the generalized Bellman update rule as <ref> and <ref>.
* Q-Sum: Optimal control solution or RL policy based on the value function obtained from the conventional Bellman update rule as <ref>.
§.§ Classical Reinforcement Learning Problems
We use the double-DQN architecture <cit.> to model the agents. During training, the policy with decaying ϵ is used for collecting experiences, along with prioritized experience replay <cit.> for sampling training batches in each update step.
§.§.§ CartPole Task
To solve the CartPole task with the Q-Min algorithm, when the pole falls outside of the pre-defined angle range (±12^∘ from the up-right position), we assign a negative reward of -1 to the agent. To encourage the agent to postpone negative reward occurrence, we use a discount factor γ=0.95 in <ref>. For learning with the Q-Sum algorithm, we follow the conventional incremental rewarding scheme that has been long used in this task.
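A minimal sketch of this reward scheme in a gym-style training loop is given below; the environment and agent interfaces (env.step, agent.act, agent.store) are hypothetical placeholders, and the exact step signature differs between gym versions.

def run_episode(env, agent, gamma=0.95):
    # Q-Min reward scheme for CartPole: 0 on every step, -1 only when the pole falls.
    # The agent then maximizes min-type returns of gamma^k * (-1) by postponing the failure event.
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)
        next_state, _, done, _ = env.step(action)      # built-in incremental reward is ignored
        reward = -1.0 if done else 0.0
        # caveat: if the episode ends by hitting the step limit rather than by the pole falling,
        # the -1 should ideally not be applied; this is omitted here for brevity
        agent.store(state, action, reward, next_state, done)
        state = next_state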
We illustrate the agent learning progress under both algorithms in <ref>, where we evaluate each agent's performance, averaged over 25 new episodes, after every 12500 update steps of training. We note that we stick with the conventional cumulative objective for CartPole as the performance metric when visualizing the learning progress of both algorithms as shown in <ref>, which allows us to compare both algorithms directly. A competitive performance by the Q-Min algorithm on the cumulative objective would indicate that our alternative bottleneck objective is also a viable formulation of the task.
As shown by the numerical results, besides the oscillations in both learning curves (as DQN is known for unstable learning), the Q-Min agent and the Q-Sum agent learn to balance the pole at a similar pace throughout training. The close results between the two algorithms validate that the bottleneck objective is indeed a suitable alternative to the CartPole objective formulation.
§.§.§ Atari Breakout Game
To solve Atari with the proposed Q-Min algorithm, we utilize a simple reward scheme under Q-Min: we assign a negative reward of -1 to the agent each time it fails to catch the ball with the paddle, and set γ=0.98 in <ref> to encourage the agent to postpone such failure events. For learning with the Q-Sum algorithm, we follow the conventional incremental rewarding scheme originally built into the Atari game engine.
We present the learning progress of the Q-Min agent and the Q-Sum agent in <ref>, with each agent's performance evaluated and averaged over 5 new game runs, after every 50 thousand update steps of training. Similar to the learning progress visualization for CartPole, we also use the conventional cumulative objective for the original Breakout game as the performance metric when plotting the learning curves of both algorithms.
Unlike in CartPole, the Q-Min agent shows a slightly slower learning progress and lower performance for Breakout. This is likely due to the Q-Min agent not learning the strictly optimal trajectories of redirecting the ball for hitting the most bricks, as its sole objective is to keep the ball in play. Nonetheless, even with simpler and more sparse rewards than the rewards used by Q-Sum, the Q-Min agent still manages to achieve relatively close performances to the Q-Sum agent, especially at the late training stage. The results illustrate the viability of interpreting Breakout with the bottleneck objective formulation.
We emphasize again that, specifically for these two classical problems, our goal is not to show that the proposed Q-Min algorithm is strictly superior to the conventional Q-Sum algorithm. After all, these two problems have long served as the canonical examples for conventional reinforcement learning problems formulated with cumulative objectives. Instead, we have shown that it is also valid to interpret and optimize these classical problems with the bottleneck objective formulation. Through learning with the proposed generalized Bellman update rule, the agent is capable of achieving performances comparable with the results from the conventional reinforcement learning approach as presented above. Essentially, when optimizing the agent under the bottleneck objective formulation, the agent learns an alternative game-playing strategy for both CartPole and Breakout: to avoid or delay the failure event for as long as possible.
§.§ Single-Path Maximum-Flow Routing on Graph
We consider the directed graph network shown in <ref>, and perform the Q-Min algorithm with <ref> until convergence. With the MDP in this problem being finite, we set γ=1, which simplifies the numerical results with Q values precisely equal to the future bottleneck rates.
In Table <ref>, we present the iterations of both the Q-Min algorithm and the Q-Sum algorithm. In the first row of the table, we adopt the simplified notations for Q values: we use Q_n_i→ n_j to uniquely denote the state-action value function Q(s_n_i, a_(n_i,n_j)) in <ref>. We adopt synchronized iterations, where in each new iteration, the Q value in the right-hand-side of <ref> comes from the previous iteration. All the iterations of value function updates are shown until convergence.
For the Q-Min algorithm, it takes 4 iterations of the generalized Bellman updates to converge. From the resulted Q^*, we deduce the optimal policy π^* producing the following optimal flow route:
s→ b→ a→ d→ t .
This route obtained supports a flow rate of 5, which is indeed the global optimal flow rate.
On the other hand, for the Q-Sum algorithm, the convergence speed is lower than Q-Min, as it takes 5 iterations of the regular Bellman updates to converge. Furthermore, the deduced optimal policy results in the following flow route:
s→ b→ a→ c→ d→ t .
This route supports a flow rate of 4, which is sub-optimal and inferior to the route obtained by the Q-Min algorithm.
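For completeness, a self-contained tabular version of the Q-Min iterations is sketched below on a small directed graph with illustrative edge capacities (not the graph of the figure). With γ=1, each converged Q value equals the best bottleneck rate reachable through that edge, and the greedy policy reads off the single-path maximum-flow route.

def q_min_value_iteration(edges, dest, n_iter=20):
    # edges: dict {(i, j): capacity} of a directed graph; dest: destination node
    Q = {e: 0.0 for e in edges}
    for _ in range(n_iter):
        Q_new = {}
        for (i, j), cap in edges.items():
            if j == dest:
                Q_new[(i, j)] = cap                              # last hop: bottleneck is the edge itself
            else:
                out = [Q[(a, k)] for (a, k) in edges if a == j]  # outgoing edges of node j
                Q_new[(i, j)] = min(cap, max(out)) if out else 0.0
        Q = Q_new
    return Q

edges = {("s", "a"): 3, ("s", "b"): 6, ("b", "a"): 5, ("a", "d"): 7,
         ("a", "c"): 4, ("c", "d"): 9, ("d", "t"): 5}             # illustrative capacities
Q = q_min_value_iteration(edges, dest="t")
first_hop = max((e for e in Q if e[0] == "s"), key=Q.get)
print(first_hop, Q[first_hop])   # ('s', 'b') 5.0: greedy route s->b->a->d->t with bottleneck 5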
§.§ Wireless Ad hoc Network Routing and Spectrum Access
§.§.§ Experiment Settings
We simulate wireless ad hoc networks in a 1000m×1000m region with |𝒦|=3 data flows and |ℬ|=8 frequency bands. We adopt the same specifications as in <cit.> as we aim to compare results and illustrate the effectiveness of the proposed Q-Min algorithm. Specifically, we consider the short-range outdoor model ITU-1411 with a distance-dependent path-loss to model all wireless channels, over all frequency bands at 2.4GHz carrier frequency. Shadowing and fast-fading are not considered in the simulation setting. This corresponds to an outdoor environment (e.g., a rural or remote area), where the strengths of the wireless links are mostly functions of the distances between the transmitters and the receivers. We assume each of the |ℬ|=8 frequency bands has a 5MHz bandwidth for signal transmission. All antennas equipped at the transmission nodes have a height of 1.5m and 2.5dBi antenna gain. We assume a transmit power of 30dBm for all nodes and background noise at -130dBm/Hz.
To generate realistic wireless network layouts, the node locations are randomly generated with varying node densities over the region. Specifically, we divide the 1000m×1000m network region into nine equal sub-regions, and randomly locate (6, 8, 7, 6, 5, 10, 8, 9, 6) nodes within each of the nine sub-regions correspondingly.
§.§.§ Training Convergence Speed Comparison
We train each set of |𝒦|=3 agents with three algorithms: Q-Min, Q-Sum, and the algorithm by <cit.>:
* Q-MC: RL policy based on the value function obtained from the Monte-Carlo episodic estimations of future bottleneck rewards, computed at the end of episodes.
We generate 380,000 wireless ad hoc network layouts for training the agents under each algorithm, under the following training schedule:
* Initial 30,000 layouts are used for random routing on collecting initial experience.
* The middle 300,000 layouts are used for the ϵ-greedy policy based routing, with the ϵ value following linear annealing from 1.0 to 0.1 throughout the training over these layouts.
* The final 50,000 layouts are used with ϵ=0 for the final convergence stage.
We use the Dueling-DQN architecture <cit.> to model all the agents, with the neural network specifications listed in <ref>, same as in <cit.>. Since the rewards as link rates are dense throughout the MDP, uniform sampling is sufficient for experience replay. We use c=10 as the number of neighbors the agent explores each time. The state inputs to the DQNs are therefore 40-component vectors, i.e. s∈ℛ^40,∀ s∈𝒮.
For the same reasons as in <ref>, we set γ=1 in <ref>. During training, we track both the mean-squared-errors for predictions on Q, and the routing performances over 100 newly generated network layouts at each update step of training. The training curves for all three algorithms are displayed in <ref>.
As shown by the learning curves, the conventional Q-Sum agents collectively achieve the worst learning progress and simply fail to converge on the Q value estimations. While both the Q-Min agents and the Q-MC agents converge to comparable performances, the Q-Min agents enjoy a much faster convergence speed. This illustrates the advantage of temporal difference learning over the Monte-Carlo method, which is made possible for non-cumulative objectives with the proposed generalized update rules.
To better understand why Q-Min achieves noticeably faster convergence than Q-MC, we emphasize that the Monte-Carlo estimations used by Q-MC are highly affected by the random explorations especially at the early stage of training. Certain random explorations might lead to an extremely low bottleneck rate for the newly established route. This bottleneck rate is then used as the Monte-Carlo estimation on the value function during training the Q-MC agent. Thus, the value estimations learned by the Q-MC agents suffer from low qualities significantly at the beginning of training.
On the other hand, with the proposed generalized update operation, the bottleneck objective can be estimated by the temporal difference learning technique as in the Q-Min algorithm. As an off-policy[An off-policy algorithm separates the policy that the value estimation is based on from the sampling policy, which is desired when the sampling policy is highly noisy (e.g. with many random explorations).] learning algorithm, the temporal difference learning estimations are much more resilient to the random explorations, since the estimation target is obtained through one-step bootstrapping on the already learned value function. Therefore, it is not a surprise to see the significant improvements on the training efficiency and convergence speed by the Q-Min agents.
§.§.§ Performances on Bottleneck Rates
We present in <ref> test results on the number of links established on each flow, as well as the achieved bottleneck flow rates for these data flows, over 1000 newly generated testing wireless ad hoc networks. Furthermore, for each method, we collect the bottleneck rates of all data flows over all testing wireless networks, and present the cumulative distribution function (CDF) of these bottleneck rates in <ref>. As shown by both the statistics and the distributions of the flow rates, the Q-Min agents achieve the best routing results, whereas the Q-Sum agents perform the worst by a large margin, while having much higher numbers of links over the established data flows.
We visualize the optimized routes by each RL algorithm over a random wireless ad hoc network in <ref>. The Q-Min agents learn to establish links with medium lengths. This policy ensures a certain level of channel strength for the bottleneck links, without constructing too many links to avoid excessive interference which is detrimental to the bottleneck link rates. Furthermore, the Q-Min agents also learn to spatially spread out data flows as well as the frequency bands used among links for effective interference mitigation.
On the other hand, the Q-Sum agents learn the policy that connects unnecessarily many short links to form routes, neglecting the importance of the bottleneck link within each flow. Evidently, the conventional reinforcement learning formulation is unsuitable for solving such routing problems. For this reason, in application fields such as network communications, generalizing the objective function and its learning rule through our proposed approach is an essential optimization technique.
§ CONCLUSION
This paper recognizes the possibilities of formulating optimal control or reinforcement learning objectives as non-cumulative functions over rewards, and generalizes existing algorithms to optimizing such objectives. Specifically, we explore the generalized operations in the Bellman update rule, for which we provide the global convergence conditions with mathematical proofs. We also recognize the assumptions required on the MDP state transitions and reward functions for ensuring the global optimality on the obtained policies. With the generalized objectives and learning algorithms, we are able to unveil alternative strategies to classical optimal control or reinforcement learning problems, and more importantly, realize the possibilities for solving new problems with intrinsically non-cumulative objectives, which are frequently encountered in the fields such as network communications. This opens up directions for a broader range of applications for optimal control and reinforcement learning techniques.
§ PROOF OF <REF>
If g(·,·) satisfies the condition <ref> in <ref>, then the generalized value function update F^* as in <ref> is a contraction mapping.
In the following mathematical expressions, for the simplicity of notations, we use p(r_t,s_t+1|s_t,a_t) as the shorthand notation for the joint distribution of p_R_t|S_t,A_t(r_t|s_t,a_t) and p_S_t+1|S_t,A_t(s_t+1|s_t,a_t). We also assume p(r_t,s_t+1|s_t,a_t) is a discrete distribution. For continuous distribution, the proof still holds with summations substituted by integrations when computing the expectations.
For any pair of value functions ∀ Q^1, Q^2 ∈ℛ^|𝒱|, we have:
‖ F ^*Q^1-F^*Q^2‖_∞
= max_s_t,a_t| (F^*Q^1)(s_t,a_t)-(F^*Q^2)(s_t,a_t)|
= max_s_t,a_t|∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)g(r_t, γmax_a_t+1Q^1(s_t+1, a_t+1))
-∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)g(r_t, γmax_a_t+1Q^2(s_t+1, a_t+1))|
= max_s_t,a_t|∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)[g(r_t, γmax_a_t+1Q^1(s_t+1, a_t+1))-g(r_t, γmax_a_t+1Q^2(s_t+1, a_t+1))]|
≤ max_s_t,a_t∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)| g(r_t, γmax_a_t+1Q^1(s_t+1, a_t+1)) -g(r_t, γmax_a_t+1Q^2(s_t+1, a_t+1))|
≤ max_s_t,a_t∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)|γmax_a_t+1Q^1(s_t+1, a_t+1) -γmax_a_t+1Q^2(s_t+1, a_t+1)|
≤ γmax_s_t,a_t∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)max_a_t+1| Q^1(s_t+1, a_t+1) -Q^2(s_t+1, a_t+1)|
≤ γmax_s_t,a_t∑_r_t,s_t+1p(r_t,s_t+1|s_t,a_t)‖ Q^1-Q^2‖_∞
≤ γ‖ Q^1-Q^2‖_∞ ,
where (<ref>) follows from the condition <ref> in <ref>; (<ref>) follows from the fact that for any two functions f_1 and f_2, we have |sup_x f_1(x)-sup_x f_2(x)|≤sup_x|f_1(x)-f_2(x)|; and lastly, (<ref>) follows from the normalization of the probability distribution p(r_t,s_t+1|s_t,a_t).
With Lemma <ref> established, we can readily prove the main theorem of the global convergence of the value function updates.
Starting from any arbitrary value function initialization point Q^0∈ℛ^|𝒱|, consider the value iteration process of iteratively applying the mapping F^*. With ℛ^|𝒱| being a Banach space and F^* being a contraction mapping (by Lemma <ref>), according to the Banach's fixed-point theorem <cit.>, the process is guaranteed to converge to a unique convergence point Q^*.
§ PROOF OF <REF>
If g(·,·) satisfies the condition <ref> in <ref>, then the generalized value function update F^* as in <ref> is monotonic, i.e., ∀ Q^1, Q^2∈ℛ^|𝒱|, if Q^1≥ Q^2, then F^*Q^1≥ F^*Q^2 always holds[The notation Q^1≥ Q^2 implies Q^1(s,a)≥ Q^2(s,a), ∀ s, a].
Q^1≥ Q^2
Q^1(s,a) ≥ Q^2(s,a), ∀ s,a
max_a Q^1(s,a) ≥max_a Q^2(s,a), ∀ s
g(r, γmax_a Q^1(s,a))≥ g(r, γmax_a Q^2(s,a)), ∀ s,r
where <ref> follows from the condition <ref> in <ref>.
Now we introduce the time step into the equations, and consider any given state and action pair at time t: s_t and a_t. Let s=s_t+1 in <ref> be the state the agent is in after executing a_t on s_t; and let r=r_t be the reward from executing a_t on s_t. We then have:
Q^1≥ Q^2
g(r_t, γmax_a Q^1(s_t+1,a))≥ g(r_t, γmax_a Q^2(s_t+1,a)), ∀ r_t, s_t+1
(F^*Q^1)(s_t,a_t) ≥ (F^*Q^2)(s_t,a_t), ∀ s_t, a_t
F^*Q^1 ≥ F^*Q^2
To show that Q^* is the optimal point, let π_0 be an arbitrary initial policy and Q^0∈ℛ^|𝒱| be the corresponding value function (therefore Q^0=F^π_0Q^0). We have:
Q^0 = F^π_0Q^0 ≤ F^*Q^0
where the inequality follows from the maximization over the actions by the greedy policy mapping F^*. Applying F^* again on both sides of the inequality, and by the monotonicity from Lemma <ref> we have:
Q^0 ≤ F^*Q^0 ≤F^*^2Q^0
where F^2 denotes iteratively applying the mapping F twice. After iteratively applying F^* until convergence, we arrive at a chain of inequalities ending with the unique convergence point:
Q^0 ≤ F^*Q^0 ≤F^*^2Q^0 ≤F^*^3Q^0 ≤…≤lim_n→∞F^*^nQ^0 = Q^*
Since Q^0 is an arbitrary value function, we have shown that the unique point of convergence Q^* is indeed the global maximum value function.
Furthermore, given the assumptions that p(r_t|s_t,a_t) and p(s_t+1|s_t,a_t) are deterministic as required by <ref>, we have g(·,·) being exchangeable with 𝔼_p(r_t|s_t,a_t)[·] and 𝔼_p(s_t+1|s_t,a_t)[·]. By bringing the expectations inside the operation g(·,·), we can see that <ref> and <ref> are equivalent. Correspondingly, the converged point of the value iteration is truly the expectation value of the objective function as defined in <ref>. Therefore, the greedy policy π^* derived from the value function Q^* is truly the global optimal policy.
§ PROOF OF CONVERGENCE PROPERTIES ON THE BOTTLENECK UPDATE OPERATION
To show that the bottleneck update operation <ref> satisfies the condition by <ref> in <ref>:
Without loss of generality, we assume b≥ c, then we have:
* If c≤ b≤ a, then:
|g(a,b)-g(a,c)|=|min(a,b)-min(a,c)|=|b-c|.
* If c≤ a<b, then:
|g(a,b)-g(a,c)|= |min(a,b)-min(a,c)|
= a-c
< |b-c|.
* If a<c≤ b, then:
|g(a,b)-g(a,c)|= |min(a,b)-min(a,c)|
= 0
≤ |b-c|.
To show that the bottleneck operation <ref> satisfies the condition by <ref> in <ref>:
Given b≥ c, we have:
* If a≤ c, then:
g(a,b) = min(a,b)=a=min(a,c) = g(a,c).
* If c< a≤ b, then:
g(a,b) = min(a,b)=a> c=min(a,c) = g(a,c).
* If a>b, then:
g(a,b) = min(a,b)=b≥ c=min(a,c) = g(a,c).
§ PROOF OF CONVERGENCE PROPERTIES ON THE HARMONIC MEAN UPDATE OPERATION
To show that the harmonic mean operation <ref> satisfies the condition by <ref> in <ref>:
With our assumption of positive rewards, we have:
|g(a,b)-g(a,c)| = |1/1/a+1/b-1/1/a+1/c|
= |b-c|/(1/a+1/b)(1/a+1/c)bc
≤ |b-c|/(1/b)(1/c)bc
= |b-c|
To show that the harmonic mean operation <ref> satisfies the condition by <ref> in <ref>:
Given b≥ c. With our assumption of positive rewards, we have:
g(a,b)-g(a,c) = 1/1/a+1/b-1/1/a+1/c
= b-c/(1/a+1/b)(1/a+1/c)bc
≥ 0
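Both appendices can also be cross-checked numerically. The short sketch below samples random positive triples (a, b, c) and verifies the non-expansiveness and monotonicity conditions for the bottleneck and the harmonic-mean update operations; the sampling range and tolerance are arbitrary choices.

import random

def g_min(a, b):
    return min(a, b)

def g_harm(a, b):
    return 1.0 / (1.0 / a + 1.0 / b)     # harmonic-mean-type operation (positive rewards)

random.seed(0)
for g in (g_min, g_harm):
    for _ in range(100000):
        a, b, c = (random.uniform(1e-3, 10.0) for _ in range(3))
        lo, hi = min(b, c), max(b, c)
        assert abs(g(a, b) - g(a, c)) <= abs(b - c) + 1e-12   # non-expansive in the second argument
        assert g(a, hi) + 1e-12 >= g(a, lo)                    # monotone in the second argument
print("both conditions hold on all sampled triples")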
Wei Cui (S'17) received the B.A.Sc. degree in Engineering Science, the M.A.Sc. degree in Electrical and Computer Engineering, and the Ph.D. degree in Electrical and Computer Engineering from University of Toronto, Toronto, ON, Canada, in 2017, 2019, and 2023, respectively.
His research interests include machine learning, optimization, and wireless communication.
Wei Yu (Fellow, IEEE) received the B.A.Sc. degree in computer engineering and mathematics from the University of Waterloo, Waterloo, ON, Canada, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA. He is now a Professor in the Electrical and Computer Engineering Department at the University of Toronto, Toronto, ON, Canada, where he holds a Canada Research Chair (Tier 1) in Information Theory and Wireless Communications. He is a Fellow of the Canadian Academy of Engineering and a member of the College of New Scholars, Artists, and Scientists of the Royal Society of Canada. Prof. Wei Yu was the President of the IEEE Information Theory Society in 2021, and has served on its Board of Governors since 2015. He served as the Chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society from 2017 to 2018. He was an IEEE Communications Society Distinguished Lecturer from 2015 to 2016. He served as an Area Editor of the IEEE Transactions on Wireless Communications, as an Associate Editor for IEEE Transactions on Information Theory, and as an Editor for the IEEE Transactions on Communications and IEEE Transactions on Wireless Communications. Prof. Wei Yu received the Steacie Memorial Fellowship in 2015, the IEEE Marconi Prize Paper Award in Wireless Communications in 2019, the IEEE Communications Society Award for Advances in Communication in 2019, the IEEE Signal Processing Society Best Paper Award in 2008, 2017 and 2021, the Journal of Communications and Networks Best Paper Award in 2017, and the IEEE Communications Society Best Tutorial Paper Award in 2015.
http://arxiv.org/abs/2307.07588v1 | 20230714192659 | A large 'Active Magnetic Shield' for a high-precision experiment | [C. Abel, N. J. Ayres, G. Ban, G. Bison, K. Bodek, V. Bondar, T. Bouillaud, E. Chanel, J. Chen, W. Chen, P.-J. Chiu, C. B. Crawford, M. Daum, C. B. Doorenbos, S. Emmenegger, L. Ferraris-Bouchez, M. Fertl, A. Fratangelo, W. C. Griffith, Z. D. Grujic, P. Harris, K. Kirch, V. Kletzl, P. A. Koss, J. Krempel, B. Lauss, T. Lefort, P. Mullan, O. Naviliat-Cuncic, D. Pais, F. M. Piegsa, G. Pignol, M. Rawlik, I. Rienäcker, D. Ries, S. Roccia, D. Rozpedzik, W. Saenz-Arevalo, P. Schmidt-Wellenburg, A. Schnabel, E. P. Segarra, N. Severijns, T. Shelton, K. Svirina, R. Tavakoli Dinani, J. Thorne, R. Virot, N. Yazdandoost, J. Zejma, N. Ziehl, G. Zsigmond] | physics.ins-det | [physics.ins-det, hep-ex, nucl-ex] |
nEDM collaboration
University of Sussex, Department of Physics and Astronomy,Falmer, Brighton BN1 9QH, UK
ETH Zürich, Institute for Particle Physics and Astrophysics, CH-8093 Zürich, Switzerland
Normandie Univ, ENSICAEN, UNICAEN, CNRS/IN2P3, LPC Caen, 14000 Caen, France
Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Marian Smoluchowski Institute of Physics, Jagiellonian University, 30-348 Cracow, Poland
LPSC, Université Grenoble Alpes, CNRS/IN2P3, Grenoble, France
University of Bern, Albert Einstein Center for Fundamental Physics, CH-3012 Bern, Switzerland
University of Kentucky, Lexington, USA
Institute of Physics, Johannes Gutenberg University, D-55128 Mainz, Germany
Institute of Physics Belgrade, University of Belgrade, 11080 Belgrade, Serbia
Physikalisch Technische Bundesanstalt, Berlin, Germany
Institute for Nuclear and Radiation Physics, KU Leuven, B-3001 Leuven, Belgium
Department of Chemistry - TRIGA site, Johannes Gutenberg University, 55128 Mainz, Germany
e8 Corresponding author: [email protected]
e9 Corresponding author: [email protected]
e10 Corresponding author: [email protected]
e7 Present address: Institut Laue Langevin, 71 avenue des Martyrs CS 20156, 38042 GRENOBLE Cedex 9, France
e6 Present address: University of Zurich, 8057 Zurich, Switzerland
e5 Present address: Hochschule Luzern, 6002 Luzern, Switzerland
e4 Present address: Fraunhofer Institute for Physical Measurement Techniques, 79110 Freiburg, Germany
e3 Present address: Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
A large `Active Magnetic Shield' for a high-precision experiment
C. Abel (Sussex), N. J. Ayres (ETH), G. Ban (CAEN), G. Bison (PSI), K. Bodek (Cracow), V. Bondar (ETH, e8), T. Bouillaud (LPSC), E. Chanel (Bern, e7), J. Chen (CAEN), W. Chen (ETH, PSI), P.-J. Chiu (ETH, PSI, e6), C. B. Crawford (Kentucky), M. Daum (PSI), C. B. Doorenbos (ETH, PSI), S. Emmenegger (ETH, e5), L. Ferraris-Bouchez (LPSC), M. Fertl (Mainz), A. Fratangelo (Bern), W. C. Griffith (Sussex), Z. D. Grujic (Serbia), P. Harris (Sussex), K. Kirch (ETH, PSI, e9), V. Kletzl (ETH, PSI), P. A. Koss (Leuven, e4), J. Krempel (ETH, e10), B. Lauss (PSI), T. Lefort (CAEN), P. Mullan (ETH), O. Naviliat-Cuncic (CAEN), D. Pais (ETH, PSI), F. M. Piegsa (Bern), G. Pignol (LPSC), M. Rawlik (ETH, e3), I. Rienäcker (PSI), D. Ries (PSI), S. Roccia (LPSC), D. Rozpedzik (Cracow), W. Saenz-Arevalo (CAEN), P. Schmidt-Wellenburg (PSI), A. Schnabel (PTB), E. P. Segarra (PSI), N. Severijns (Leuven), T. Shelton (Kentucky), K. Svirina (LPSC), R. Tavakoli Dinani (Leuven), J. Thorne (Bern), R. Virot (LPSC), N. Yazdandoost (Mainz2), J. Zejma (Cracow), N. Ziehl (ETH), G. Zsigmond (PSI)
We present a novel Active Magnetic Shield (AMS), designed and implemented for the n2EDM experiment at the Paul Scherrer Institute. The experiment will perform a high-sensitivity search for the electric dipole moment of the neutron. Magnetic-field stability and control are of key importance for n2EDM. A large, cubic, 5 m side length, magnetically shielded room (MSR) provides a passive, quasi-static shielding factor of about 10^5 for its inner sensitive volume. The AMS consists of a system of eight complex, feedback-controlled compensation coils constructed on an irregular grid spanning a volume of less than 1000 m^3 around the MSR. The AMS is designed to provide a stable and uniform magnetic-field environment around the MSR, while being reasonably compact. The system can compensate static and variable magnetic fields up to ±50 μT (homogeneous components) and ±5 μT/m (first-order gradients), suppressing them to a few μT in the sub-Hertz frequency range.
The presented design concept and implementation of the AMS fulfills the requirements of the n2EDM experiment and can be useful for other applications, where magnetically silent environments are important and spatial constraints inhibit simpler geometrical solutions.
§ INTRODUCTION
High-precision measurements in fundamental physics, using particles, nuclei, atomic, or molecular systems, require exquisite temporal stability and spatial uniformity of many environmental parameters to control systematic effects and fully exploit their statistical sensitivity. The control of the magnetic field is of particular importance in those experiments sensitive to the coupling of the magnetic field to the spin of the system through its magnetic moment. For example, experiments searching for permanent or variable electric dipole moments (EDMs), signals of dark matter fields, neutron-antineutron and mirror-neutron oscillations, Lorentz invariance violation, or new forces <cit.>.
Most of them deploy dedicated coil systems generating uniform magnetic fields inside magnetically shielded volumes. Shielding of these volumes can be achieved by means of passive or active magnetic shielding (AMS), separately, or, in combination. Passive shields are built from high-permeability materials and rely on their magnetic properties. Active magnetic shields are based on feedback-controlled coils, where magnetic sensors detect changes of the magnetic field, and an algorithm calculates the proper response to adjust the coil currents and counteract the perturbation.
Since the 1980s, numerous active shields have been built for different applications <cit.>, covering a wide range of research areas such as ion beams, electron microscopes, and bio-medical applications, as well as high-precision measurements of EDMs <cit.>.
In particular, an active magnetic shield was successfully used for the first time by our collaboration in the nEDM experiment, which provides the current best measurement of the neutron EDM <cit.>. The system consisted of six actively-controlled rectangular coils with size of approximately 8 m × 6 m, located in a Helmholtz-like positioning. The coils were built around a control volume of 2.5 m × 2.5 m × 3 m. The system was crucial to fully exploit the statistical sensitivity of the experiment <cit.>.
In this paper, we report on the design-path, implementation, and initial performance characterization of a dedicated AMS for the n2EDM experiment <cit.>, currently undergoing commissioning at the ultracold neutron (UCN) source <cit.> at the Paul Scherrer Institute (PSI).
A tenfold improvement in statistical sensitivity of n2EDM over nEDM will be realized by many innovations, primarily by improved adaption to the UCN source and two enlarged vertically-stacked UCN storage-chambers. The target systematic error budget yields stringent requirements for the magnetic-field stability and uniformity, and, thus, advanced shielding from magnetic-field disturbances. The n2EDM experiment uses a combination of passive and active shieldings around the sensitive volume.
The passive shielding is provided by a Magnetically Shielded Room (MSR) <cit.> with a base size of 5.2 m × 5.2 m and a height of 4.8 m. It is composed of five mu-metal layers, one ULTRAVAC layer, and one intermediate RF-shielding layer, with a shielding factor of 10^5 at 0.01 Hz, rising with frequency to 10^8 at 1 Hz, as shown in Fig. <ref>.
The specifications for the n2EDM internal magnetic field are discussed in detail in Ref. <cit.>. A crucial requirement is that the average magnitudes of magnetic fields in the two UCN storage chambers (vertically separated by 18 cm) must not differ by more than 10 pT, which corresponds to a vertical magnetic-field gradient smaller than 0.6 pT/cm. This limits the systematic effects coming from the different resonance conditions in the two chambers.
Due to the quasi-static shielding factor of the MSR of 10^5, slow external field changes of order 1 μT will lead to internal field changes of order 10 pT. As the shell structure of the MSR and its high-quality, innermost layer tend to homogenize magnetic field changes, internal magnetic-field gradients resulting from external gradients are further suppressed <cit.>. Thus, an external, inhomogeneous variation of a few μT around the MSR can be tolerated. However, larger field variations on the outside of the MSR could cause larger-than-allowed internal gradients. In addition, larger external field changes can change the magnetization of the outermost mu-metal layer of the MSR, which will, in turn, slowly propagate through the MSR layers, and result in undesirable drifts of the inner magnetic field.
The task for the AMS in n2EDM is thus to provide a magnetic field around the MSR that is stable to within a few μT, even with sub-Hertz external variations, in order to meet the 10 pT condition on the inside. For large, slow magnetic-field variations, of order ten or several tens of μT, this also corresponds to an improved overall shielding performance in the low-frequency regime, see Fig. <ref>.
This paper describes the design and implementation of the AMS system for the n2EDM experiment (see <cit.>)
and is organized as follows:
(i) The magnetic fields over the complete volume occupied by the entire experimental apparatus were mapped before setting up the n2EDM experiment and are described in Sec. <ref>. The disturbance of the field resulting from neighbouring magnetic instruments was evaluated. All relevant fields could be reproducibly measured and described, to the required precision, by superpositions of homogeneous (three directions) and first-order gradient (five independent components) magnetic-field contributions. This established the need for eight independent and ideally `orthogonal' coils for the field compensation system.
(ii) A method was developed to design optimal coils for specific magnetic fields when constraining the current-carrying wires to a predetermined, irregular grid on a surface around the volume of interest <cit.>, described in Sec. <ref>.
(iii) A scaled-down prototype was developed and served as a proof-of-concept system, see Sec. <ref>. It allowed tests of various design options, including an irregular geometry, the powering of the coils, and the implementation of feedback sensors and appropriate algorithms, with and without mu-metal.
(iv) A scheme to systematically simplify the individual, optimal, full-scale coils was developed to ease practical construction of the AMS without sacrificing the specified performances <cit.> (Sec. <ref>). This included reducing windings in the eight coils and their efficient powering with eight current sources, each feeding three circuits.
(v) The system, comprising more than 55 km of cabling, was constructed with careful quality control during assembly, as described in Sec. <ref>.
(vi) Current sources were developed and implemented (Sec. <ref>). An array of three-axis fluxgate sensors was implemented to monitor the magnetic field and inform the feedback algorithm (Sec. <ref>).
(vii) The commissioning of the full AMS system was successfully completed with various performance studies, as described in Sec. <ref>.
The AMS design was based on an earlier-developed method <cit.> and driven by the sensitivity requirements in the n2EDM spectrometer taking into consideration spatial constraints of the experimental setup.
In the following chapters we present the main concept of the AMS design, a small-scale proof-of-principle prototype and the implementation of the AMS system into the n2EDM experiment.
§ MAPPING OF THE EXPERIMENTAL AREA
The n2EDM experiment is located at PSI in UCN area South. Before setting up n2EDM, its predecessor nEDM was disassembled and the area cleared. Figure <ref> shows a view of the empty experimental area. The concrete blocks are part of the biological shield of the UCN source (to the left) and of the medical cyclotron COMET (forward direction and to the right). These blocks cannot be moved, and thus ultimately limit the space available for the n2EDM experiment. The MSR was decoupled from the rest of the hall on its own foundation, which is seen in the picture as brown floor, indicating approximately the size of the MSR base. Given the size of the MSR and the spatial constraints of the biological shields, the coils of the AMS system have to be as close as around 1 m to the MSR, and still providing the desired homogeneous field in the volume of interest. This immediately excludes AMS field generation with simple Helmholtz-like coil systems.
Before designing the AMS system, the magnetic field of the experimental area was extensively mapped <cit.> to determine the components of the magnetic field that have to be compensated. The measured field was then decomposed, by a least-squares fit, into zeroth order, homogeneous fields, first-order gradients, and higher-order contributions, obtaining field maps and interpolated continuous fields.
Figure <ref> shows the mobile `mapping tower' in action. The tower was constructed using up to five identical, 2 m-long aluminum triangle-truss segments from commercially available event stage equipment. Each segment carried three 3-axis fluxgate sensors. The segments were vertically stacked and mounted on a heavy aluminum base plate on wheels. The position and orientation of this cart in the area were measured by three string potentiometers attached to a rigid coordinate system referenced to the area. The full area could thus be magnetically mapped within minutes, with a spatial resolution limited by the positioning reproducibility of about 0.1 m.
Several strong superconducting magnets at 10 m to 50 m distances contribute with fields in the tens of range and field gradients of a few /m. Their influence is particularly severe as their fields can change during n2EDM measurements without prior notice. Typical time scale of such uncontrolled changes can vary from minutes to tens of minutes, possibly several times a day. So that the AMS can compensate those changes, for each of the known strong nearby magnets, the mapping was performed with it on and off. Thereby, the change of the magnetic field in the space of the n2EDM experiment was measured.
The superconducting magnet shown in Fig. <ref> is less problematic, as it is self-shielded with a known steep field gradient and controlled by the n2EDM experiment operation. During n2EDM operation it is always ramped up and running very stably in a persistent mode.
The reproducibility of maps taken under similar conditions was of the order of a few , where the limitation might be due to drifts of the fields themselves or uncertainties of the measurement and the analysis procedures. Importantly, it was found that the measured fields could be sufficiently described, with 1-accuracy, with only homogeneous and first-order gradient fields.
We concluded that the AMS needed only coils to compensate homogeneous fields in the three independent spatial directions, and five independent first-order gradients. Therefore, a system could be designed using only eight independent coils.
Concerning field strengths, it was found that a range of ±50 μT for the three homogeneous components of the field, and up to ±5 μT/m for the five first-order gradients, would be sufficient to meet our requirements. These values already include a safety margin of 20%.
§ THE CONCEPT OF THE AMS DESIGN
§.§ Working principle
In a volume with no magnetised parts, any magnetic field can be generated by the correct current distribution on the surface of this volume. The currents on the surface can thus be chosen to exactly counteract the effect of any external field, stabilising the field inside. In a practical realisation, there is a finite number of coils on the surface, and the field is measured only in a finite number of points. Figure <ref> depicts a simple realisation with a single coil (shown in yellow) and eight 3-axis sensors (shown in green).
Here, an external magnetic field B_𝐞 influences the target volume, in which the field is to be stabilised (depicted in blue, occupied by the MSR of n2EDM). We aim, however, for optimal (in the least-squares sense) stabilization at the eight green points where three-axis sensors are placed. Their readings B_𝐦 are used to calculate currents I feeding a coil system to counteract external field changes. Obviously, in the real application, the coils are much more complicated than shown. In principle one can aim at any target field at the surface of the sensitive volume. In our application aiming at zero field is most reasonable.
In the absence of the MSR and other magnetization, one obtains a linear dependence between B_𝐦 and the coil currents.
In fact, we initially assume, and later prove experimentally in section <ref>, that a linear dependence also holds for a demagnetized MSR exposed only to small magnetic fields. One can write:
B_𝐦 =B_𝐞+M I.
The matrix M contains the proportionality factors, which relate the current in the AMS coils to the magnetic fields measured at the sensor positions. For a built system, the entries of M can be measured using the installed coils and sensors (see Sec. <ref>). During design of the coil itself, they can be calculated, without the MSR using Biot-Savart's law, and with the MSR, using a sufficiently realistic finite-element simulation of the full system. Equation <ref> can be written components-wise in the following way:
[ B_m,1x; B_m,1y; B_m,1z; ⋮; B_m,nz ] =
[ B_e,1x; B_e,1y; B_e,1z; ⋮; B_e,nz ]
+
[ M_ 11x M_ 21x … M_ k1x; M_ 11y M_ 21y … M_ k1y; M_ 11z M_ 21z … M_ k1z; ⋮ ⋮ ⋱ ⋮; M_ 1nz M_ 2nz … M_ knz; ][ I_ 1; I_ 2; ⋮; I_ k ].
Matrix M has dimensions 3 n × k, where k is the number of coils (for the AMS system k=8, see Sec. <ref>), and n is the number of magnetic-field sensors.
Next, one implements an iterative process using the measured changes in B_𝐦 (see Sec. <ref>) to calculate appropriate changes for I to zero B_𝐦 again.
In the applied feedback algorithm the pseudo-inverse of M is used.
In order to calculate the pseudo-inverse we start with a Singular Value Decomposition:
M=USV^T,
where U and V are unitary matrices and S is diagonal. The latter is called the spectrum and describes the effect of combinations of coils on the magnetic sensors. The pseudo-inverse M^-1 can be calculated as:
M^-1=VS^-1U^T.
The ratio of extreme values of the spectrum, s_min and s_max, defines a condition number C of the matrix M:
C=s_max/s_min.
The condition number is an important characteristic of the quality of the feedback matrix and thus of the system design. It represents the sensitivity of the sensors to current changes. A large condition number means that there are particular combinations of currents I that have only a small influence on the readings of the sensors measuring B_𝐦. When the matrix is inverted, small changes in the sensor readings then cause large changes in the currents, rendering the system unstable.
The condition number is later used in the optimization of the sensor positions in the AMS system (Sec. <ref>).
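A minimal sketch of one feedback iteration built from these relations is given below: the SVD of the measured response matrix M yields both the condition number and the pseudo-inverse, and the current correction is the one that nulls the latest sensor readings in the least-squares sense. Matrix dimensions, the singular-value cutoff, and the absence of a loop gain or slew-rate limit (which a real implementation would typically add) are simplifying assumptions.

import numpy as np

def feedback_step(M, B_measured, I_applied, rcond=1e-6):
    # M: [3n, k] response matrix (field change per unit coil current)
    # B_measured: [3n] fluxgate readings (target field is zero); I_applied: [k] present coil currents
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    cond = s.max() / s.min()                    # condition number C = s_max / s_min
    s_inv = np.zeros_like(s)
    keep = s > rcond * s.max()                  # guard against near-singular directions
    s_inv[keep] = 1.0 / s[keep]
    M_pinv = Vt.T @ np.diag(s_inv) @ U.T        # pseudo-inverse M^-1 = V S^-1 U^T
    delta_I = -M_pinv @ B_measured              # current change that counteracts the measured field
    return I_applied + delta_I, cond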
§.§ Design challenges
In a volume with no magnetized parts, any magnetic field can be generated by the correct current distribution on the surface of this volume. In particular, the currents on the surface can be chosen to exactly counteract the effect of any external field, making the inner magnetic field zero. The MSR can be demagnetized and reside in the zero field inside the volume with exactly the same currents as needed for the empty volume without the MSR.
In the real experiment, currents cannot be arbitrarily distributed on a surface. They must follow predefined, discrete paths and the fields can only be adjusted by varying current values. The spatial discretization is a grid to which the current carrying wires are fixed.
The discretized current distribution can approximate the target field well, if the discretization is small compared to the distance between the target volume and the surface.
Several constraints for the AMS system were already mentioned. The coil system must be large compared to the size of the MSR, however, the walls of the experimental area ultimately limit the size of the surface to which the AMS could be mounted. In addition, the experimental area must be accessible. It should be possible for persons with reasonably sized equipment to enter the experiment without breaking the currents in the AMS. It should also be possible to open the AMS from the top to insert large equipment with the crane. Various other installations penetrate surfaces around the MSR such that the grid for the AMS must be adapted to the needs of other subsystems.
The shielding blocks seen in Fig. <ref> as well as the regular floor of the experimental hall are made of steel-reinforced concrete with some magnetic response. The latter was investigated and fortunately found to be rather weak and finally negligible if a minimal distance to the walls is maintained.
The design goal of the AMS system was to compensate environmental fields, so that the field around the MSR outside wall is stable and close to zero <cit.>.
Fig. <ref> schematically displays the volume of interest for the n2EDM AMS system as given by the MSR.
As an example, the depicted yellow coils could aim to compensate the external field. The size of the coils is limited by the surrounding experimental infrastructure.
The minimal distance between the compensating coils and the MSR walls is less than 1.5 m.
Due to these limitations, we used the method of Ref. <cit.> to develop more complicated AMS coils that produce the required homogeneous fields in the volume of interest.
This method allows the design of coils with wire paths along a predetermined grid that is mounted on the inside of the thermal shell around the MSR. In the next section we briefly introduce the basic principles of the method.
§.§ Method of simple coil design and its application for the AMS system
The aforementioned requirements and constraints of the system called for the development of a flexible method to design coils that could be practically built.
The employed method of coil design <cit.> is based on three key inputs:
(i) target fields, which have to be compensated by the coils;
(ii) a fixed grid where coil wires can be placed; and
(iii) points of interest (POIs) of target fields, covering the fiducial volume of interest densely enough (Fig. <ref>, left).
The grid can be subdivided into many small coils called tiles. A tile is the smallest building block of the grid. The smaller this elementary building block is, the more homogeneous the field can be. For practical reasons, we have chosen to make most tiles rectangular; however, the method could deal with any shape.
It is also worth reiterating that the method does not require the grid to be regular. In fact, the AMS system of the n2EDM experiment (Sec. <ref>) is implemented on an irregular grid.
Once target fields, grid, and POIs are defined, the magnetic field at the POIs,
B_𝐏𝐎𝐈, created by the currents I in the tiles, can be described similarly to Eq. <ref> by
B_𝐏𝐎𝐈 = M_𝐃 I.
In this design phase, each element of proportionality matrix M_𝐃 can now be calculated numerically using Biot-Savart's law.
Using a least-squares method we find the current needed in each tile to approximate the target field.
For the AMS system, there were 308 tiles in total (see Sec. <ref>).
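The least-squares step itself reduces to a small linear-algebra problem once the Biot-Savart response of every tile at every point of interest has been tabulated. The sketch below shows this step in its simplest form; the response matrix is assumed to be precomputed, and any regularization or constraint handling used in the actual design code is omitted.

import numpy as np

def tile_currents(M_D, B_target):
    # M_D: [3p, n_tiles] field response of a unit current in each tile at the p points of interest
    # B_target: [3p] target field at the points of interest
    I, *_ = np.linalg.lstsq(M_D, B_target, rcond=None)   # minimizes ||M_D I - B_target||^2
    return I                                             # one current value per tile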
The calculation is simplified by cancelling counteracting currents on the grid structure.
The algorithm described in <cit.> decomposes this grid of currents into simple loops, which are closed current paths that can be wound on the grid.
The result of this step is a set of such loops, that each need to be powered with a specific design current to generate the target field.
Examples of such simple loops are depicted in different colors in the right panel of Fig. <ref>.
In order to change the magnitude of the field generated by a system of loops, all loop currents in the system need to change proportionally to the design current.
Thus, the set of loops for a specific target field can be connected in series, creating one coil.
The number of windings for each loop can be adjusted, such that the coil could be powered by one current source.
However, for the AMS system it was decided to split each coil into three electrical circuits: with large, medium and small elementary currents (which will still be changed with the same proportionality and are integer multiples of the smallest current).
For example, choosing elementary currents of [15 A, 5 A, 1 A], a loop with a design current of 73 A would be wound as 73 = 4×15 + 2×5 + 3×1.
The finite value of the smallest elementary current leads to some quantization imperfection, here on the order of 0.5/73 ≈ 0.7%, or about 0.4 μT for a 50 μT field, within the requirements of the system.
This approach allowed minimization of winding efforts and self-inductance, while keeping the number of current source channels reasonable. In our case, we constructed eight independent coils, each with three circuits for the elementary currents. They are operated by eight current sources, each with three channels for the different currents.
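The splitting of a loop's design current into windings of the three circuits can be sketched as a simple greedy decomposition; the exact assignment scheme used for the AMS may differ, and rounding the last circuit is an assumption that keeps the quantization error below half of the smallest elementary current.

def winding_numbers(design_current, elementary=(15.0, 5.0, 1.0)):
    # positive design currents assumed; returns turns per circuit and the residual error
    turns, remainder = [], design_current
    for k, i_elem in enumerate(elementary):
        last = (k == len(elementary) - 1)
        n = round(remainder / i_elem) if last else int(remainder // i_elem)
        turns.append(n)
        remainder -= n * i_elem
    return turns, remainder

print(winding_numbers(73.0))   # -> ([4, 2, 3], 0.0), i.e. 73 = 4*15 + 2*5 + 3*1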
The described method of coil design for an AMS offers advantages over approaches using simple geometric coils, namely in two areas. 1) The size of the coil system can be decreased relative
to the size of the sensitive volume. A loss of performance, e.g., in the
homogeneity of a given volume, can always be counteracted by choosing a
denser grid.
2) The method allows construction of a coil for any target field, and the grid geometry can be chosen almost arbitrarily. In particular, we have chosen to construct coils that produce orthogonal fields. The magnetic field is described as a superposition of orthogonal, Cartesian harmonic polynomials:
𝐁(𝐫) = ∑_n=1^n_max H_n 𝐏_n(𝐫).
Here H_n are expansion coefficients and 𝐏_n(𝐫)=(P_nx,P_ny,P_nz) are polynomials as defined in Table <ref> for the three homogeneous
and five first-order gradient
fields.
Terms of higher n correspond to
higher-order gradients. The advantages of this decomposition are that the polynomials are orthogonal and each basis state satisfies Maxwell's equations. This method was initially considered by G. Wyszynski <cit.>. We used this approach to define eight AMS coils: three coils to compensate homogeneous fields, and five to compensate linear magnetic field gradients.
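As an illustration of such an expansion, the sketch below evaluates one possible set of eight first-order modes (three homogeneous components and five independent gradients), each of which is divergence- and curl-free; the specific polynomials used for the AMS coils are those defined in the table referenced above, and the basis written here is only a generic example.

import numpy as np

def basis_fields(r):
    # r: [N, 3] points; returns [8, N, 3] field values of eight first-order harmonic modes
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    zero, one = np.zeros_like(x), np.ones_like(x)
    modes = [
        (one, zero, zero), (zero, one, zero), (zero, zero, one),   # homogeneous B_x, B_y, B_z
        (z, zero, x), (zero, z, y),                                 # off-diagonal gradient modes
        (x, -y, zero), (y, x, zero), (-x, -y, 2 * z),               # remaining independent gradients
    ]
    return np.stack([np.stack(m, axis=-1) for m in modes])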
§ THE AMS PROTOTYPE
Before applying the simple-coil method to design and construct the AMS system for n2EDM, we studied a smaller-scale prototype <cit.> at ETH Zurich (Fig. <ref>).
§.§ The prototype design and construction
The prototype consisted of the eight types of coils as intended for the AMS system.
Similar in specifications, the system was designed to compensate target fields of ±50 μT for the homogeneous field components, and ±20 μT/m for the first-order gradients. It also aimed at a homogeneity of a few in the volume of interest.
The prototype was built on an aluminum-profile frame of 1.3 m × 2.3 m × 1.3 m in x-, y-, and z-directions, respectively, providing a grid of squares as shown in Fig. <ref>.
The sensitive, fiducial volume for the target fields was chosen to be a cube of 98 cm × 98 cm × 98 cm, placed asymmetrically in y-direction and centered in x and z, as shown in Fig. <ref> (grey contour), in which a cubic mu-metal shield could be placed. We kept the x-z side of the frame at y=0 completely open, opposite to the front-side seen in Fig. <ref>. This enabled easy access to the inside, e.g., to install a magnetic-field mapping device and the mu-metal, and in addition demonstrated the feasibility of designing and building coils with a more complex, irregular geometry.
The mu-metal cube in the prototype served as the emulation of the MSR of n2EDM concerning the fields on its outside. Its purpose was not to be an efficient magnetic shield but rather to provide a mu-metal surface to affect the fields between the mu-metal and the coil cage. It can be demagnetized using a set of demagnetization coils wound through the cube.
The design method restricted the wires of the coil system to take paths on the grid, similar to the ones shown in the right panel of Fig. <ref>.
As described in Sec. <ref>, each of the coils used three circuits with different values of maximal currents (here: 5 A, 1 A, and 0.2 A).
In total, eight current sources, each feeding three circuits, were used to provide all eight coils with their currents.
As an example, Fig. <ref> shows a simulation of the y-component of the magnetic field produced by the homogeneous y-field coil of the prototype in the x-y midplane. The map depicts deviations of the magnetic field from the target value of 50 μT. The predicted deviations from the target field did not exceed a few in the sensitive volume. Similar results were obtained for all the coils.
§.§ Performance of the prototype
Validation of the AMS fields.
We built a mapper robot carrying a movable three-axis fluxgate sensor to automatically measure the magnetic field in a large part of the volume inside the coil cage.
In a first characterization, the static performance of the prototype was assessed by comparing the predicted and the measured fields for each coil. As an example, Fig. <ref> shows measurement results for the magnetic field produced by the first-gradient coil G_1 (as defined in Table <ref>) at the central x-z plane, at y = 115 cm. As expected, the coil produces mainly B_x and B_z components of the field. The deviations of the measured fields from the target values do not exceed a few , which was the design goal; this has been confirmed for all coils.
Dynamic field stabilisation.
As a next step, we implemented a dynamic field stabilisation to actively suppress variable magnetic-field perturbations.
This mode is based on continuous monitoring of the magnetic-field changes by fluxgate sensors with ±200 range, 1 bandwidth, and ±0.5 accuracy. The sensors were mounted around the volume of interest and a dedicated DAQ system, based on Beckhoff EtherCAT modules <cit.>, was used to read their outputs, and to control the coil currents <cit.>.
The dynamic mode of operation relies on the quality of the feedback matrix 𝐌 (Eq. <ref>), which itself strongly depends on the number and positioning of the fluxgates, requiring optimization.
The optimization without mu-metal is straightforward. It is somewhat more challenging with mu-metal due to its strong position-dependent impact on the magnetic field in its vicinity.
The magnetic fields of the setup with the mu-metal cube were simulated with COMSOL <cit.> and validated by measurements.
With these simulations, the condition number (see Sec. <ref>) of the feedback matrix 𝐌 could be minimized by selecting proper positions for the feedback sensors.
It was found that sufficiently stable performance can be reached with eight sensors placed close to the corners of the mu-metal.
This fits well the intuitive understanding of the effects of mu-metal. Close to the surface of the mu-metal, field components parallel to the surface will be small while the orthogonal component remains.
As the fluxgate magnetometers used for feedback are three-axis devices, this would mean that a fluxgate aligned near a large flat surface can measure the orthogonal field well with one axis, while two axes provide relatively little useful information.
Thus, positioning fluxgates closer to mu-metal edges and corners turns out to be more informative.
With the fluxgates mounted at their optimal positions, the feedback matrix was determined by measuring the magnetic-field components while scanning each coil current separately over the whole available range.
For fields up to ±50 μT, relevant here, the response was found to be linear.
The slopes, obtained by linear regressions for each spatial field component versus current,
correspond to the elements of the matrix 𝐌. They are displayed in Fig. <ref> for the eight 3-axis sensors and eight coils.
Shielding performance of the AMS prototype.
The obtained feedback matrix was used to operate the AMS in dynamic-stabilization mode. To test this regime, magnetic-field perturbations can be generated using a coil placed at some location around the setup.
For this measurement, a square excitation coil with sides of around 1 m was used. The coil was oriented perpendicular to the x axis and placed at about x=3 m with its center on the y, z coordinate of the center of the sensitive volume. The current source for the coil was modulated with a waveform generator to produce sinusoidal fields, with an uncompensated, maximal amplitude of about 8 at the central sensor position. The readout bandwidth was 200 Hz and the update rate of the feedback system was around 30 Hz.
The AMS shielding performance was characterized by a shielding factor S, defined as the ratio:
S = B_center^off/B_center^on,
where B_center^on/off denotes the magnetic-field value at the center of the sensitive volume (corresponding also to the center of the mu-metal cube) with the active compensation on or off, respectively. The mu-metal cube was not used for the particular measurement explained here. In addition to the feedback sensors, one additional sensor to measure B_center was mounted.
Figure <ref> shows the obtained result for S and its frequency dependence up to 1. The measured shielding factor stays stable around 12 in the low-frequency range. The decrease of the shielding factor above 100 is due to the limited response of the system, caused by the inductance of the coils on the aluminum frame of the cage.
The frequency dependence of the shielding factor stays the same for similar measurements with the coil positioned at other locations. However, the magnitude of S strongly depends on the distance and orientation of the excitation coil. This can be understood qualitatively, as the magnitude of the higher order gradients of the fields within the sensitive volume depends on distance and orientation of the coil, and cannot in principle be compensated by a first-order AMS. Also, this was quantitatively confirmed using a COMSOL simulation of the system.
In summary, we successfully demonstrated an implementation of the method of simple coil design with the prototype AMS system, achieving the expected static and dynamic performance. The prototype design with an open-side demonstrated that this approach is capable of handling irregular grids, which is important to account for doors and other openings at the n2EDM experiment. Based on this feasibility demonstration, confidence was gained for the design and construction of the much larger AMS for n2EDM.
§ THE AMS SYSTEM FOR N2EDM
The AMS design at the n2EDM experiment was informed by the successfully prototyped new grid-based method, as described in Sec. <ref>. In this section we present the larger-scale AMS design and its technical implementation.
Given the experience with the prototype and the requirements resulting from the mapping of the experimental area, the AMS system for n2EDM was designed. Compared to the prototype, the definition of the grid structure was more constrained by the needs of n2EDM and the available space, many more ampere-turns were necessary for similar field strengths, mechanical stability was more important, and much improved quality control was needed for the construction process.
§.§ AMS coil design
The design of the AMS coil system involved several steps, with iterations:
(i) the choice of a surface and a grid on which the coil system could be constructed around the MSR, taking into account all constraints from experimental needs, equipment and area;
(ii) finding the currents on the grid structure to create the target homogeneous and first-order gradient fields;
(iii)
organization of the currents in loops and coils as well as
simplification of the optimal solution by the exclusion of (simple loop) currents contributing negligibly within the specified uncertainties.
Grid design.
Placing the grid structure as far away as possible from the surface of the MSR ensures better field homogeneity. The main limitation was the size of the experimental area. Similarly, the whole experiment, and the MSR in particular, benefits from a clean, temperature-stabilized environment. The n2EDM experiment therefore must be separated from the main experimental hall and requires a thermal enclosure. The solution was the construction of a wooden house (`thermohouse'), similar in principle to the one of the nEDM experiment <cit.>. The AMS was planned to be installed on the inside of the walls of the thermohouse, which could then almost fill the experimental area. This way, optimal access to the experimental equipment and the coil system was guaranteed. The power dissipation from the coils was studied and taken into account in the layout of the air-conditioning system of the enclosure. It was assumed and later verified that the total dissipated heat was quite stable for all observed external field conditions, even with dynamically controlled currents of the AMS.
The possibility to fix the AMS to the walls and the roof of the thermohouse simplified its construction, given the mechanical stability of the structure, which was designed to carry the additional, substantial weight of the coils.
Figure <ref> shows the final grid of the AMS coil system with dimensions of 10.3 m × 8.6 m × 8.9 m around the MSR. It has rectangular tiles of around 1.5 m average side-length, with a total of 308 tiles, 473 vertices, and 778 edges.
The process of designing the grid was mainly heuristic and required several iterations.
The density of the grid mesh was optimized <cit.> to achieve the best possible field-homogeneity around the MSR, while trying to keep the wiring effort reasonable. To make easy access for doors and other openings for the infrastructure of the experiment, we increased the size of tiles when possible or introduced special pieces, called `connected doors' (see Fig. <ref>). A connected door is a separate grid structure placed on a detachable part, which is electrically part of a coil but, for the optimization of the circuit, topologically separated from the rest of the grid. The design method easily allows such separations.
For maintenance, it is then possible to completely detach the connected door making a larger opening to access the experiment.
Given the irregular layout of the experimental area and the thermohouse, the AMS tiles located at the kink in the wall (see Fig. <ref>, lower right corner of the layout; and Fig. <ref> for the top view) would carry high currents while having little effect on the field quality. Such high currents are not ideal for the AMS system: they increase cable thickness and heating effects, requiring bulky cabling and possibly dedicated cooling.
To solve this, a regularization procedure was used in the optimization, turning the proportionality equation Eq. <ref>
into a system of equations:
[ 𝐁; 0 ] = [ 𝐌; λ·1 ]·𝐈,
where 𝐌 is the proportionality matrix, 𝐁 is the target magnetic field and 𝐈 represents the currents in the tiles. The second equation penalizes large currents. It includes the regularization parameter λ, which must be numerically determined for each coil, so a set of eight λ-values is necessary for the full system. Optimal λ-values should then produce solutions with low total currents and still achieve the performance goal of -fields.
An example of the λ optimization for the X-coil is shown in Fig. <ref>.
Increasing λ reduces the mean edge-currents and thus the total current, as expected, while λ-values above 10^-8 T/A cause the residual from the target field to diverge. The total current is minimal and constant for λ≤ 6· 10^-9 T/A. Appropriate values for λ were determined for all eight coils.
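A minimal sketch of this regularized optimization, written as a Tikhonov least-squares problem over the edge currents, is shown below; the matrix and vector names and their shapes are assumptions for the example, and the helper only reproduces the trade-off between total current and residual field that the λ scan illustrates.

```python
import numpy as np

def regularized_currents(M, B_target, lam):
    """Least-squares solution of the augmented system [B; 0] = [M; lam*1] I
    (Tikhonov regularization penalizing large edge currents).

    M:        (n_field_components, n_edges) proportionality matrix in T/A.
    B_target: (n_field_components,) target field at the points of interest.
    lam:      regularization parameter lambda in T/A.
    """
    n_edges = M.shape[1]
    A = np.vstack([M, lam * np.eye(n_edges)])
    rhs = np.concatenate([B_target, np.zeros(n_edges)])
    currents, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return currents

def scan_lambda(M, B_target, lambdas):
    """Trade-off curve of total current vs. maximum residual for each lambda."""
    out = []
    for lam in lambdas:
        I = regularized_currents(M, B_target, lam)
        out.append((lam, np.sum(np.abs(I)), np.max(np.abs(M @ I - B_target))))
    return out
```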
Choice of optimization volume.
A shell of 20 cm thickness around the outer layer of the MSR was chosen as the volume of interest for the optimization (shown in blue on Fig. <ref>). In the real system, magnetic-field sensors are installed inside of this shell. The sensors need to be placed at a distance from the mu-metal to work reliably and the target fields need only to be reached close to the MSR surface. Making the volume for the target fields larger would require larger currents and smaller grid spacing for the AMS. Within the shell, the AMS field was numerically evaluated on random POIs, drawn from a uniform distribution. A set of 9600 POIs was used in the performance evaluation. It was checked that
the sampling of the MSR surface by these points was sufficiently dense.
Final adjustments.
The application of our method of coil design (Sec. <ref>) yields a large number of simple loops on the predefined grid. These loops are not all equally important for the performance of the AMS. For the practical implementation, the number of loops can be reduced as long as the target fields can be achieved. The initial set of simple coils was ordered by importance with respect to the field intensity they produced in the volume of interest.
The performance of the system was then recalculated for configurations where subsequently the least important simple loops were left out.
As an example, Fig. <ref> shows a comparison of the residual fields at the POI close to the MSR for different numbers of the most important loops of the 4th gradient coil. The target field for the gradients is 5/ and residuals refer to the `full field' configuration producing this gradient. The best solution for this coil had 123 simple loops. However, the performance stays close to optimal down to a reduced number of the 70 most important loops.
This procedure was used for all eight AMS coils, yielding a reduction of 40% in the total number of simple loops while still reaching the target fields.
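A sketch of this truncation step, assuming the field of each simple loop at the points of interest has been precomputed, could look as follows; the ranking criterion (field norm over the volume of interest) and the tolerance parameter are illustrative choices, not the exact procedure used for the design.

```python
import numpy as np

def reduce_loops(loop_fields, full_field, tolerance):
    """Order simple loops by importance and keep the smallest set that still
    reproduces the full field within the allowed residual.

    loop_fields: (n_loops, n_field_components) field produced by each simple
                 loop (with its optimized current) at the points of interest.
    full_field:  (n_field_components,) field of the complete solution.
    tolerance:   maximum allowed residual from the target field.
    Returns the indices of the retained loops, most important first.
    """
    importance = np.linalg.norm(loop_fields, axis=1)
    order = np.argsort(importance)[::-1]              # descending importance
    for n_keep in range(1, len(order) + 1):
        kept = order[:n_keep]
        residual = full_field - loop_fields[kept].sum(axis=0)
        if np.max(np.abs(residual)) <= tolerance:
            return kept
    return order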
Complete system.
The final design of the AMS system comprises eight coils matched to the optimized grid. Each of the coils consists of 50-70 simple loops
to achieve the target fields. An example of a subset of the main simple loops of the Y-coil is shown in Fig. <ref>.
As in the prototype, each coil consists of three electrical circuits with large, medium, and small elementary currents. This minimized wiring effort, total weight, and power dissipation, while keeping the number of current sources at a manageable level.
Table <ref> summarizes the chosen elementary currents and the numbers of simple loops for all coils.
The calculated residual fields from the target field of the finalized coils are shown in Fig. <ref> for the so-called `full-field' configuration, in which all coils are powered to simultaneously produce the homogeneous fields of 50 in each spatial direction as well as the five 5/ gradients, see Tab. <ref>. Each coil was designed individually to compensate one of these eight basis fields. Due to the discretization and the simplification inherent to the design method, the fields generated by the coils slightly deviate from their target fields, within the allowed ranges. Such deviations will add up (vectorially) when operating the coils together. Large deviations do not usually occur at the same place and in the same directions, they appear rather more randomly, leading even to some `cross compensation', with the result displayed in Fig. <ref>. The residual of each coil's field from its target value can itself be represented as an expansion in the same basis fields produced by the other coils (plus neglected higher-order fields). Therefore, when measuring the response matrix for the built system, these effects are taken into account automatically.
Other, potentially important aspects of the real-world AMS system are the unavoidable imperfections of the current paths. One of these issues arises from the bending of cable bundles at each vertex. Obviously, these are not 90 degree corners but require bending radius up to 10 cm. Another imperfection is the position of the wires in the bundles with respect to the ideal grid. On some edges of the system, the area crossed by all cables was up to 80 cm^2; necessarily some wires are displaced by several cm from their ideal position. All these effects were simulated and in all cases we concluded that they were tolerable or even negligible. Again, aforementioned cross-compensation helps considerably: while a coil might slightly deviate from its ideal performance, it will still be completely linearly independent of the other seven coils and the system can function almost equally well.
§.§ The AMS technical implementation
The AMS coil system was mounted in the thermohouse of n2EDM over the course of approximately one year. An overview of the construction process is presented here, while more details are found in Ref. <cit.>.
Figure <ref> shows a picture of the finalized system.
The AMS grid structure was formed by cable trays out of stainless steel, mounted directly onto the inner walls of the wooden thermohouse around the MSR. The cable trays were grounded in a way to inhibit closed loops and eddy currents through the trays.
In order to mount
the coils of Table <ref> efficiently onto the grid (Fig. <ref>) along their calculated paths, one could not simply wind long cables. Instead, cables of a simple loop were bundled and installed on the cable trays along the roughly 500 different paths. The ends of the wires of the bundles were carefully crimped to form the loops. All the simple loops for each elementary current were connected in series such that a `coil' consisted of three independent circuits.
The installation of a bundle carefully followed a detailed plan, connecting it at some start position in the thermohouse and following a prescribed path along numbered vertices. The start and the end position of the circuits were later connected to terminals on DIN rails, which themselves were connected appropriately with interconnection wires. At each vertex of the grid the correct direction had to be checked, going straight or around a corner.
A system of bar-codes on the wires and QR-codes near the vertices on the walls, both completed with human-readable names, along with a dedicated smartphone scanning-app, were developed for a continuous verification and quality control during the installation. Completed circuits were electrically checked and DC resistances were measured to guarantee quality of crimp connections.
This way, a total of 55 of wire was installed, without any indications of error.
§.§ AMS current sources
To power the AMS coils, we have designed and built bipolar high-power current sources in-house at PSI, based on APEX-PA93 linear operational amplifiers <cit.>.
Each current source consists of three channels, delivering the elementary currents to the corresponding three circuits of a coil (Fig. <ref>). The currents of all three channels change proportionally to their control voltages, which can be set in the range from -10 to 10. This allows for an efficient realization of the three-fold powering approach described in Sec. <ref>.
For coils with different design currents (see Table <ref>) the software will command them with properly reduced values.
Depending on the channel, up to six APEX amplifiers were connected in parallel and complemented by a system of matched resistors to deliver the required output current and distribute the power dissipation.
An internal stabilization network combined with external damping resistors enables the current source to drive inductive loads up to 1.
This is important because although the self-inductance of the coils ranges only from 3 to 75, their mutual inductance is up to 500.
Each of the current sources is supplied with ±50 from an external switching power supply with a large filter capacitor to ensure a low-noise level of operation. The total heat dissipation in each of the current sources can reach up to 500, which is removed by an efficient built-in cooling system.
As part of the performance verification of the current sources, they were connected to the coils and their responses measured to a square-shaped input signal with the maximum amplitude of 10. The output signals reached their maxima with typical time constants of approximately 80, fast enough for dynamic AMS operation in the sub-Hertz frequency range, as required.
§.§ Fluxgates sensors
Eight 3-axis SENSYS fluxgate sensors <cit.> were installed around the MSR to measure the magnetic field and provide feedback information for the dynamic mode of the AMS.
The initial optimization of their positions was carried out similarly to the one of the prototype (Sec. <ref>) and positions close to the corners of the MSR were found.
In Sec. <ref>, results of the dynamic shielding performance are reported, based on these positions of the fluxgates and the control system, described in the next section.
§.§ Control system
The control system is based on Beckhoff modules ELM3148 (24 bit ADCs) and EL4134 (16 bit DACs) operating at 1kHz.
The readings from all fluxgate channels are stored in the array B_𝐦 that has up to 51 entries.
There is a minimal delay of two cycles for any reaction. Thus, the currents I of the next cycle [n+1] are calculated as:
I[n+1] = I[n] + k× ( M^-1× ( B_𝐭 - B_𝐦[n-1] )),
where B_𝐭 is the target field, normally chosen as 0.
The feedback matrix M^-1 is the pseudo-inverse (calculated offline as explained in Eq. (<ref>)) of the response matrix M.
The latter is obtained by scanning all coil currents individually and analysing their response by linear regression (as described in Sec. <ref>).
A multiplication constant k slows down the feedback to avoid oscillations. We use the same value of 0.013, found empirically, for all eight coils.
This results in a characteristic time constant of about 50.
A faster operation is prevented by the current sources, but was never intended.
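For concreteness, a minimal sketch of this update rule is given below; the array shapes (one current value per coil, one reading per fluxgate channel) and the call rate are assumptions for the example, and the gain value simply reproduces the empirically chosen constant quoted above.

```python
import numpy as np

def feedback_step(I_now, B_meas_delayed, M_pinv, B_target, k=0.013):
    """One cycle of the dynamic field stabilisation, following the update rule
    above: the correction uses the fluxgate readings from two cycles earlier
    and is slowed down by the gain k to avoid oscillations."""
    return I_now + k * (M_pinv @ (B_target - B_meas_delayed))

# Called once per control cycle, e.g. with B_target = np.zeros(n_channels).
```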
While the performance is already satisfactory, potential improvements will be studied once the commissioning of other n2EDM subsystems, with which such studies would interfere, is completed.
A more detailed simulation model with improved utility for various numerical studies is being deployed, additional fluxgate sensors are being installed, and further optimization of sensor positions and feedback algorithms is pursued (see Table <ref>).
§ THE AMS PERFORMANCE MEASUREMENTS
After the installation and the commissioning of all AMS coils and power supplies, we validated the static magnetic-field generation and measured the shielding performance of the AMS, as described below.
§.§ Magnetic fields generated by the AMS coils
As the MSR was installed in the experimental area before the AMS system, the actual magnetic fields generated by the AMS coils are not the simple homogeneous and first-order gradient fields as designed, but are modified by the MSR.
Thus, after the quality control described in Sec. <ref>, which guaranteed the proper pathways for the currents, actual magnetic-field measurements were compared to results of a FEM simulation model implemented in COMSOL <cit.>. We used the design fields as imported background fields and the outermost mu-metal surface of the MSR as a 10 cm thick layer of high magnetic permeability. It was verified that above a certain permeability and thickness, the results of the simulation became independent of these details.
As the experimental area around the AMS was already partially occupied by other equipment after its completion, magnetic-field measurements around the MSR had to be done in a sampling mode rather than in form of full field maps (as described for the empty area in Sec. <ref>).
The magnetic fields created by individual coils were measured in some selected, easily accessible areas, usually along a straight aluminum profile with one fluxgate, and compared to the simulation. Figure <ref> shows an example of such a comparison. The measured and the simulated magnetic-field values for the B_x, B_y and B_z components produced by the Y-coil are shown. The current of the Y-coil was pulsed on and off for the measurement to enable proper background-field subtraction. The coil current was chosen to be half of the maximum current. The measurements were taken along the grey line shown in the inset, at a height of z=-1 (below the center of the MSR) in the y-direction at a distance of about 20 cm from the MSR surface.
The result of the measurement agrees with the simulations, and the behaviour of the magnetic-field components is as expected for this example. The design field at the sampled positions without MSR would only have a B_y component, with B_x=B_z=0. One can see this feature emerging for large positive and negative values of y. While the non-existent B_z component is unaffected by the mu-metal shield, the B_y component gets absorbed into the mu-metal by drawing it into the B_x component with maximal amplitude at the edge of the MSR around y≈-2.8m. A similar qualitative and quantitative agreement was observed for the other coils, which confirms a good understanding of the AMS coil system as built.
§.§ AMS shielding measurements
Commissioning measurements of the dynamic AMS shielding were performed outside and inside the MSR during ramps of the superconducting high magnetic-field facility `SULTAN' <cit.>. Magnets of this facility were already of concern to the predecessor nEDM experiment <cit.>. The facility is about 30 m away from the n2EDM experiment. Its magnets can ramp up to 11.5, producing fields up to 40 at the MSR front and back walls mainly in horizontal direction with the AMS system off.
The magnetic fields outside the MSR were measured by the eight 3-axis SENSYS fluxgates <cit.> involved in the feedback, as previously described, and by several additional monitor fluxgates.
The magnetic field inside the MSR was measured by a much more sensitive optically-pumped QuSpin magnetometer (Gen3, zero-field configuration) <cit.>. It was placed roughly at the center of the MSR with one of its two sensitive directions along the z-axis, the most relevant for nEDM measurements, and the other one was oriented along the direction of the largest SULTAN perturbation.
A second QuSpin sensor, installed close to the first one, was used to ensure that field changes could be identified as such and readings of one sensor were not simply due to sensor drift.
Figure <ref> shows magnetic-field values measured during the SULTAN ramps with the AMS in static mode
and dynamic mode, respectively.
In static mode, Fig. <ref> (a), the background field was compensated only approximately some time before the ramp, keeping AMS currents constant.
Thus, the initial spread of the fluxgate readings does not reflect an optimal zero-field setting.
During the SULTAN ramp, the measured magnetic field changed from several up to roughly 100 , depending on the positions of the fluxgates. At the fluxgates positioned near the corners of the MSR, the fields are amplified and at some points much larger than in the empty-area mapping (Sec. <ref>).
When the AMS system is operated in dynamic mode, Fig. <ref> (b), the corresponding magnetic-field changes in the feedback fluxgates are reduced to a level of a few .
The measurements with the QuSpin sensor, shown in Fig. <ref> (c), additionally show the passive shielding of the MSR. As determined with the earlier mapping campaign, the field variation from the SULTAN ramp at the location of the QuSpin sensor would be about 40 without the MSR. A rough analysis of the magnetic field as measured by the QuSpin inside the MSR, during the ramps, found field changes of about 60 – 80 . This would correspond to a quasi-static shielding factor of the MSR of roughly (5-7) × 10^5. It is well known that the magnetic-shielding performance of such magnetic shields improves for larger field variations due to the increase of permeability μ_r for larger magnetic-field strength H until saturation effects set in. It is therefore very important to describe the excitation field when quoting a shielding factor. For this particular example, the MSR of n2EDM has a quasi-static shielding factor of 1 × 10^5 for an excitation field corresponding to ±2 at the unshielded sensor location <cit.>. This is the relevant shielding factor for n2EDM, as -size perturbations will still be possible, even with a perfectly functioning AMS.
When the AMS system is in dynamic mode, the QuSpin sensor measures a more attenuated signal, see Fig. <ref> (d). The amplitude of this remaining field change is about 8, a factor of 7–10 smaller, compared to the SULTAN ramp when the AMS is in static mode. It is, however, not straightforward to take this as the shielding factor of the AMS system alone, as we just saw that the passive, quasi-static shielding performance of the MSR depends on the size of field perturbations on the shield, which in turn depends on AMS performance.
Nevertheless, we can deduce the approximate shielding factor for the combined system of the AMS and the MSR for large and slowly changing perturbations (here a one-hour ramp to 40) as
5 × 10^6. More importantly, the result demonstrates that the goal of suppressing field changes down to below 10 pT inside the MSR was achieved, which was set up as a requirement for the n2EDM experiment.
Another interesting observation from the comparison between
Figs. <ref> (c) and (d) is that at least in this set of measurements it appears that the larger field variation on the outside of the MSR in the static case caused the field inside of the MSR to drift more, about 30–40 , compared to the dynamic case with a drift of less than 10.
One can see that the drift following the ramp-up in (c) is opposite to the induced change, while the drift following the ramp-down is again in the opposite direction.
This is expected from the reaction of the mu-metal layers of the MSR to the perturbation.
Such drifts are part of unwanted behaviour of a passive magnetic shield, which, when exposed to large external field variations, slowly absorbs the remanent field until it reaches the state of lowest energy.
The AMS system largely reduces the impact of such effects.
As a final example of the performance, Fig. <ref> shows magnetic-field values measured by the monitor fluxgate during SULTAN ramps, again with and without active feedback compensation by the AMS.
The values of the dominating y-component of the magnetic-field are reduced from about 42 (AMS in static mode) to about 5 (AMS in dynamic mode).
Interestingly, both the measurements with the monitor fluxgate and those with the QuSpin sensor suggest an AMS shielding factor in dynamic mode of around 8 for the suppression of a SULTAN ramp up to its full field. While this would be consistent with a factorization of static and passive shielding factors for this field variation, it is more likely an accidental coincidence for the given perturbation. Further studies of the combined system of MSR and AMS will be pursued to fully understand its detailed behaviour.
§ SUMMARY
The AMS system was designed and built to compensate homogeneous and first-order gradient external magnetic-field changes around the MSR of the n2EDM experiment. It was developed using a novel method of coil design. After successful prototyping at ETH Zurich, the AMS system was constructed and commissioned at the n2EDM experiment at PSI. First performance measurements demonstrated its ability to suppress magnetic-field changes of about ±50 (homogeneous) and ±5/ (first-order gradients), to the level of a few . The optimization of the AMS system using measurements and improved simulations, e.g., concerning fluxgate positioning and feedback algorithm, is ongoing and might further improve its performance. In any case, with the performance demonstrated in this paper, the combined system of AMS and MSR meets the specifications of the n2EDM experiment, providing a magnetic-field stability within the neutron volume at the 10 pT level.
§ ACKNOWLEDGEMENTS
We gratefully acknowledge the support provided by ETH and PSI technicians, electrical engineers and electricians. In particular, we appreciate the efforts of M. Meier, L. Noorda, A. Angerer, R. Wagner, S. Hug, R. Schwarz,
M. Ettenreich, L. Künzi, D. Di Calafiori, P. Bryan, E. Hüsler, R. Käch, H. Scheppus.
We acknowledge financial support from the Swiss National Science Foundation through projects No. 117696 (PSI),
No. 137664 (PSI), No. 144473 (PSI),
No. 157079 (PSI), No. 172626 (PSI), No. 126562 (PSI), No. 169596 (PSI), No. 178951 (PSI), No. 181996 (Bern), No. 162574 (ETH), No. 172639 (ETH), No. 200441 (ETH).
The group from Jagiellonian
University Cracow acknowledges the support of the National Science Center, Poland,
Grants No. UMO-2015/18/M/ST2/00056, No. UMO-2020/37/B/ST2/02349,
and
No. 2018/30/M/ST2/00319, as well as by the Excellence Initiative – Research University Program at the Jagiellonian University. This work was supported by the Research Foundation-Flanders (BE) under Grant No. G.0D04.21N. Collaborators at the University of Sussex acknowledge support from the School of Mathematical and Physical Sciences, as well as from the STFC under grant ST/S000798/1.We acknowledge the support from the DFG
(DE) on PTB core facility center of ultra-low magnetic field KO 5321/3-1 and TR 408/11-1.
We acknowledge funding provided by the Institute of Physics Belgrade through a grant by the Ministry of Education, Science and Technological Development of the Republic of Serbia. This work is also supported by Sigma Xi grants # G2017100190747806 and
# G2019100190747806, and by the award of the Swiss Government Excellence Scholarships (SERI-FCS) # 2015.0594.
|
http://arxiv.org/abs/2307.05115v1 | 20230711084850 | Critical steady states of all-to-all driven-dissipative models: An analytic approach | [
"Diego Barberena",
"Ana Maria Rey"
] | quant-ph | [
"quant-ph",
"cond-mat.quant-gas"
] |
JILA, NIST, Department of Physics, University of Colorado, Boulder, CO 80309, USA
Center for Theory of Quantum Matter, University of Colorado, Boulder, CO 80309, USA
JILA, NIST, Department of Physics, University of Colorado, Boulder, CO 80309, USA
Center for Theory of Quantum Matter, University of Colorado, Boulder, CO 80309, USA
We analyse the properties across steady state phase transitions of two all-to-all driven-dissipative spin models that describe possible dynamics of N two-level systems inside an optical cavity. We show that the finite size behaviour around the critical points can be captured correctly by carefully identifying the relevant non-linearities in the Holstein-Primakoff representation of spin operators in terms of bosonic variables. With these tools, we calculate analytically various observables across the phase transitions and obtain their finite size scalings, including numerical prefactors. In particular, we look at the amount of spin squeezing carried by the steady states, of relevance for quantum metrology applications, and describe in analytical detail the mechanism by which the optimal spin squeezing acquires logarithmic corrections that depend on the system size. We also demonstrate that the logarithmic nature of these corrections is difficult to characterize through numerical procedures for any experimentally realistic and/or simulable values of particle number. We complement all of our analytical arguments with numerical benchmarks.
Critical steady states of all-to-all driven-dissipative spin models: An analytic approach
Ana Maria Rey
August 12, 2023
==========================================================================================
§ INTRODUCTION
Studying the behaviour of quantum systems in the presence of decoherence and dissipation <cit.> is of paramount importance in the field of quantum technologies, both for practical and fundamental reasons. On the practical side, mitigating the effects of unwanted sources of decoherence is a necessary requirement for applications in quantum metrology <cit.>, simulation <cit.> and computation <cit.>. On the fundamental side, combining dissipation with coherent processes/drives can lead to novel kinds of behaviour <cit.>, both dynamically <cit.> and in steady state conditions <cit.>, and these insights can then be used as tools for more pragmatic endeavors.
An important avenue of research in driven-dissipative systems is devoted to understanding the steady states towards which these systems relax at long times. In particular, these steady states can undergo phase transitions when parameters of the system are varied. The strong reorganization of quantum and classical fluctuations that occurs near these phase transition points can then be utilized for e.g. quantum metrology applications, and the possibility of accessing these resources by just waiting for a system to relax constitutes a very appealing prospect for the preparation of entangled states. Importantly, tuning the system close to a transition point can be done not only by controlling coherent processes but also by deliberately engineering dissipation sources <cit.>.
A class of driven-dissipative systems where steady states are of practical relevance is provided by all-to-all spin models <cit.>, where the generation of steady state spin squeezing <cit.> (useful for e.g. accurate timekeeping <cit.>) is a well-documented effect. The amount of spin squeezing attainable is typically diagnosed using a variety of controlled approximations (mean field theory, Holstein-Primakoff <cit.>, etc.) which give the correct answer when N, the number of spins, goes to infinity. In practice, the optimal squeezing is estimated numerically because it occurs close to phase transition points, where these analytic approaches ordinarily fail <cit.>. The common expectation is that this optimal value shows finite-size scaling and thus improves with N according to a power law dependence. This is also true of generic observables and quantum phase transitions in closed all-to-all systems <cit.>, though in that case renormalization <cit.> and field theory <cit.> techniques have been used to get analytical control.
Building on previous work done for ground state transitions <cit.>, in this paper we show that observables of all-to-all systems close to steady-state phase transition points can be calculated analytically by using the Holstein-Primakoff approximation consistently. In particular, we focus on the optimal spin squeezing in two distinct all-to-all spin models. We find that there are non-power-law corrections in N that arise due to the optimization process and that are unique to the open quantum system setting, where steady states can be mixed. These corrections behave logarithmically when N is very large, but clean observation of this trend requires working with N>10^23 particles, which is outside the scope of any realistic numerical simulation and partly explains the discrepancies in power law exponents reported in the literature <cit.>.
Our work is organized as follows: in Sec. <ref> we introduce the mathematical models that we will study and give a small overview of their common mathematical properties. In Sec. <ref> we analyse an ensemble of atoms interacting with squeezed vacuum light <cit.>, and whose critical properties are known to behave differently depending on the parity of N <cit.>. In line with this, we perform independent analyses for each of these cases. Finally, in Sec. <ref> we study an ensemble of atoms subject to the competition between an external laser drive and collective decay of excitations. This is a model that has been studied in other contexts under the name of cooperative resonance fluorescence <cit.>.
§ MODELS
We consider two different driven-dissipative models described by the following master equations
∂_tρ̂ =Γ'/N𝒟(Ŝ_x-i ζŜ_y)ρ̂
∂_tρ̂ =-i[ΩŜ_x,ρ̂]+Γ/N𝒟(Ŝ^-)ρ̂,
where ρ̂ is the density matrix of the system, 𝒟(Ô)ρ̂=Ôρ̂Ô^†-{Ô^†Ô,ρ̂} is the standard dissipation superoperator, Ŝ_x,y,z=∑_i=1^Nσ̂_x,y,z^i/2 are collective spin operators, σ̂_x,y,z^i are Pauli matrices describing the i^th two-level system, Ŝ^-=Ŝ_x-iŜ_y, and Ω, Γ, Γ' and ζ are system parameters that can be varied to access the different steady state phases present in these models. Steady states ρ̂_ss are defined as the solutions to Eqs. (<ref>) and (<ref>) that satisfy ∂_tρ̂=0.
Both models conserve (in the strong sense <cit.>) the spin length operator, Ŝ^2=Ŝ_x^2+Ŝ_y^2+Ŝ_z^2, which allows us to focus on a single symmetry sector of Hilbert space, characterized by Ŝ^2=(N/2)(N/2+1). The states satisfying this condition are called Dicke states, are symmetric under permutation of the atoms and span an N+1 dimensional representation of the SU(2) algebra generated by Ŝ_x,y,z. This reduction in the size of the relevant sector of Hilbert space means that numerical simulations for large values of N∼ 10^4 are possible, even in the presence of dissipation. Since we will be working exclusively in the Dicke manifold, we will make a small digression to point out important features about these states before addressing the specific properties of the models we will study.
§.§ Properties of Dicke manifold
A typical basis for the Dicke states is given by |m⟩, which are eigenstates of Ŝ_z satisfying Ŝ_z|m⟩=m|m⟩, with |m|≤ N/2. Dicke states can also be represented graphically as a distribution on the surface of a collective Bloch sphere of length N/2 by means of various quasi-probability functions. In this paper we will exclusively use the Husimi distribution, defined as
Q_ρ̂(θ,ϕ)=1/4π⟨θ,ϕ|ρ̂|θ,ϕ⟩,
where θ and ϕ are elevation and azimuthal angles in spherical coordinates, and |θ,ϕ⟩ are spin coherent states <cit.>:
|θ,ϕ⟩=(cosθ/2)^Ne^tan(θ/2) e^iϕŜ^-|m=N/2⟩.
In particular, the Husimi function of the spin coherent state |θ,ϕ⟩ is highly concentrated along the θ,ϕ direction on the Bloch sphere and is rotationally symmetric around this direction. The distribution is of size ∼√(N) transverse to the Bloch vector direction.
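As an illustration of these definitions, the following Python sketch evaluates the Husimi function of a collective-spin density matrix written in the Dicke basis |m⟩ (ordered from m=N/2 downwards); the coherent-state coefficients follow from the definition above, and the small test at the end only checks that a spin pointing along +z gives a Q function peaked at θ=0.

```python
import numpy as np
from scipy.special import comb

def coherent_state(N, theta, phi):
    """Dicke-basis coefficients of |theta, phi>, basis ordered m = N/2, ..., -N/2."""
    k = np.arange(N + 1)                      # k = N/2 - m
    return (np.sqrt(comb(N, k)) * np.cos(theta / 2) ** (N - k)
            * (np.sin(theta / 2) * np.exp(1j * phi)) ** k)

def husimi(rho, theta, phi):
    """Q(theta, phi) = <theta,phi| rho |theta,phi> / (4 pi) for rho in the Dicke basis."""
    N = rho.shape[0] - 1
    psi = coherent_state(N, theta, phi)
    return np.real(psi.conj() @ rho @ psi) / (4 * np.pi)

# Spin coherent state along +z: Q is maximal at theta = 0 and vanishes at theta = pi.
N = 20
rho_up = np.zeros((N + 1, N + 1)); rho_up[0, 0] = 1.0
print(husimi(rho_up, 0.0, 0.0), husimi(rho_up, np.pi, 0.0))
```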
§ MODEL I: SQUEEZED DECAY (SDM)
We will call the model given in Eq. (<ref>), which we reproduce here
∂_tρ̂=Γ'/N𝒟(Ŝ_x-i ζŜ_y)ρ̂,
as the Squeezed Decay Model (SDM). It was introduced in Refs. <cit.> to describe a system of two-level emitters incoherently driven by broadband squeezed vacuum light.
This model can also be engineered inside QED cavities <cit.>. In the proposal of Ref. <cit.>, this is achieved by driving two-photon Raman transitions between two degenerate atomic states where one of the Raman legs is provided by a cavity mode. The effects of photon loss through the cavity mirrors lead to an effective atom-only model described by Eq. (<ref>), and the presence of collective atomic operators Ŝ_x,y,z reflects the fact that the photons escaping the cavity do not carry information about which atoms they were emitted from. This also provides an example of a model that relaxes to an entangled dark state by an adequate engineering of dissipation processes <cit.>.
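A minimal QuTiP sketch of how the SDM steady state could be computed numerically in the Dicke sector is shown below; the values N=20, ζ=0.5 and the overall rate are arbitrary illustration choices, and the expectation value of Ŝ_z is printed only to exhibit the -z-polarized phase discussed next.

```python
import numpy as np
from qutip import jmat, steadystate, expect

def sdm_steady_state(N, zeta, gamma=1.0):
    """Steady state of d rho/dt = (gamma/N) D(Sx - i zeta Sy) rho,
    restricted to the maximum-spin (Dicke) sector of dimension N + 1."""
    j = N / 2
    Sx, Sy, Sz = jmat(j, 'x'), jmat(j, 'y'), jmat(j, 'z')
    c_op = np.sqrt(gamma / N) * (Sx - 1j * zeta * Sy)   # single jump operator
    rho_ss = steadystate(0 * Sz, [c_op])                # no Hamiltonian part
    return rho_ss, (Sx, Sy, Sz)

rho, (Sx, Sy, Sz) = sdm_steady_state(N=20, zeta=0.5)
print(expect(Sz, rho))   # close to -N/2 in the 0 < zeta < 1 phase
```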
In the thermodynamic limit N→∞, the model displays a first-order phase transition at ζ=0, as illustrated in Fig. <ref>:
* When -1<ζ<0, the steady state is a large spin pointing along the +z direction: ⟨Ŝ_z⟩=N/2 and ⟨Ŝ^-⟩=0.
* When 0<ζ<1, the steady state is a large spin pointing along the -z direction: ⟨Ŝ_z⟩=-N/2 and ⟨Ŝ^-⟩=0.
|
http://arxiv.org/abs/2307.04181v1 | 20230709141335 | Central limit theorem for temporal average of backward Euler--Maruyama method | [
"Diancong Jin"
] | math.NA | [
"math.NA",
"cs.NA",
"math.PR"
] |
CLT]Central limit theorem for temporal average of backward Euler–Maruyama method
School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China;
Hubei Key Laboratory of Engineering Modeling and Scientific Computing, Huazhong University of Science and Technology, Wuhan 430074, China
[email protected]
This work is supported by National Natural Science Foundation of China (No. 12201228), and the Fundamental Research Funds for the Central Universities 3004011142.
This work focuses on the temporal average of the backward Euler–Maruyama (BEM) method, which is used to approximate the ergodic limit of stochastic ordinary differential equations with super-linearly growing drift coefficients.
We give the central limit theorem (CLT) of the temporal average, which characterizes the asymptotics in distribution of the temporal average. When the deviation order is smaller than the optimal strong order, we directly derive the CLT of the temporal average through that of original equations and the uniform strong order of the BEM method. For the case that the deviation order equals to the optimal strong order, the CLT is established via the Poisson equation associated with the generator of original equations. Numerical experiments are performed to illustrate the theoretical results.
[
Diancong Jin
August 12, 2023
===================
AMS subject classifications: 60H35, 60F05, 60H10
§ INTRODUCTION
Ergodic theory is a powerful tool to investigate the long-time dynamics and
statistical properties of stochastic systems, which is widely applied in physics, biology, chemistry and so on(see, e.g., <cit.>). A crucial problem in ergodic theory is to determine the ergodic measure and ergodic limit. Since explicit expressions of them are generally unavailable, one usually resorts to numerical methods to obtain their approximations. There have been lots of numerical methods which inherit the ergodicity or approximate the ergodic limit of original systems; see <cit.> and references therein. In the aforementioned work, main efforts are made to analyze the error between the numerical
invariant measure and the original one, and that between numerical temporal average and the ergodic limit.
Besides the convergence of the numerical temporal average in the moment sense, the asymptotics of its distribution is also an essential property. In recent several work, the central limit theorem (CLT) of the temporal average of some numerical methods is given, which characterizes the fluctuation of the numerical temporal average around ergodic limits of original systems in the sense of distribution. In <cit.>, the CLT of the temporal average of the Euler-Maruyama (EM) method with decreasing step-size for ergodic stochastic ordinary differential equations (SODEs) is given. In addition, <cit.> proves the CLT and moderate deviation of the EM method with a fixed step-size for SODEs. For a class of ergodic stochastic partial differential equations (SPDEs), <cit.> shows that the temporal average of a full discretization with fixed temporal and spatial step-sizes satisfies the CLT. In the existing work, the CLT of numerical temporal average is established provided that coefficients of original equations are Lipschitz continuous. This motivates us to investigate the CLT of numerical temporal average for stochastic systems with non-Lipschitz coefficients, which have more extensive applications in reality compared with the Lipschitz case.
In this work, we consider the following SODE
X(t)=b(X(t)) t+σ(X(t)) W(t), t>0,
where {W(t),t≥ 0} is a D-dimensional standard Brownian motion defined on a complete filtered probability space (Ω, F,{ F_t}_t≥0, P), and b: R^d→ R^d and σ: R^d→ R^d× D satisfy Assumptions <ref>-<ref> such that (<ref>) admits a unique strong solution on [0,+∞) for any given deterministic initial value X(0)∈ R^d. Notice that our assumptions allow b to grow super-linearly. It is shown in <cit.> that (<ref>) admits a unique invariant measure π and is thus ergodic, due to the strong dissipation condition on b. In order to inherit the ergodicity of (<ref>) and approximate the ergodic limit π(h):=∫_ R^dh(x)π( x), h∈ C_b( R^d), <cit.> discretizes (<ref>) by the backward Euler–Maruyama (BEM) method (see (<ref>)), and gives the error between the numerical invariant measure π_τ and π with τ being the step-size. The above result together with the strong order of the BEM method in the infinite time horizon implies that the temporal average 1/N∑_k=0^Nh(X̅_k^x) converges to the ergodic limit π(h), i.e.,
lim_τ→0lim_N→+∞|1/N∑_k=0^N Eh(X̅_k^x)-π(h)|=0,
where {X̅^x_n}_n≥ 0 is the numerical solution generated by the BEM method with initial value x∈ R^d.
The purpose of this paper is to present the CLT for the following temporal average
Π_τ,α(h)=1/τ^-α∑_k=0^τ^-α-1h(X̅^x_k), α∈(1,2], h∈ C^4_b( R^d),
where for convenience we always assume that τ^-α is an integer instead of the step number N in (<ref>). More precisely, we prove in Theorems <ref> and <ref> that the normalized temporal average 1/τ^α-1/2(Π_τ,α(h)-π(h)) converges to the normal distribution N(0,π(|σ^⊤∇φ|^2)) in distribution as τ→ 0, respectively for α∈(1,2) and α=2. In fact, Theorem <ref> indicates that the CLT holds for the temporal average of a class of numerical methods with uniform strong order 1/2, for α∈(1,2).
Here, φ is defined by (<ref>) and solves the Poisson equation Lφ=h-π(h) (see Lemma <ref>), with L being the generator of (<ref>). We call the parameter τ^α-1/2 the deviation scale and α-1/2 the deviation order; see Remark <ref> for the reason of requiring α>1.
The proof ideas of the CLT for Π_τ,α(h) are different for α∈(1,2) and α=2. For the case α∈(1,2), we directly derive the CLT for Π_τ,α(h) in Theorem <ref>, by means of the CLT for (<ref>) and the optimal strong order in the infinite time horizon of the BEM method, considering that the CLT for (<ref>) is a classical result (see <cit.>). The key of this proof lies in the fact that the deviation order α-1/2 is smaller than the optimal strong order 1/2 for α∈(1,2), which does not apply to the case α=2. In order to tackle the more subtle case α=2, we follow the argument in <cit.> and <cit.> to obtain the CLT for Π_τ,2(h). The main idea is to reformulate the normalized temporal average 1/τ^α-1/2(Π_τ,α(h)-π(h)) by means of the Poisson equation. This allows us to decompose 1/τ^α-1/2(Π_τ,α(h)-π(h)) as a martingale difference series sum converging to N(0,π(|σ^⊤∇φ|^2)) in distribution, and a negligible remainder converging to 0 in probability. In this proof, the pth (p>2) moment boundedness of the BEM method in the infinite time horizon and the regularity of the solution to the Poisson equation play important roles, where the former has not been reported for SODEs with non-Lipschitz coefficients to the best of our knowledge.
To sum up, the contributions of this work are twofold. Firstly, we give the CLT for the temporal average of the BEM method, which generalizes the existing results to SODEs with super-linearly growing drift coefficients. Secondly, we prove the pth (p>2) moment boundedness of the BEM method in the infinite time horizon for the original equation. The rest of this paper is organized as follows. In Section <ref>, we give our assumptions and recall some basic properties of the exact solution. Section <ref> presents our main results and proves the CLT for Π_τ,α(h) with α∈(1,2), and Section <ref> gives the proof of the CLT for Π_τ,2(h). Some numerical tests are displayed to illustrate the theoretical results in Section <ref>. Finally, we give the conclusions and future aspects in Section <ref>.
§ PRELIMINARIES
In this section, we give our main assumptions on the coefficients of (<ref>) and present some basic properties for (<ref>). We begin with some notations. Denote by |·| the 2-norm of a vector or matrix, and by ·,· the scalar product of two vectors. Let d,m,k∈ N^+ with N^+ denoting the set of positive integers. For matrix A,B∈ R^d× m, denote A,B_HS:=∑_i=1^d∑_j=1^mA_ijB_ij and A_HS:=√( A,A_HS). Let B( R^d) stand for the set of all Borel sets of R^d. Denote by P( R^d) the space of all probability measures on R^d. Denote μ(f)=∫_ R^df(x)μ( x) for μ∈ P( R^d) and μ-measurable function f.
For convenience, we set F_t=σ(W(s),0≤ s≤ t). Moreover, d⟶ denotes the convergence in distribution of random variables and w⟶ denotes the weak convergence of probabilities in P( R^d).
Denote by C( R^d; R^m) (resp. C^k( R^d; R^m)) the space consisting of continuous (resp. kth continuously differentiable) functions from R^d to R^m. Let C_b^k( R^d; R^m) stand for the set of bounded and kth continuously differentiable functions from R^d to R^m with bounded derivatives up to order k. Denote by C_b( R^d; R^m) the set of bounded and continuous functions from R^d to R^m.
When no confusion occurs, C( R^d; R^m) is simply written as C( R^d), and so are C_b( R^d; R^m), C^k( R^d; R^m) and C^k_b( R^d; R^m).
For l∈ N^+, denote by Poly(l, R^d) the set of functions growing polynomially with order l, i.e.,
Poly(l, R^d):={g∈ C( R^d; R):|g(x)-g(y)|≤ K(g)(1+|x|^l-1+|y|^l-1)|x-y| for some K(g)>0 }. For f∈𝐂^k(ℝ^d;ℝ), denote by ∇^k f(x)(ξ_1,…,ξ_k) the kth order Gǎteaux derivative along the directions ξ_1,…,ξ_k ∈ℝ^d, i.e., ∇^k f(x)(ξ_1,…,ξ_k)=∑_i_1,…,i_k=1^d ∂^k f(x)/∂ x^i_1⋯∂ x^i_k ξ_1^i_1⋯ξ_k^i_k. For f=(f_1,…,f_m)^⊤∈𝐂^k(ℝ^d; ℝ^m), denote ∇ ^kf(x)(ξ_1,…,ξ_k)=(∇ ^kf_1(x)(ξ_1,…,ξ_k),…,∇ ^kf_m(x)(ξ_1,…,ξ_k))^⊤. The Gǎteaux derivative for a matrix-valued function is defined as previously. For f∈ C^k( R^d; R) the notation ∇ ^kf(x) is viewed as a tensor, i.e., a multilinear form defined on ( R^d)^⊗^k. And ·_⊗ denotes the norm of a tensor. Throughout this paper, let K(a_1,a_2,...,a_m) denote some generic constant dependent on the parameters a_1,a_2,...,a_m but independent of the step-size τ, which may
vary for each appearance.
§.§ Settings
Let us first give the assumptions on b and σ.
There exist constants L_1, L_2∈(0,+∞) such that
σ(u_1)-σ(u_2)_HS≤ L_1|u_1-u_2| ∀ u_1,u_2∈ R^d,
σ(u)_HS≤ L_2 ∀ u∈ R^d.
There exist c_1>15/2L_1^2, L_3>0 and q≥ 1 such that
u_1-u_2,b(u_1)-b(u_2)≤ -c_1|u_1-u_2|^2 ∀ u_1,u_2∈ R^d,
|b(u_1)-b(u_2)|≤ L_3(1+|u_1|^q-1+|u_2|^q-1)|u_1-u_2| ∀ u_1,u_2∈ R^d.
The above two assumptions ensure the well-posedness of (<ref>); see e.g., <cit.>. And the generator of (<ref>) is given by
Lf(x)=∇ f(x),b(x)+1/2∇^2f(x),σ(x)σ(x)^⊤_HS, f∈ C^2( R^d; R).
Notice that trace(∇^2 f(x)σ(x)σ(x)^⊤)=∇^2 f(x),σ(x)σ(x)^⊤_HS.
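As a concrete illustration (not part of the original analysis), the generator can be evaluated symbolically in one dimension; the drift b(x)=-x-x^3 and the constant diffusion σ=1/2 below are arbitrary choices, used only to show the action of L on a test function.

```python
import sympy as sp

x = sp.symbols('x')
b = -x - x**3                 # illustrative dissipative drift (assumption)
sigma = sp.Rational(1, 2)     # constant (hence bounded, Lipschitz) diffusion
f = sp.cos(x)                 # test function

# One-dimensional generator: L f = b f' + (1/2) sigma^2 f''
Lf = b * sp.diff(f, x) + sp.Rational(1, 2) * sigma**2 * sp.diff(f, x, 2)
print(sp.simplify(Lf))        # (x + x**3)*sin(x) - cos(x)/8
```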
As an immediate result of (<ref>),
|b(u)|≤ L_4(1+|u|^q) ∀ u∈ R^d
for some L_4>0. In addition, it is straightforward to conclude from Assumptions <ref>-<ref> that for any l_2>0,
2 u_1-u_2,b(u_1)-b(u_2)+15σ(u_1)-σ(u_2)_HS^2≤ -L_5|u_1-u_2|^2 ∀ u_1,u_2∈ R^d,
2 u,b(u)+l_2σ(u)_HS^2≤ -c_1|u|^2+1/c_1|b(0)|^2+l_2L_2^2 ∀ u∈ R^d,
where L_5:=2c_1-15L_1^2. Note that Assumptions <ref>-<ref> in this paper imply Assumptions 2.1-2.4 in <cit.>, by taking A=-ε I_d, f(x)=b(x)+ε x and g(x)=σ(x) in <cit.> with ε small enough. Thus, all conclusions in <cit.> apply to our case provided that Assumptions <ref>-<ref> hold.
In order to give the regularity of the solution to the Poisson equation, we need the following assumption.
Let σ∈ C_b^4( R^d) and b∈ C^4( R^d). In addition, there exist q'≥ 1 and L_6>0 such that for i=1,2,3,4,
∇^i b(u)_⊗≤ L_6(1+|u|^q') ∀ u∈ R^d.
Under Assumptions <ref>-<ref>, it holds that
2 v,∇ b(u)v+15∇σ(u)v^2_HS≤ -L_5|v|^2 ∀ u,v∈ R^d.
In fact, it follows from (<ref>) that for any u,v∈ R^d and t∈ R, 2t v,b(u+tv)-b(u)+15σ(u+tv)-σ(u)^2_HS≤ -L_5t^2|v|^2. Then the Taylor expansion yields that for any t∈ R, 2t^2 v,∇ b(u)v+15t^2∇σ(u)v^2_HS+ O(t^3)≤ -L_5t^2|v|^2, which implies (<ref>).
Next, we recall some basic knowledge about the invariant measure and ergodicity. Denote by X^s,x(t) the solution to (<ref>) at time t, starting from X(s)=x. Especially, denote X^x(t):=X^0,x(t). Let π_t(x,·) denote the transition probability kernel of {X(t)}_t≥ 0, i.e., π_t(x,A)= P(X^x(t)∈ A) for any A∈ B( R^d). For any ϕ∈ B_b( R^d) and t≥0, define the operator P_t: B_b( R^d)→ B_b( R^d) by (P_tϕ)(x):= Eϕ(X^x(t))=∫_ R^dϕ(y)π_t(x, y). Then, {P_t}_t≥ 0 is a Markov semigroup on B_b( R^d). Here, B_b( R^d) is the space of all bounded and measurable functions.
A probability measure μ∈ P( R^d) is called an invariant measure of {X(t)}_t≥ 0 or {P_t}_t≥0, if
∫_ R^d P_tϕ(x)μ ( x)= ∫_ R^dϕ(x)μ ( x) ∀ ϕ∈ B_b( R^d), t≥ 0.
Further, an invariant measure μ is called an ergodic measure of {X(t)}_t≥ 0 or {P_t}_t≥0, if for any ϕ∈ B_b( R^d),
lim_T→+∞1/T∫_0^TP_tϕ(x) t=∫_ R^dϕ(x)μ ( x) in L^2( R^d,μ),
where L^2( R^d,μ) is the space of all square integrable functions with respect to (w.r.t.) μ. Especially, if μ is the unique invariant measure of {X(t)}_t≥ 0, then μ is also the ergodic measure.
We refer readers to <cit.> for more details.
Let Assumptions <ref>-<ref> hold. Then, the following hold.
(1) For any p≥ 1, sup_t≥ 0 E|X^x(t)|^p≤ K(p)(1+|x|^p).
(2) For any t,s≥ 0, ( E|X^x(t)-X^x(s)|^2)^1/2≤ K(1+|x|^q)|t-s|^1/2.
(3) For any t≥ 0, ( E|X^x(t)-X^y(t)|^2)^1/2≤ |x-y|e^-L_5t/2.
The first and second conclusions come from <cit.>. And the third conclusion can be obtained by applying the Itô formula. In addition, <cit.> gives the ergodicity for {X(t)}_t≥ 0.
Let Assumptions <ref>-<ref> hold. Then we have the following.
(1) {X(t)}_t≥ 0 admits a unique invariant measure π∈ P( R^d).
(2) For any p≥ 1, π(|·|^p)<+∞.
(3) There is λ_1>0 such that for any f∈ Poly(l, R^d), l≥1 and t≥ 0,
| Ef(X^x(t))-π(f)|≤ K(f)(1+|x|^l)e^-λ_1t.
It follows from <cit.> that {X(t)}_t≥0 admits a unique invariant measure π∈ P( R^d), and π_t(x,·)w⟶π as t→+∞ for any x∈ R^d. Especially, π_t(0,·)w⟶π, which implies that for any M>0,
∫_ R^d(|x|^p∧ M)π( x) =lim_t→+∞∫_ R^d(|x|^p∧ M)π_t(0, x)
≤ M∧lim sup_t→+∞ E|X^0(t)|^p≤ K,
where we used |·|^p∧ M∈ C_b( R^d) and Proposition <ref>(1). Then the Fatou lemma gives
π(|·|^p)=∫_ R^d|x|^p π( x)≤lim inf_M→+∞∫_ R^d(|x|^p∧ M)π ( x)≤ K.
For any M>0 and f∈ Poly(l, R^d), it holds that f∧ M∈ C_b( R^d). Accordingly, it follows from the definition of the invariant measure (see (<ref>)) that
π(f∧ M)=∫_ R^d P_t(f∧ M)(y)π ( y).
Thus, using Proposition <ref>(2), the Hölder inequality, the fact |a∧ b-a∧ c|≤ |b-c| and the second conclusion, we conclude that for any M>0,
| E(f(X^x(t))∧ M)-π(f∧ M)|=|P_t(f∧ M)(x)-∫_ R^d P_t(f∧ M)(y)π ( y)|
= |∫_ R^d[P_t(f∧ M)(x)-P_t(f∧ M)(y)]π ( y)|
≤ ∫_ R^d| E(f(X^x(t))∧ M)- E(f(X^y(t))∧ M)|π ( y)
≤ ∫_ R^d E|f(X^x(t))-f(X^x(y))|π( y)
≤ K(f)∫_ R^d(1+( E|X^x(t)|^2l-2)^1/2+( E|X^y(t)|^2l-2)^1/2)( E|X^x(t)-X^y(t)|^2)^1/2π ( y)
≤ K(f)e^-L_5/2t∫_ R^d(1+|x|^l-1+|y|^l-1)|x-y|π( y)
≤ K(f)e^-L_5/2t(1+|x|^l).
The above formula and the monotone convergence theorem lead to (<ref>), which completes the proof.
§ MAIN RESULTS
In this section, we give our main result, i.e., the CLT for the temporal average Π_τ,α(h) of the BEM method used to approximate the ergodic limit π(h). The BEM method has been widely applied to approximating SODEs or SPDEs with non-Lipschitz coefficients; see e.g., <cit.> and references therein.
Let τ>0 be the temporal step-size. The BEM method for (<ref>) reads
X̅_n+1=X̅_n+b(X̅_n+1)τ+σ(X̅_n)Δ W_n, n=0,1,2,…,
where Δ W_n:=W(t_n+1)-W(t_n) with t_n=nτ. We denote by X̅^k,x_n the solution to (<ref>) at the nth step provided X̅_k=x. Especially, denote X̅_n^x:=X̅^0,x_n, i.e., the solution to (<ref>) with the initial value x∈ R^d.
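A minimal Python sketch of one possible implementation of this scheme is given below; the drift b(x)=-x-x^3 with constant diffusion σ=1/2 in the example is an illustrative choice consistent with the dissipativity assumptions (constant σ gives L_1=0), and the implicit step is solved with a generic root finder rather than a tailored Newton iteration.

```python
import numpy as np
from scipy.optimize import fsolve

def bem_path(b, sigma, x0, tau, n_steps, rng):
    """Backward Euler--Maruyama trajectory for dX = b(X) dt + sigma(X) dW.

    Each step solves the implicit equation
        X_{n+1} = X_n + tau * b(X_{n+1}) + sigma(X_n) @ dW_n,
    which has a unique root for small tau under the one-sided Lipschitz
    (dissipativity) condition on b; X_n is used as the initial guess.
    """
    d = len(x0)
    D = sigma(x0).shape[1]
    X = np.empty((n_steps + 1, d))
    X[0] = x0
    for n in range(n_steps):
        dW = rng.normal(scale=np.sqrt(tau), size=D)
        c = X[n] + sigma(X[n]) @ dW
        X[n + 1] = fsolve(lambda z: z - c - tau * b(z), X[n])
    return X

# Illustrative 1-d example: b(x) = -x - x^3 (super-linear, dissipative),
# sigma = 1/2 (constant, hence bounded and Lipschitz with L_1 = 0).
rng = np.random.default_rng(0)
path = bem_path(lambda z: -z - z**3,
                lambda z: np.array([[0.5]]),
                np.array([1.0]), tau=0.01, n_steps=10_000, rng=rng)
```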
The following are some known results about (<ref>), which can be found in Lemmas 4.1-4.2 and Theorem 4.2 in <cit.>.
Let Assumptions <ref>-<ref> hold and τ sufficiently small. Then the following properties hold.
(1) sup_n≥ 0 E|X̅^x_n|^2≤ K(1+|x|^2).
(2) There is ξ_1>0 such that for any n≥ 0,
( E|X̅^x_n-X̅^y_n|^2)^1/2≤ K|x-y|e^-ξ_1nτ.
(3) sup_n≥ 0 E|X^x(t_n)-X̅^x_n|^2≤ K(x)τ.
Recall that the temporal average of the BEM method is
Π_τ,α(h)=1/τ^-α∑_k=0^τ^-α-1h(X̅^x_k), α∈(1,2], h∈ C^4_b( R^d).
Define the function φ: R^d→ R by
φ(x)=-∫_0^∞ E(h(X^x(t))-π(h)) t, x∈ R^d ,
which is indeed a solution to the Poisson equation Lφ=h-π(h) due to Lemma <ref>. Then we have the following CLT for Π_τ,α(h), α∈(1,2).
Let Assumptions <ref>-<ref> hold and h∈ C_b^4( R^d).
(1) Let {Y_n}_n≥0 be a numerical solution for (<ref>). Suppose that there is K>0 independent of τ such that
sup_n≥ 0 E|X(t_n)-Y_n|^2≤ Kτ.
Then for any α∈(1,2),
1/τ^α-1/2(1/τ^-α∑_k=0^τ^-α-1h(Y_k)-π(h))d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
(2) For any α∈(1,2) and x∈ R^d,
1/τ^α-1/2(Π_τ,α(h)-π(h))d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
Let φ be that in (<ref>). By Lemma <ref>, it holds that φ∈ C^3( R^d) and
Lφ=h-π(h).
It follows from <cit.> that the CLT holds for (<ref>), i.e.,
1/√(T)∫_0^T(h(X(t))-π(h)) td⟶ N(0,-2π(φ Lφ)) as T→∞.
By (<ref>) and a direct computation,
φ Lφ=1/2 Lφ^2-1/2|σ^⊤∇φ|^2.
Since φ^2 belongs to the domain of L, π( Lφ^2)=0 due to <cit.>. Combining the above relations, we have
1/√(T)∫_0^T(h(X(t))-π(h)) td⟶ N(0,π(|σ^⊤∇φ|^2)) as T→∞.
Notice that
1/τ^α-1/2(1/τ^-α∑_k=0^τ^-α-1h(Y_k)-π(h))
= 1/τ^α-1/2(1/τ^-α∑_k=0^τ^-α-1h(Y_k)-τ^α-1∫_0^τ^1-αh(X(t)) t)+τ^α-1/2∫_0^τ^1-α(h(X(t))-π(h)) t
=: J_1(τ)+J_2(τ).
By (<ref>) and α>1, J_2(τ)d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0. Denoting N=τ^-α,
we use Proposition <ref>(2), (<ref>) and h∈ C_b^1( R^d) to get
E|J_1(τ)|= 1/τ^α-1/2 E|1/N∑_k=0^N-1h(Y_k)-1/Nτ∑_k=0^N-1∫_kτ^(k+1)τh(X(t)) t|
≤ 1/τ^α-1/21/N∑_k=0^N-1 E|h(Y_k)-h(X(t_k))|+1/τ^α-1/21/Nτ∑_k=0^N-1∫_kτ^(k+1)τ E|h(X(t))-h(X(t_k))| t
≤ K(h)1/τ^α-1/2sup_k≥0( E|Y_k-X(t_k)|^2)^1/2+K(h)1/τ^α-1/21/Nτ∑_k=0^N-1∫_kτ^(k+1)τ( E|X(t)-X(t_k)|^2)^1/2 t
≤ K(h)1/τ^α-1/2τ^1/2=K(h)τ^2-α/2.
Thus, lim_τ→0 E|J_1(τ)|=0 due to α<2, which implies that J_1(τ) converges to 0 in probability. Thus, (<ref>) follows by applying the Slutsky theorem.
Finally, (<ref>) holds as a special case of (<ref>) due to Proposition <ref>(3). Thus, the proof is complete.
(1) It is observed that
1/τ^α-1/2(Π_τ,α(h)-π(h))=1/τ^1-α/2∑_k=0^τ^-α-1(h(X̅^x_k)-π(h))τ,
which can be viewed as a numerical approximation of 1/√(T)∫_0^T(h(X^x(t))-π(h)) t with T(τ)=Nτ and N=τ^-α. Thus, α>1 is required such that lim_τ→0T(τ)=+∞, which coincides with the CLT for {X(t)}_t≥0.
(2) In fact, we give the CLT of the temporal average for a class of numerical methods satisfying (<ref>) for α∈(1,2). We guess that there may be some non-ergodic numerical method whose temporal average satisfies the CTL in view of Theorem <ref>(1).
We close the section by presenting the CLT for Π_τ,2(h).
Let Assumptions <ref>-<ref> hold and h∈ C^4_b( R^d). Then for any x∈ R^d,
1/√(τ)(Π_τ,2(h)-π(h))d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
As is pointed out in the introduction, the proof idea of Theorem <ref> does apply to the case α=2. Instead, we will use the Poisson equation L φ=h-π(h) to give a good decomposition of Π_τ,2(h), on basis of which the CLT of Π_τ,2(h) can be established.
We postpone the proof of Theorem <ref> to the next section.
§ PROOF OF THEOREM <REF>
§.§ Auxiliary results
Notice that <cit.> gives the second moment boundedness of the BEM method, i.e., Proposition <ref>(1). However, in order to give the CLT for Π_τ,2(h), the pth (p>2) moment boundedness in the infinite time horizon is indispensable. We also refer interested readers to <cit.> for the pth (p>2) moment boundedness in the infinite time horizon for the truncated Euler–Maruyama method.
Suppose that Assumptions <ref>-<ref> hold. Then for any r≥ 1 and τ≤ 1,
sup_n≥0 E|X̅^x_n|^r≤ K(r)(1+|x|^r).
It is sufficient to show that for any positive integer p,
sup_n≥0 E|X̅^x_n|^2p≤ K(p)(1+|x|^2p),
in view of the Hölder inequality, which will be derived via mathematical induction.
By (<ref>) and (<ref>),
|X̅^x_n+1|^2-|X̅^x_n|^2+|X̅^x_n+1-X̅^x_n|^2=2X̅^x_n+1,X̅^x_n+1-X̅^x_n
= 2X̅^x_n+1,b(X̅^x_n+1)τ+2X̅^x_n+1-X̅^x_n,σ(X̅^x_n)Δ W_n+2X̅^x_n,σ(X̅^x_n)Δ W_n
≤ -c_1τ|X̅^x_n+1|^2+Kτ+|X̅^x_n+1-X̅^x_n|^2+σ(X̅^x_n)^2_HS|Δ W_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n,
which together with the boundedness of σ yields
(1+c_1τ)|X̅^x_n+1|^2≤ |X̅^x_n|^2+Kτ+L_2^2|Δ W_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n.
Noting that EX̅^x_n,σ(X̅^x_n)Δ W_n=0, we have
E|X̅^x_n+1|^2≤1/1+c_1τ E|X̅^x_n|^2+Kτ/1+c_1τ.
By iteration, we arrive at
E|X̅^x_n|^2≤1/(1+c_1τ)^n|x|^2+Kτ∑_i=1^∞1/(1+c_1τ)^i≤ |x|^2+K.
Thus, (<ref>) holds for p=1. Now, we assume that
sup_n≥0 E|X̅^x_n|^2(p-1)≤ K(p)(1+|x|^2(p-1)), p≥ 2.
It remains to prove sup_n≥0 E|X̅^x_n|^2p≤ K(p)(1+|x|^2p).
In fact, using (<ref>) and the inequality (1+x)^α≥ 1+α x, α≥ 1, x>-1 leads to
(1+pc_1τ)|X̅^x_n+1|^2p≤(|X̅^x_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n +K(τ+|Δ W_n|^2))^p.
Notice that
(|X̅^x_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n +K(τ+|Δ W_n|^2))^p
= ∑_i_1=0^p∑_i_2=0^p-i_1C_p^i_1C_p-i_1^i_22^i_2K^p-(i_1+i_2)|X̅^x_n|^2i_1X̅^x_n,σ(X̅^x_n)Δ W_n ^i_2(τ+|Δ W_n|^2)^p-(i_1+i_2)
= |X̅^x_n|^2p+∑_i_1=0^p-1∑_i_2=0^p-i_1-1C_p^i_1C_p-i_1^i_22^i_2K^p-(i_1+i_2)S_n,i_1,i_2+∑_i=0^p-1C_p^i2^p-iT_n,i,
where
S_n,i_1,i_2:=|X̅^x_n|^2i_1X̅^x_n,σ(X̅^x_n)Δ W_n ^i_2(τ+|Δ W_n|^2)^p-(i_1+i_2), i_1∈[0,p-1], i_2∈[0,p-i_1-1],
T_n,i:=|X̅^x_n|^2iX̅^x_n,σ(X̅^x_n)Δ W_n ^p-i, i∈[0,p-1].
For any i_1∈[0,p-1], i_2∈[0,p-i_1-1], it follows from the independence of Δ W_n and X̅^x_n, the boundedness of σ, the Hölder inequality and (<ref>) that for τ≤ 1,
| ES_n,i_1,i_2| ≤ K(p) E|X̅^x_n|^2i_1+i_2 E[|Δ W_n|^i_2(τ+|Δ W_n|^2)^p-(i_1+i_2)]
≤ K(p)( E|X̅^x_n|^2p-2)^2i_1+i_2/2p-2τ≤ K(p)(1+|x|^2p-2)τ.
Next we estimate | ET_n,i| for i=0,…,p-1.
Notice that the property of conditional expectations (see, e.g., <cit.>) leads to
ET_n,p-1 = E[ E_n(|X̅^x_n|^2p-2X̅^x_n,σ(X̅^x_n)Δ W_n )]
= E[( E(|y|^2p-2 y,σ(y)Δ W_n ))|_y=X̅^x_n]=0.
For i=0,…,p-2, applying (<ref>), the boundedness of σ and the Hölder inequality, we get
| ET_n,i|≤ K(p) E|X̅^x_n|^p+i E|Δ W_n|^p-i≤ K(p) ( E|X̅^x_n|^2p-2)^p+i/2p-2τ^p-i/2≤ K(p)(1+|x|^2p-2)τ.
Combining the above formulas gives
E(|X̅^x_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n +K(τ+|Δ W_n|^2))^p≤ E|X̅^x_n|^2p+K(p)(1+|x|^2p-2)τ,
which along with (<ref>) yields
E|X̅^x_n+1|^2p≤1/1+pc_1τ E|X̅^x_n|^2p+K(p)(1+|x|^2p-2)τ/1+pc_1τ.
Then by iteration, we deduce
E|X̅^x_n|^2p≤1/(1+pc_1τ)^n|x|^2p+K(p)(1+|x|^2p-2)τ∑_i=1^∞1/(1+pc_1τ)^i≤ K(p)(1+|x|^2p).
Thus, (<ref>) holds by mathematical induction and the proof is complete.
Let Assumptions <ref>-<ref> hold and τ be sufficiently small. Then the BEM method (<ref>) admits a unique invariant measure π_τ∈ P( R^d). Moreover,
for any f∈ Poly(l, R^d), l≥1 and n≥ 0,
| Ef(X̅^x_n)-π_τ(f)| ≤ K(f)(1+|x|^l)e^-ξ_1nτ, x∈ R^d, n≥ 0,
|π_τ(f)-π(f)| ≤ K(f)τ^1/2.
As is shown in <cit.>, {X̅_n}_n≥0 admits a unique invariant measure π_τ, and X̅_n^xd⟶π_τ for any x∈ R^d. Similar to the proof of (<ref>), one can derive (<ref>) based on Proposition <ref>(2) and Theorem <ref>. As for (<ref>), it follows from f∈ Poly(l, R^d), (<ref>), Theorem <ref>, Proposition <ref>(1), Proposition <ref>(3) and (<ref>) that for any n≥ 0 and τ≪ 1,
|π_τ(f)-π(f)|≤ |π_τ(f)- Ef(X̅^0_n)|+| Ef(X̅^0_n)- Ef(X^0(t_n))|+| Ef(X^0(t_n))-π(f)|
≤ K(f)e^-ξ_1nτ+K(f)(1+( E|X̅^0_n|^2l-2)^1/2+( E|X^0(t_n)|^2l-2)^1/2) ( E|X̅^0_n-X^0(t_n)|^2)^1/2+K(f)e^-λ_1 t_n
≤ K(f)(e^-ξ_1nτ+e^-λ_1 t_n)+K(f)τ^1/2.
Letting n→∞ in the above formula yields (<ref>), which finishes the proof.
In order to prove the CLT for Π_τ,2(h), we need to give the regularity of φ. This can be done through a probabilistic approach by means of mean-square derivatives of {X^x(t)}_t≥0 w.r.t. the initial value x.
For any x,y_i∈ R^d, i=1,2,3,4, denote by η^x_y_1(t) the mean-square derivative of X^x(t) along with the direction y_1, i.e., η^x_y_1(t)=lim_ε→01/ε(X^x+ε y_1(t)-X^x(t)) in L^2(Ω; R^d). Further, denote η^x_y_1,y_2(t):=lim_ε→01/ε(η^x+ε y_2_y_1(t)-η^x_y_1(t)) in L^2(Ω; R^d), i.e., η^x_y_1,y_2(t) is the second mean-square derivative of X^x(t) along with the direction y_1 and y_2. η^x_y_1,y_2,y_3(t) and η^x_y_1,y_2,y_3,y_4(t) are defined similarly.
We refer readers to <cit.> for more details about the mean-square differentiablity of SDEs w.r.t. initial values.
Suppose that Assumptions <ref>-<ref> hold. Then there exist C_1,C_2>0 and κ_i>0, i=1,2,3 such that for any x,y_i∈ R^d, i=1,2,3,4 and t≥ 0,
( E|η^x_y_1(t)|^16+κ_1)^1/16+κ_1 ≤ C_1|y_1|e^-C_2t,
( E|η^x_y_1,y_2(t)|^8+κ_2)^1/8+κ_2 ≤ C_1(1+|x|^q')|y_1||y_2|e^-C_2t,
( E|η^x_y_1,y_2,y_3(t)|^4+κ_3)^1/4+κ_3 ≤ C_1(1+|x|^2q')|y_1||y_2||y_3|e^-C_2t,
( E|η^x_y_1,y_2,y_3,y_4(t)|^2)^1/2 ≤ C_1(1+|x|^3q')|y_1||y_2||y_3||y_4|e^-C_2t.
Similarly to <cit.>, η^x_y_1 solves the following variational equation
η^x_y_1(t)=∇ b(X^x(t))η^x_y_1(t) t+∇σ (X^x(t))η^x_y_1(t) W(t),
η^x_y_1(0)=y_1.
Notice that for any p≥2 and matrix A, it holds that ∇ (|x|^p)=p|x|^p-2x and
1/2trace(∇ ^2(|x|^p)AA^⊤)≤1/2p(p-1)|x|^p-2A_HS^2.
For any κ∈(0,1) and λ>0, by the Itô formula, (<ref>), σ∈ C_b^4( R^d) and (<ref>),
E(e^λ t|η^x_y_1(t)|^16+κ)
≤ |y_1|^16+κ+λ∫_0^te^λ s|η_y_1^x(s)|^16+κ s
+1/2(16+κ) E∫_0^te^λ s|η^x_y_1(s)|^14+κ[2η^x_y_1(s),∇ b(X^x(s))η^x_y_1(s) +(15+κ)∇σ(X^x(s))η^x_y_1(s)^2_HS]
≤ |y_1|^16+κ+[λ+(8+κ/2)(-L_5+κ L^2_σ)]∫_0^t E|η^x_y_1(s)|^16+κ s,
where L_σ:=sup_x∈ R^d∇σ(x)_⊗.
Letting κ_1<L_5/L^2_σ, λ_1 small enough, we obtain
E|η^x_y_1(t)|^16+κ_1≤ |y_1|^16+κ_1e^-λ_1 t ∀ t∈[0,T],
which yields the (<ref>).
Secondly, similar to the argument for η^x_y_1, we have
{[ η^x_y_1,y_2(t)= ∇ b(X^x(t))η^x_y_1,y_2(t) t+∇^2 b(X^x(t))(η^x_y_1(t),η^x_y_2(t)) t; +∇σ (X^x(t))η^x_y_1,y_2(t) W(t)+∇^2 σ(X^x(t))(η^x_y_1(t),η^x_y_2(t)) W(t),; η^x_y_1,y_2(0)= 0. ].
For any κ,λ,ε_0∈(0,1), again by the Itô formula, (<ref>), σ∈ C_b^4( R^d) and the elementary inequality (a+b)^2≤ (1+ε_0)a^2+(1+1/ε_0)b^2 with a,b≥ 0, it holds that
E(e^λ t|η^x_y_1,y_2(t)|^8+κ)
≤ λ E∫_0^t e^λ s|η^x_y_1,y_2(s)|^8+κ s+(8+κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κη^x_y_1,y_2(s),∇ b(X^x(s))η^x_y_1,y_2(s) s
+(8+κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κη^x_y_1,y_2(s),∇^2 b(X^x(s))(η^x_y_1(s),η^x_y_2(s)) s
+1/2(8+κ)(7+κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κ∇σ (X^x(s))η^x_y_1,y_2(s)+∇^2 σ(X^x(s))(η^x_y_1(s),η^x_y_2(s))^2_HS s
≤ λ E∫_0^t e^λ s|η^x_y_1,y_2(s)|^8+κ s+1/2(8+κ) E∫_0^t e^λ s|η^x_y_1,y_2(s)|^6+κ[2η^x_y_1,y_2(s),∇ b(X^x(s))η^x_y_1,y_2(s)
+(7+κ)(1+ε_0)∇σ (X^x(s))η^x_y_1,y_2(s)_HS^2] s
+K(κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)| s
+K(κ,ε_0) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κ|η^x_y_1(s)|^2|η^x_y_2(s)|^2 s.
Further, taking ε_0≪ 1 and using (<ref>), we get
E(e^λ t|η^x_y_1,y_2(t)|^8+κ)
≤ (λ-(4+κ/2)L_5) E∫_0^te^λ s|η^x_y_1,y_2(s)|^8+κ s+K(κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)| s
+K(κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κ|η^x_y_1(s)|^2|η^x_y_2(s)|^2 s.
It follows from the Young inequality ab≤ε a^p+K(ε)b^q with a,b≥ 0, 1/p+1/q=1, p,q>1 and the Hölder inequality that for any ε,ε'>0,
E (|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)|)
≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε) E[(∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)|)^8+κ]
≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε) ( E|η^x_y_1(s)|^(8+κ)(2+ε'))^1/2+ε'( E|η^x_y_2(s)|^(8+κ)(2+ε'))^1/2+ε'
·( E∇^2 b(X^x(s))_⊗^(8+κ)(1+2/ε'))^ε'/2+ε'.
Taking sufficiently small κ and ε', from Assumption <ref>, Proposition <ref>(1) and (<ref>) it follows that for any ε>0,
E (|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)|)
≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε)(1+|x|^(8+κ)q')|y_1|^8+κ|y_2|^8+κe^-Ks.
Similarity, for any ε>0,
E(|η^x_y_1,y_2(s)|^6+κ|η^x_y_1(s)|^2|η^x_y_2(s)|^2)
≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε)( E|η^x_y_1(s)|^2(8+κ))^1/2( E|η^x_y_2(s)|^2(8+κ))^1/2
≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε)|y_1|^8+κ|y_2|^8+κe^-Ks.
Plugging (<ref>)-(<ref>) into (<ref>), and taking sufficiently small κ_2, λ_2 and ε, one has
E(e^λ_2t|η^x_y_1,y_2(t)|^8+κ_2)≤ -K E∫_0^te^λ_2s|η^x_y_1,y_2(s)|^8+κ_2 s+K(1+|x|^(8+κ_2)q')|y_1|^8+κ|y_2|^8+κ,
which produces (<ref>).
Further, η^x_y_1,y_2,y_3 solves the following SDE
η^x_y_1,y_2,y_3(t)= ∇ b(X^x(t))η^x_y_1,y_2,y_3(t) t+∇^2 b(X^x(t))(η^x_y_1(t),η^x_y_2,y_3(t)) t
+∇^2 b(X^x(t))(η^x_y_2(t),η^x_y_1,y_3(t)) t+∇^2 b(X^x(t))(η^x_y_3(t),η^x_y_1,y_2(t)) t
+∇^3 b(X^x(t))(η^x_y_1(t),η^x_y_2(t),η^x_y_3(t)) t+
∇σ(X^x(t))η^x_y_1,y_2,y_3(t) W(t)
+∇^2 σ(X^x(t))(η^x_y_1(t),η^x_y_2,y_3(t)) W(t)
+∇^2 σ(X^x(t))(η^x_y_2(t),η^x_y_1,y_3(t)) W(t)
+∇^2 σ(X^x(t))(η^x_y_3(t),η^x_y_1,y_2(t)) W(t)
+∇^3σ (X^x(t))(η^x_y_1(t),η^x_y_2(t),η^x_y_3(t)) W(t),
η^x_y_1,y_2,y_3(0)= 0.
By the same argument for deriving (<ref>), using Itô formula, (<ref>) and σ∈ C^4_b( R^d), we have that for any κ,λ∈(0,1),
E(e^λ t|η^x_y_1,y_2,y_3(t)|^4+κ)
≤ (λ-(2+κ/2)L_5) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s
+ K(κ) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^3+κ∇^2 b(X^x(s))_⊗(|η^x_y_1(s)||η^x_y_2,y_3(s)|+|η^x_y_2(s)||η^x_y_1,y_3(s)|+|η^x_y_3(s)||η^x_y_1,y_2(s)|) s
+ K(κ) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^3+κ∇^3 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)||η^x_y_3(s)| s
+ K(κ) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^2+κ(|η^x_y_1(s)|^2|η^x_y_2,y_3(s)|^2+|η^x_y_2(s)|^2|η^x_y_1,y_3(s)|^2
+|η^x_y_3(s)|^2|η^x_y_1,y_3(s)|^2+|η^x_y_1(s)|^2|η^x_y_2(s)|^2|η^x_y_3(s)|^2) s.
=: (λ-(2+κ/2)L_5) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+I_1(t)+I_2(t)+I_3(t).
It follows from the Young inequality, Hölder inequality, (<ref>)-(<ref>), Assumption <ref> and Proposition <ref>(1) that for sufficiently small κ,ε,ε',
E(|η^x_y_1,y_2,y_3(s)|^3+κ∇^2 b(X^x(s))_⊗|η^x_y_χ(1)(s)||η^x_y_χ(2),y_χ(3)(s)|)
≤ ε E|η^x_y_1,y_2,y_3(s)|^4+κ+K(ε)( E|η^x_y_χ(1)(s)|^(4+κ)(2+ε'))^1/2+ε'( E|η^x_y_χ(2),χ(3)(s)|^(4+κ)(2+ε'))^1/2+ε'
·( E∇^2 b(X^x(s))^(4+κ)(1+2/ε')_⊗)^ε'/2+ε'
≤ ε E|η^x_y_1,y_2,y_3(s)|^4+κ+K(ε)(1+|x|^2q'(4+κ))(|y_1||y_2||y_3|)^4+κe^-Ks,
where (χ(1),χ(2),χ(3)) is any permutation of (1,2,3). Thus, for κ,λ,ε≪1,
I_1(t)≤ K(κ)ε E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+K(κ,ε)(1+|x|^2q'(4+κ))(|y_1||y_2||y_3|)^4+κ.
Similarly, it can be verified that for κ,λ,ε≪1,
I_2(t)≤ Kε E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+K(ε)(1+|x|^q'(4+κ))(|y_1||y_2||y_3|)^4+κ,
I_3(t)≤ Kε E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+K(ε)(1+|x|^q'(4+κ))(|y_1||y_2||y_3|)^4+κ.
Plugging (<ref>)-(<ref>) into (<ref>) yields (<ref>).
Finally, by means of an analogous proof for (<ref>), we obtain (<ref>). Thus, the proof is finished.
Let Assumptions <ref>-<ref> hold and h∈ C_b^4( R^d).
Let φ be the function defined by (<ref>).
Then, for any x∈ R^d,
|φ(x)|≤ K(1+|x|),
∇^i φ(x)_⊗≤ K(1+|x|^(i-1)q'), i=1,2,3,4.
Moreover, φ is a solution to the Poisson equation
L φ=h-π(h).
By (<ref>), |φ(x)|≤ K(h)(1+|x|)∫_0^∞ e^-λ_1 t t≤ K(h)(1+|x|), which indicates that φ is well defined and (<ref>) holds.
Denoting u(t,x):= Eh(X^x(t)), we have that for any x,y_1∈ R^d,
∇_x u(t,x)y_1= E(∇ h(X^x(t))η^x_y_1(t)), due to the definition of η^x_y_1 and h∈ C^1_b( R^d). It follows from (<ref>), h∈ C_b^4( R^d) and the Hölder inequality that
|∇_x u(t,x)y_1|≤ K E|η^x_y_1(t)|≤ K|y_1|e^-C_2t. By the arbitrariness of y_1, |∇_x u(t,x)|≤ Ke^-C_2t, which implies
|∇φ(x)|≤∫_0^∞|∇_x u(t,x)| t≤ K.
Further,
∇ ^2_x u(t,x)(y_1,y_2)= E(∇ h(X^x(t))η^x_y_1,y_2(t)+∇^2 h(X^x(t))(η^x_y_1(t),η^x_y_2(t))) for any x,y_1,y_2∈ R^d. Then (<ref>), h∈ C_b^4( R^d) and the Hölder inequality yield
|∇ ^2_x u(t,x)(y_1,y_2)|≤ K E|η^x_y_1,y_2(t)|+K E|η^x_y_1(t)||η^x_y_2(t)|≤ K(1+|x|^q')|y_1||y_2|e^-C_2t.
This gives ∇ ^2_x u(t,x)_HS≤ K(1+|x|^q')e^-C_2t and thus ∇ ^2 φ(x)_HS≤ K(1+|x|^q').
Similarly, it can be verified that (<ref>) holds for i=3,4.
By the Itô formula,
Eh(X^x(t))=h(x)+∫_0^t E Lh(X^x(s)) s, which gives E Lh(X^x(t))=/ t Eh(X^x(t)), i.e.,
Lu(t,x)=/ tu(t,x).
It follows from (<ref>), (<ref>) and the previous estimates for ∇_x u(t,x) and ∇_x^2u(t,x) that
| Lu(t,x)|≤ K(1+|x|^q+|x|^q')e^-C_2t.
Thus, we can exchange the operator L and the integration in t for L∫_0^∞ (u(t,x)-π(h)) t. Accordingly, using (<ref>) and (<ref>) yields that for any x∈ R^d,
Lφ(x) =-∫_0^∞ Lu(t,x) t=-∫_0^∞/ tu(t,x) t
=u(0,x)-lim_t→+∞u(t,x)
=h(x)-lim_t→+∞ Eh(X^x(t))=h(x)-π(h).
This finishes the proof.
§.§ Detailed proof
In this part, we give the proof of Theorem <ref>. As is mentioned previously, we will split 1/√(τ)(Π_τ,2(h)-π(h)) into a martingale difference series sum and a negligible remainder, based on the Poisson equation (<ref>).
Proof of Theorem <ref>. For convenience of notation, we denote m=τ^-2 with τ being sufficiently small. By (<ref>), we have
1/√(τ)(Π_τ,2(h)-π(h))
=τ^-1/21/m∑_k=0^m-1(h(X̅^x_k)-π(h))=τ^3/2∑_k=0^m-1 Lφ(X̅^x_k)
=τ^1/2∑_k=0^m-1( Lφ(X̅^x_k)τ-(φ(X̅^x_k+1)-φ(X̅^x_k)))+τ^1/2(φ(X̅^x_m)-φ(x)).
Lemma <ref> enables us to apply the Taylor expansion for φ:
φ(X̅^x_k+1)-φ(X̅^x_k)
= ∇φ(X̅^x_k),ΔX̅^x_k+1/2∇^2φ(X̅^x_k),ΔX̅^x_k(ΔX̅^x_k)^⊤_HS
+1/2∫_0^1(1-θ)^2∇^3φ(X̅_k^x+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k,ΔX̅^x_k)θ,
where ΔX̅^x_k:=b(X̅^x_k+1)τ+σ(X̅^x_k)Δ W_k, k=0,1,…,m.
It follows from (<ref>) and the above formulas that
1/√(τ)(Π_τ,2(h)-π(h))= H_τ+ R_τ,
where H_τ and R_τ are given by
H_τ:=-τ^1/2∑_k=0^m-1∇φ(X̅^x_k),σ(X̅^x_k)Δ W_k, R_τ=∑_i=1^6R_τ,i,
with
R_τ,1:= τ^1/2(φ(X̅^x_m)-φ(x)),
R_τ,2:= -τ^3/2∑_k=0^m-1∇φ(X̅^x_k),b(X̅^x_k+1)-b(X̅^x_k),
R_τ,3:= 1/2τ^1/2∑_k=0^m-1∇^2φ(X̅^x_k),σ(X̅^x_k)(τ I_D-Δ W_kΔ W_k^⊤)σ(X̅^x_k)^⊤_HS,
R_τ,4:= -1/2τ^5/2∑_k=0^m-1∇^2φ(X̅^x_k),b(X̅^x_k+1)b(X̅^x_k+1)^⊤_HS,
R_τ,5:= - τ^3/2∑_k=0^m-1∇^2φ(X̅^x_k),b(X̅^x_k+1)(σ(X̅^x_k)Δ W_k)^⊤_HS,
R_τ,6:= -1/2τ^1/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k,ΔX̅^x_k)θ.
By Lemmas <ref>-<ref> below and the Slutsky theorem, 1/√(τ)(Π_τ,2(h)-π(h))d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0 and the proof is complete. □.
Suppose that Assumptions <ref>-<ref> hold. Then for any x∈ R^d,
H_τd⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
Recall that H_τ:=-τ^1/2∑_k=0^m-1∇φ(X̅^x_k),σ(X̅^x_k)Δ W_k with m=τ^-2.
According to <cit.>, it suffices to show that
lim_τ→0τ Emax_0≤ k≤ m-1|Z_k|^2=0,
τ∑_k=0^m-1|Z_k|^2 P⟶π(|σ^⊤∇φ|^2) as τ→ 0,
where Z_k:=∇φ(X̅^x_k),σ(X̅^x_k)Δ W_k, k=0,1,…,m.
It follows from the boundedness of σ and (<ref>) that
τ Emax_0≤ k≤ m-1|Z_k|^2
≤ τ Emax_0≤ k≤ m-1(|Z_k|^2 1_{|Z_k|^2≤ 1})+τ Emax_0≤ k≤ m-1(|Z_k|^2 1_{|Z_k|^2>1})
≤ τ+τ∑_k=0^m-1 E(|Z_k|^2 1_{|Z_k|^2>1})
≤τ +τ∑_k=0^m-1 E|Z_k|^4
≤ τ+Kτ∑_k=0^m-1 E|Δ W_k|^4≤τ +Kτ^3m≤ Kτ,
which implies (<ref>).
By (<ref>), for any x,y∈ R^d,
|∇φ(x)-∇φ(y)|= |∫_0^1∇^2φ(x+θ(y-x))(y-x)θ|≤ K(1+|x|^q'+|y|^q')|x-y|,
which together with the assumptions on σ gives
|σ^⊤∇φ|^2∈ Poly(q'+1, R^d).
As a result of (<ref>), |π_τ(|σ^⊤∇φ|^2)-π(|σ^⊤∇φ|^2)|≤ Kτ^1/2.
Thus, once we show that
τ∑_k=0^m-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2) P⟶0 as τ→ 0,
we obtain (<ref>) and complete the proof.
According to (<ref>) and (<ref>),
| E(|σ(X̅_k^x)^⊤∇φ(X̅_k^x)|^2)-π_τ(|σ^⊤∇φ|^2)|≤ K(1+|x|^q'+1)e^-ξ_1kτ, k≥ 0.
By the above formula and the property of conditional expectations, for any j≥ i,
| E_i(|σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2)-π_τ(|σ^⊤∇φ|^2)|=| E_i(|σ(X̅_j^i,X̅^x_i)^⊤∇φ(X̅_j^i,X̅^x_i)|^2)-π_τ(|σ^⊤∇φ|^2)|
= |( E(|σ(X̅_j^i,y)^⊤∇φ(X̅_j^i,y)|^2)-π_τ(|σ^⊤∇φ|^2))|_y=X̅^x_i|
≤ K(1+|X̅^x_i|^q'+1)e^-ξ_1(j-i)τ.
Hereafter, we denote by E_i(·) the conditional expectation E(·| F_t_i), i≥ 0.
Further,
E(τ∑_k=0^m-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2))^2= E(τ^2∑_k=0^m-1(τ^-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2)))^2
= τ^4∑_i=0^m-1 E(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))^2
+2τ^4∑_0≤ i<j≤ m-1 E[(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))(τ^-1|Z_j|^2-π_τ(|σ^⊤∇φ|^2))].
It follows from the boundedness of σ, (<ref>), Proposition <ref>(2), (<ref>) and (<ref>) that for τ∈(0,1) and i≥ 0,
E(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))^2≤ 2τ^-2 E|Z_i|^4+2(π_τ(|σ^⊤∇φ|^2))^2
≤ K+4(π_τ(|σ^⊤∇φ|^2)-π(|σ^⊤∇φ|^2))^2+4(π(|σ^⊤∇φ|^2))^2
≤ K+Kτ+K(π(|·|^q'+1))^2≤ K.
By the property of conditional expectations,
E_j|Z_j|^2=( E x,Δ W_j^2)|_x=σ(X̅^x_j)^⊤∇φ(X̅^x_j)=τ |σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2.
Thus, E_i+1|Z_j|^2= E_i+1( E_j|Z_j|^2)=τ E_i+1|σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2 for any j>i.
Combining the above relation, (<ref>), (<ref>) and Theorem <ref>, we have that for j>i and τ<1,
| E[(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))(τ^-1|Z_j|^2-π_τ(|σ^⊤∇φ|^2))]|
= | E[(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))(τ^-1 E_i+1|Z_j|^2-π_τ(|σ^⊤∇φ|^2))]|
≤ E[|τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2)|| E_i+1(|σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2)-π_τ(|σ^⊤∇φ|^2)|]
≤ Ke^-ξ_1(j-i-1)τ E[|τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2)|(1+|X̅^x_i+1|^q'+1)]
≤ Ke^-ξ_1(j-i)τ( E|τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2)|^2)^1/2(1+ ( E|X̅^x_i+1|^2q'+2)^1/2)
≤ K(x)e^-ξ_1(j-i)τ.
Plugging (<ref>)-(<ref>) into (<ref>) yields
E(τ∑_k=0^m-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2))^2
≤ Kτ^4m+K(x)τ^4∑_0≤ i<j≤ m-1 e^-ξ_1(j-i)τ
= Kτ^2+K(x)τ^4∑_i=0^m-1∑_j=i+1^m-1e^-ξ_1(j-i)τ
≤ Kτ^2+K(x)τ^4m∑_j=1^∞e^-ξ_1jτ≤ K(x)τ→0 as τ→ 0,
which leads to (<ref>) and finishes the proof.
Suppose that Assumptions <ref>-<ref> hold. Then for any x∈ R^d, R_τ P⟶0 as τ tends to 0.
We will prove lim_τ→0 E| R_τ|=0 to obtain the conclusion.
Estimate of R_τ,1. By Theorem <ref>, (<ref>) and (<ref>),
E| R_τ,1|≤ Kτ^1/2(1+sup_n≥ 0 E|X̅^x_n|)≤ K(x)τ^1/2.
Estimate of R_τ,2. By means of (<ref>), Assumption <ref>, Theorem <ref>, (<ref>) and the Hölder inequality, we have that for any p≥ 1, i=2,3,4 and j=1,2,
sup_k≥0 E|b(X̅^x_k)|^p ≤ K(1+sup_k≥0 E|X̅^x_k|^pq)≤ K(1+|x|^pq),
sup_k≥0 E∇^j b(X̅^x_k)_⊗^p ≤ K(1+sup_k≥0 E|X̅^x_k|^pq')≤ K(1+|x|^pq'),
sup_k≥0 E∇^iφ(X̅^x_k)_⊗^p ≤ K(1+sup_k≥0 E|X̅^x_k|^(i-1)pq')≤ K(1+|x|^(i-1)pq').
Noting that
b(X̅^x_k+1)-b(X̅^x_k)=∇ b(X̅^x_k)ΔX̅^x_k+∫_0^1(1-θ)∇^2 b(X̅^x_k+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k)θ,
one obtains from (<ref>) that
R_τ,2= -τ^3/2∑_k=0^m-1∇φ(X̅^x_k),∇ b(X̅_k)σ(X̅^x_k)Δ W_k
-τ^5/2∑_k=0^m-1∇φ(X̅^x_k),∇ b(X̅^x_k)b(X̅^x_k+1)
-τ^3/2∑_k=0^m-1∫_0^1(1-θ)∇φ(X̅^x_k),∇^2b(X̅^x_k+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k)θ
=: R_τ,2^1+ R_τ,2^2+ R_τ,2^3.
By the property of conditional expectations, for i<j,
E[∇φ(X̅^x_i),∇ b(X̅^x_i)σ(X̅^x_i)Δ W_i∇φ(X̅^x_j),∇ b(X̅^x_j)σ(X̅^x_j)Δ W_j]
= E[∇φ(X̅^x_i),∇ b(X̅^x_i)σ(X̅^x_i)Δ W_i∇φ(X̅^x_j),∇ b(X̅^x_j)σ(X̅^x_j) E_j(Δ W_j)]=0.
The above relation, combined with the boundedness of σ, (<ref>) and (<ref>), gives
E| R_τ,2^1|^2 =τ^3∑_k=0^m-1 E∇φ(X̅^x_k),∇ b(X̅^x_k)σ(X̅^x_k)Δ W_k^2
≤ Kτ^4∑_k=0^m-1 E|∇ b(X̅^x_k)|^2≤ K(x)τ ^2.
Applying the Hölder inequality, (<ref>) and (<ref>)-(<ref>), we have
E| R_τ,2^2|≤ Kτ^5/2∑_k=0^m-1( E|∇ b(X̅^x_k)|^2)^1/2( E| b(X̅^x_k+1)|^2)^1/2≤ K(x)τ^1/2.
Further, for any p≥ 1 and k≥ 0, it follows from the Minkowski inequality, (<ref>) and the boundedness of σ that
( E|ΔX̅^x_k|^p)^1/p≤τ( E|b(X̅^x_k+1)|^p)^1/p+K( E|Δ W_k|^p)^1/p≤ K(1+|x|^q)τ^1/2.
This together with the Hölder inequality, Assumption <ref> and Theorem <ref> yields
E| R_τ,2^3|≤ Kτ^3/2∑_k=0^m-1( E|ΔX̅^x_k|^4)^1/2(1+( E|ΔX̅^x_k|^2q')^1/2+( E|X̅^x_k|^2q')^1/2)≤ K(x)τ^1/2.
In this way, we get E| R_τ,2|≤ ( E| R_τ,2^1|^2)^1/2+ E| R_τ,2^2|+ E| R_τ,2^3|≤ K(x)τ^1/2.
Estimate of R_τ,3. Notice that for i<j,
E[∇^2φ(X̅^x_i),σ(X̅^x_i)(τ I_D-Δ W_iΔ W_i^⊤)σ(X̅^x_i)^⊤_HS
·∇^2φ(X̅^x_j),σ(X̅^x_j)(τ I_D-Δ W_jΔ W_j^⊤)σ(X̅^x_j)^⊤_HS]
= E[∇^2φ(X̅^x_i),σ(X̅^x_i)(τ I_D-Δ W_iΔ W_i^⊤)σ(X̅^x_i)^⊤_HS
·∇^2φ(X̅^x_j),σ(X̅^x_j) E_j(τ I_D-Δ W_jΔ W_j^⊤)σ(X̅^x_j)^⊤_HS]=0.
Combining (<ref>), (<ref>), the boundedness of σ and (<ref>), we arrive at
E| R_τ,3|^2 =τ/4∑_k=0^m-1 E∇^2φ(X̅^x_k),σ(X̅^x_k)(τ I_D-Δ W_kΔ W_k^⊤)σ(X̅^x_k)^⊤_HS^2
≤ Kτ∑_k=0^m-1 E(∇^2φ(X̅^x_k)^2_HS(τ^2+|Δ W_k|^4))
≤ Kτ∑_k=0^m-1( E∇^2φ(X̅^x_k)^4_HS)^1/2(τ^2+( E|Δ W_k|^8)^1/2)≤ K(x)τ.
Estimate of R_τ,4.
By (<ref>), (<ref>), (<ref>) and the Hölder inequality,
E|R_τ,4| ≤ Kτ^5/2∑_k=0^m-1( E|∇^2φ(X̅^x_k)|^2)^1/2( E|b(X̅^x_k+1)|^4)^1/2≤ K(x)τ^5/2m≤ K(x)τ.
Estimate of R_τ,5. We decompose R_τ,5 (see (<ref>)) into R_τ,5= R_τ,5^1+ R_τ,5^2 with
R_τ,5^1 :=- τ^3/2∑_k=0^m-1∇^2φ(X̅^x_k),(b(X̅^x_k+1)-b(X̅^x_k))(σ(X̅^x_k)Δ W_k)^⊤_HS,
R_τ,5^2 :=- τ^3/2∑_k=0^m-1∇^2φ(X̅^x_k),b(X̅^x_k)(σ(X̅^x_k)Δ W_k)^⊤_HS.
By the Hölder inequality, (<ref>), (<ref>), Theorem <ref> and (<ref>),
E| R_τ,5^1| ≤ Kτ^3/2msup_k≥0( E|∇^2φ(X̅^x_k)|^3)^1/3( E|Δ W_k|^3)^1/3( E|b(X̅^x_k+1)-b(X̅^x_k)|^3)^1/3
≤ K(x)(1+sup_k≥0( E|X̅^x_k|^6q-6)^1/6)sup_k≥0( E|ΔX̅^x_k|^6)^1/6
≤ K(x)τ^1/2.
Similar to (<ref>), one has that for i<j,
E [ ∇^2φ(X̅^x_i),b(X̅^x_i)(σ(X̅^x_i)Δ W_i)^⊤_HS·∇^2φ(X̅^x_j),b(X̅^x_j)(σ(X̅^x_j)Δ W_j)^⊤_HS]=0.
The above formula, combined with (<ref>), (<ref>) and the Hölder inequality, yields
E| R_τ,5^2|^2 =τ^3∑_k=0^m-1 E∇^2φ(X̅^x_k),b(X̅^x_k)(σ(X̅^x_k)Δ W_k)^⊤_HS^2
≤ Kτ^3∑_k=0^m-1( E|∇^2φ(X̅^x_k)|^6)^1/3( E|b(X̅^x_k)|^6)^1/3( E|Δ W_k|^6)^1/3
≤ K(x)τ^2.
Thus, E| R_τ,5|≤ E| R^1_τ,5|+( E| R^2_τ,5|^2)^1/2≤ K(x)τ^1/2.
Estimate of R_τ,6.
Plugging ΔX̅^x_k=b(X̅^x_k+1)τ+σ(X̅^x_k)Δ W_k into (<ref>) gives R_τ,6=∑_i=1^4 R_τ,6^i with
R_τ,6^1 :=-τ^7/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(b(X̅^x_k+1),b(X̅^x_k+1),b(X̅^x_k+1))θ,
R_τ,6^2 :=-3τ^5/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(b(X̅^x_k+1),b(X̅^x_k+1),σ(X̅^x_k)Δ W_k)θ,
R_τ,6^3 :=-3τ^3/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(b(X̅^x_k+1),σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k)θ,
R_τ,6^4 :=-τ^1/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k)θ.
Similar to the derivation of (<ref>), one can use (<ref>), (<ref>) and Theorem <ref> to get that for any p≥1 and τ<1,
E∇^3φ(X̅^x_k+θΔX̅^x_k)^p_⊗≤ K(1+|x|^2pq'q), θ∈[0,1].
By (<ref>), (<ref>) and the Hölder inequality, one has
E| R_τ,6^1|≤ K(x)τ^3/2, E| R_τ,6^2|≤ K(x)τ, E| R_τ,6^3|≤ K(x)τ^1/2.
Further, applying the Taylor expansion for ∇^3φ, we write R_τ,6^4= R_τ,6^4,1+ R_τ,6^4,2, where
R_τ,6^4,1 :=-τ^1/2/6∑_k=0^m-1∇^3φ(X̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k),
R_τ,6^4,2 :=-τ^1/2/2∑_k=0^m-1∫_0^1∫_0^1∇^4φ(X̅^x_k+rθΔX̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,ΔX̅^x_k) r θ(1-θ)^2θ.
Similar to the proof for (<ref>), we have that for any p≥1 and τ<1,
sup_k≥0 E∇^4φ(X̅^x_k+rθΔX̅^x_k)^p_⊗≤ K(1+|x|^3pq'q), r,θ∈[0,1].
This together with the Hölder inequality and (<ref>) gives
E| R_τ,6^4,2|≤ Kτ^1/2msup_k≥0[sup_r,θ∈[0,1]( E∇^4φ(X̅^x_k+rθΔX̅^x_k)^3_⊗)^1/3( E|Δ W_k|^9)^1/3( E|ΔX̅_k^x|^3)^1/3]≤ K(x)τ^1/2.
Since Δ W_j is F_t_j-independent, for any i<j,
E[∇^3φ(X̅^x_i)(σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i)∇^3φ(X̅^x_j)(σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j)]
= E[∇^3φ(X̅^x_i)(σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i) E_j[∇^3φ(X̅^x_j)(σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j)]]
= 0,
where we used the property of conditional expectations and
E(Δ W_j^p_1Δ W_j^p_2Δ W_j^p_3)=0 ∀ p_1,p_2,p_3∈{1,2,…,D},
with Δ W_j^r being the rth component of Δ W_j. In this way, we get
E| R_τ,6^4,1|^2=τ/36∑_k=0^m-1 E[∇^3φ(X̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k)]^2≤ K(x)τ^2,
due to (<ref>). Thus, it holds that E| R_τ,6^4|≤ K(x)τ^1/2 for τ<1, which combined with (<ref>) yields E| R_τ,6|≤ K(x)τ^1/2.
Combining the above estimates for R_τ,i, i=1,…,6, we obtain lim_τ→0 E| R_τ|=0. This gives the desired conclusion. □
§ NUMERICAL EXPERIMENTS
In this section, we perform numerical experiments to verify our theoretical results. First, for a given test function h, we obtain the approximation of the ergodic limit π(h) numerically by virtue of the fact lim_t→∞ E(h(X(t)))=π(h) (see (<ref>)). Here, lim_t→∞ E(h(X(t))) is simulated by the numerical solution {X̅_n}_n≥0 of the BEM method. More precisely, let the step-size τ be small enough, N sufficiently large, and use the Monte–Carlo method to simulate the expectation. Then we have
lim_t→∞ E(h(X(t)))≈1/M∑_i=1^Mh(X̅_N^i), with {X̅_N^i}_i=1^M being M samplings of X̅_N. Second, we verify the CLT for Π_τ,α, α∈(1,2]. Denote Z_τ,α(h)= 1/τ^α-1/2(1/τ^-α∑_k=0^τ^-α-1h(X̅_k)-π(h)). Then, the CLT shows that for any f∈ C_b( R^d), lim_τ→0 Ef(Z_τ,α(h))=∫_ R^df(x) N(0,π(|σ^⊤∇φ|^2))( x). We will numerically verify that Ef(Z_τ,α(h)) tends to some constant as τ decreases.
Example 5.1. Consider the following SODE with Lipschitz diffusion coefficient:
X(t)=-(X^3(t)+8X(t)) t+sin(X(t)) W(t),
X(0)=x∈ R.
It is not difficult to verify that the coefficients of the above equation satisfy Assumptions <ref>-<ref>. First, we numerically simulate the ergodic limit π(h) using the aforementioned method. The expectation is realized by 5000 sample paths. Fig. <ref> displays the evolution of Eh(X̅_n) w.r.t. n starting from different initial values. It is observed that the ergodic limits are 1 and 0 for h=sin(x)+1 and h=x^4, respectively.
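The procedure above can be sketched in a few lines of Python. The script below integrates Example 5.1 with the BEM method (the implicit step is solved by a fixed-point iteration, which converges for small τ) and averages h over 5000 independent paths; the step-size τ=0.01 and the time horizon are illustrative choices, not necessarily the exact values used for the figures and tables.

```python
import numpy as np

b = lambda x: -(x**3 + 8.0 * x)      # drift of Example 5.1
sigma = lambda x: np.sin(x)          # diffusion of Example 5.1

def bem_paths(x0, tau, n_steps, n_paths, rng):
    """Evolve n_paths independent BEM trajectories and return the final states."""
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(tau), size=n_paths)
        explicit = x + sigma(x) * dW
        y = x.copy()
        for _ in range(50):          # fixed-point iteration for the implicit drift term
            y = explicit + b(y) * tau
        x = y
    return x

rng = np.random.default_rng(1)
samples = bem_paths(x0=1.0, tau=0.01, n_steps=2000, n_paths=5000, rng=rng)
print("pi(sin(x)+1) ~", np.mean(np.sin(samples) + 1.0))   # expected to be close to 1
print("pi(x^4)      ~", np.mean(samples**4))              # expected to be close to 0
```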
Tables <ref>-<ref> show the evolution of Ef(Z_τ,2(h)) w.r.t. τ, where the initial value is x=1 for Tables <ref>-<ref> and x=-2 for Tables <ref>-<ref>. It is observed that in all cases Ef(Z_τ,2(h)) tends to some constant as τ decreases. We also find that the CLT of Π_τ,2 holds for h of super-linear growth. See also Section <ref> for the discussion about this problem.
Example 5.2. Consider the following SODE with non-Lipschitz diffusion coefficient:
X(t)=-(X^3(t)+10X(t)) t+0.5X^2(t) W(t),
X(0)=x∈ R.
Notice that the above equation satisfies Assumptions 2.1-2.4 of <cit.>. Thus,
{X(t)}_t≥0 admits a unique invariant measure π.
Fig. <ref> displays the evolution of Eh(X̅_n) w.r.t. n starting from different initial values. In this case, the numerical ergodic limit is 0. Table <ref> reflects the evolution of Ef(Z_τ,α) as τ decreases. It is observed in Table <ref> that Ef(Z_τ,α) will tend to 0 for three different parameters α=1.2,1.5,2. We remark that the CLT may still hold for the BEM method of SODEs with non-Lipschitz diffusion coefficients, as is numerically shown in this example.
§ CONCLUSIONS AND FUTURE WORK
In this work, we prove the CLT for the temporal average of the BEM method, which characterizes the asymptotics of the BEM method in distribution.
The drift coefficients of underlying SODEs are allowed to grow super-linearly. Different proof strategies are used for different deviation orders, which relies on the relationship between the deviation order and optimal strong order of the BEM method.
In fact, it is possible to weaken the conditions of Theorems <ref>-<ref>, and we refer to the following two aspects.
* Conditions on h. By revisiting the whole proof of Theorem <ref>, it is observed that the requirement for the test function h can be lowered. If we let ∇^i h∈ Poly(q'', R^d), i=0,1,…,4 instead of h∈ C^4_b( R^d), then the main difference lies in the regularity of φ. In fact, it holds that ∇^i φ∈ Poly(L_0, R^d), i=0,1,…,4 for some integer L_0 dependent on q',q''. And this will make no difference to the conclusions of Lemmas <ref> and <ref>, in view of Theorem <ref>. Thus, the CLT still holds for Π_τ,2(h) for a class of unbounded h. Similarly, Theorem <ref> also holds for ∇^i h∈ Poly(q'', R^d), i=0,1,…,4. The above facts
are also observed in the numerical experiments in Section <ref>.
* Conditions on σ. Assume that σ is unbounded but globally Lipschitz. Let Assumption <ref> hold with c_1>15/2L_1^2 replaced by c_1 being sufficiently large. We can follow the same argument in Theorem <ref> to give the pth moment boundedness for the BEM method. Roughly speaking, in this case, (<ref>) still holds. Similar to (<ref>), we obtain
(1+pc_1τ)|X̅^x_n+1|^2p≤(|X̅^x_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n +K(τ+|Δ W_n|^2)+K|X̅^x_n|^2|Δ W_n|^2)^p
due to the linear growth of σ. By the similar analysis for (<ref>), one can show that
E|X̅^x_n+1|^2p≤(1+A(p,D)τ)/(1+pc_1τ) E|X̅^x_n|^2p+K(p)(1+|x|^2p-2)τ/(1+pc_1τ)
for some A(p,D)>0 dependent on p and D. Using the condition that c_1 is sufficiently large, one finally can obtain sup_n≥0 E|X̅^x_n|^r≤ K(1+|x|^r) for some r large enough. Thus, other conclusions still hold on basis of the moment boundedness of {X̅_n}_n≥0. Finally, one can establish the CLT for Π_τ,α(h) when σ is Lipschitz, provided that the dissipation parameter c_1 is sufficiently large. When σ is Lipschitz or of super-linear growth, it is interesting to study how to prove the pth (p>2) moment boundedness of the BEM method in the infinite time horizon for a relatively small c_1. We will study this problem in the future.
|
http://arxiv.org/abs/2307.05929v1 | 20230712054921 | A New Dataset and Comparative Study for Aphid Cluster Detection | [
"Tianxiao Zhang",
"Kaidong Li",
"Xiangyu Chen",
"Cuncong Zhong",
"Bo Luo",
"Ivan Grijalva Teran",
"Brian McCornack",
"Daniel Flippo",
"Ajay Sharda",
"Guanghui Wang"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
A New Dataset and Comparative Study for Aphid Cluster Detection
Tianxiao Zhang, Kaidong Li, Xiangyu Chen, Cuncong Zhong, Bo Luo, Ivan Grijalva Teran, Brian McCornack, Daniel Flippo, Ajay Sharda, Guanghui Wang
===========================================================================================
Aphids are one of the main threats to crops, rural families, and global food security. Chemical pest control is a necessary component of crop production for maximizing yields; however, it is unnecessary to apply chemical approaches to the entire field, in consideration of the environmental pollution and the cost. Thus, accurately localizing the aphids and estimating the infestation level is crucial to the precise local application of pesticides. Aphid detection is very challenging as each individual aphid is really small and all aphids are crowded together as clusters. In this paper, we propose to estimate the infestation level by detecting aphid clusters. We have taken millions of images in the sorghum fields, manually selected 5,447 images that contain aphids, and annotated each aphid cluster in the image. To use these images for machine learning models, we crop the images into patches and create a labeled dataset with over 151,000 image patches. Then, we implement and compare the performance of four state-of-the-art object detection models.
§ INTRODUCTION
Annually, 37% of crops are lost to pest damage, and around 13% of crop damage is caused by insects. Most farmers consider utilizing pesticides to eliminate insects, and a tremendous amount of money is spent on pesticides each year. However, much of this pesticide use is wasteful, since only a small portion of the applied pesticides acts directly on the insects, while the rest is wasted and may even pollute the environment.
Under several management scenarios, only a small fraction of areas receive a justified amount of pesticide, while other areas lose yield due to delayed timing and damage by pests, and remaining areas receive a superfluous spray application when there is no pest presence.
However, the development of robotic technology for insecticide application has not been explored primarily due to unavailable camera vision capabilities to locate the pest incidence and severity within a complicated crop canopy. There is an urgent demand for an intelligent application system designed to accurately spray on the infested canopy but only where infestations are present.
Object detection and recognition is one of the most critical components in agricultural robotics, and detecting small insects like aphids can be especially challenging. Convolutional neural network (CNN) was first used in <cit.> for object detection and recognition.
CNN models have a wide range of applications in medical image analysis <cit.><cit.> and object detection <cit.><cit.>.
Nonetheless, aphids are so tiny that even state-of-the-art detection models cannot accurately localize them individually.
Most existing aphid detection models <cit.><cit.><cit.> attempt to detect individual aphids, but the performance of the detection models on the aphid dataset is still not satisfactory. In addition, those models are trained on idealized aphid images, so it is even harder for them to detect aphids in real-world scenarios, since most aphids are clustered together on the leaves and accurately separating and detecting dense aphids individually is almost impossible. Moreover, different illuminations and shades in different images might cause domain shifts, which also severely affect the accuracy of CNN models on tiny aphid detection <cit.><cit.>.
In this paper, we collected millions of images from the sorghum field over two seasons, and then manually selected and annotated 5,447 images affected by aphids. Instead of labeling the aphid individually, we propose annotating aphid colonies as clusters and creating the bounding boxes based on the aphid clusters. Because bounding boxes are always rectangles and cover larger areas, closely located bounding boxes are merged together without affecting aphid detection.
In addition, we implemented and evaluated the performance of four state-of-the-art detection models on the generated aphid dataset.
The study makes it possible to estimate the aphid infestation levels from real images to assist farmers in making timely control of infestation. The labeled dataset and developed learning models will be freely accessible to the research community on the author's homepage.
§ DATASET GENERATION
§.§ Data Collection
Most of the aphids are located under the leaves and the majority of them are densely clustered.
In order to reduce the influence of occlusion among the sorghum leaves, we developed an imaging rig with three GoPro cameras that can capture the canopy leaves at three different heights corresponding to view 1, view 2, and view 3, respectively. Thus, we can enrich the dataset by taking pictures at three different views and capturing aphid clusters from different perspectives. Three sample annotated images corresponding to the three views are shown in Fig <ref>. Using this device, we have captured millions of images over two growing seasons of a sorghum farm. Most of the images are free of aphids. We manually examine all images and eliminate those without aphids, and eventually, select 5,447 images that contain adequate aphids. The percentages of photos corresponding to the three views are shown in Fig <ref>.
§.§ Data Labeling
The aphid clusters in the selected images are manually annotated by professionally trained researchers using Labelbox[https://labelbox.com/].
We first create segmentation masks for each image and then generate detection bounding boxes based on the masks. In total, we have labeled 59,767 aphid clusters.
Aphid Cluster Definition Data labeling is a labor-intensive process. The task is distributed among 8 trained research assistants. Therefore, it is crucial to have an efficient and consistent definition of what is an aphid cluster before labeling. Ambiguous criteria will confuse deep learning models during training.
In the fields, the aphid clusters can appear in a variety of patterns (low density, high density, different sparsity) as shown in Fig <ref>. If we aim to label each individual aphid regardless of its density, it will take an excessive amount of time and resources will be wasted on areas without critical threat. If the threshold is set too high, areas with substantial aphid infestation might be ignored, resulting in financial loss. After discussions with agricultural experts, we define the aphid cluster as “an area with more than or equal to six closely located aphids". A further interpretation of the threshold is demonstrated in Fig <ref>.
Labeling
After removing some redundant images and images without aphid clusters, we have labeled in total 5,447 photos. Redundant images are those taken from very close viewpoints, resulting in visually similar images. Photos without clusters are removed because we believe deep CNN models can learn sufficient negative features from empty spaces of the remaining photos.
In summary, the statistical information of the generated masks is shown in Fig <ref>. In total, 59,767 masks are created and the sizes of masks vary greatly. 77.0% of the masks have a size smaller than 5,000 pixels. Since masks with larger sizes are rare and sparsely distributed, we only plot the histogram of masks with less than 5,000 pixels in Fig <ref>. More than half of the masks are smaller than 1,500 pixels, with the most popular size interval [201, 301]. Among all masks, the median size is 1,442 pixels and the mean is 7,867 pixels. The median is more representative, while the mean value is severely affected by the extremely large masks.
10-Fold Cross Validation
Cross validation <cit.> is a resampling method to evaluate and pick models on a small dataset. Popular computer vision datasets commonly have more than 10k images, MS COCO <cit.> has more than 200k labeled images. Our dataset only has a little over 5k images. Following cross validation <cit.>, we decide to split our dataset into 10 groups. To ensure each group has a similar percentage of images from the three different views, we separately shuffle the images and split them into 10 subgroups from each view. Then the final cross validation groups are formed by picking one subgroup from each view. Thus images from each view will be evenly distributed in each group.
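A minimal sketch of this view-stratified split is given below; the dictionary of image lists and the seed are hypothetical placeholders standing in for the actual file lists.

```python
import random

def stratified_kfold_by_view(images_by_view, k=10, seed=0):
    """Split images into k cross-validation groups while preserving the view proportions."""
    rng = random.Random(seed)
    groups = [[] for _ in range(k)]
    for view, images in images_by_view.items():
        images = list(images)
        rng.shuffle(images)                     # shuffle each view separately
        for i, img in enumerate(images):
            groups[i % k].append(img)           # deal the view's images into the k groups
    return groups

# Hypothetical usage with placeholder ids standing in for the real image paths
folds = stratified_kfold_by_view({
    "view1": [f"v1_{i}.jpg" for i in range(100)],
    "view2": [f"v2_{i}.jpg" for i in range(80)],
    "view3": [f"v3_{i}.jpg" for i in range(60)],
})
print([len(f) for f in folds])
```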
Image Patches
The majority of the masks, as shown in Fig <ref>, have a size smaller than 1,500 pixels, which is less than 0.015% of the original image size (3,648 × 2,736). In addition, most detection and segmentation models are trained and tested on much smaller images. So we crop the original high-definition images into smaller 400 × 400 patches. During this process, some masks will be split across different patches and partially cut off. To ensure each mask's completeness in at least one of the final patches, the patch generation is performed with 50% overlapping, meaning the next patch overlaps 50% with the previous patch both horizontally and vertically. An original 3,648 × 2,736 image will generate 221 patches for detection and segmentation.
Patch generation is conducted after dividing the dataset into 10 cross validation groups, such that information from one original photo will not leak to any other groups. Also after patch generation, those patches without an aphid cluster are discarded because CNN models should have enough negative samples just from the background of other patches. In summary, the number of patches in each cross validation group is shown in Table <ref>.
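The cropping step can be sketched as follows; exactly how the right and bottom borders are handled (padding versus shifting the last tile) is an implementation choice, so the patch count per photo in this sketch may differ slightly from the 221 quoted above.

```python
import numpy as np

def crop_patches(image, patch=400, overlap=0.5):
    """Crop an H x W (x C) image into patch x patch tiles with the given fractional overlap."""
    stride = int(patch * (1.0 - overlap))             # 200-pixel stride for 50% overlap
    h, w = image.shape[:2]
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    if ys[-1] + patch < h:                            # cover the bottom border
        ys.append(h - patch)
    if xs[-1] + patch < w:                            # cover the right border
        xs.append(w - patch)
    return [(y, x, image[y:y + patch, x:x + patch]) for y in ys for x in xs]

patches = crop_patches(np.zeros((2736, 3648, 3), dtype=np.uint8))
print(len(patches))
```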
Bounding Box Merge Since we label the aphids based on clusters, small clusters close to each other are labeled individually with well-defined boundaries. However, the generated bounding boxes of these clusters overlap with each other, as shown in Figure <ref>. From the object detection point of view, these bounding boxes should be merged as they all represent aphid clusters. Otherwise, they may cause confusion during learning. In our application, we merge the bounding boxes of the clusters if their closest distance is less than or equal to 10 pixels. Our experiments show that this process will greatly boost detection accuracy.
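A possible implementation of this merging rule is sketched below; boxes are given as (x1, y1, x2, y2), and the closest distance between two boxes is taken here as the Euclidean gap between their nearest points, which is an assumption about the exact distance convention used during annotation post-processing.

```python
import math

def box_gap(a, b):
    """Closest distance between two axis-aligned boxes (x1, y1, x2, y2); 0 if they overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return math.hypot(dx, dy)

def merge_close_boxes(boxes, max_gap=10.0):
    """Repeatedly merge boxes whose closest distance is <= max_gap into their union box."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if box_gap(boxes[i], boxes[j]) <= max_gap:
                    a, b = boxes[i], boxes[j]
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes

print(merge_close_boxes([(0, 0, 50, 50), (55, 0, 100, 40), (300, 300, 350, 350)]))
# -> the first two boxes (5-pixel gap) are merged; the distant one is kept
```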
Tiny Cluster Removal The process of image cropping may create some extremely small clusters, and most of them are around the border or corner of the patches. In practice, these small labels are meaningless for model training and affection estimation. Thus, we remove the small cluster masks whose areas are less than 1% of the patch. The results after merging and removal are illustrated in Table <ref>.
§ OBJECT DETECTION MODELS
In object detection, both classification and localization are required for recognizing and localizing the objects in the videos or images. Typically, the detection models have two separate branches for classification and localization, respectively. The classification branch is similar to most classification tasks which classify the contents included by the bounding boxes. The localization branch predicts the offsets to the anchor boxes for anchor-based detection models or to the anchor points for anchor-free detection models and then the offsets would be converted to the bounding box coordinates based on the anchor boxes or anchor points for final predictions.
Since the IoU thresholds are extremely important for detection models, recent detection models tend to calculate the adaptive thresholds based on the statistical properties among the samples <cit.><cit.> or compute the dynamic thresholds based on the training status <cit.><cit.>.
In this study, we implemented the following four state-of-the-art object detectors and evaluated their performance on aphid detection based on the created dataset. (1) ATSS (Adaptive Training Sample Selection) <cit.> calculates the adaptive IoU thresholds based on the mean and standard deviation of the IoUs between the candidate anchor boxes and the ground truth objects to select the positive samples instead of using fixed thresholds. (2) GFLV2 (Generalized Focal Loss V2) <cit.> utilizes statistics of bounding box distributions as the Localization Quality Estimation (LQE). Thus the high-quality bounding boxes could have a high probability to be kept instead of suppressed with the NMS (Non-Maximum Suppression) algorithm. (3) PAA <cit.> dynamically divides the positive samples and negative samples using GMM (Gaussian Mixture Model) based on the classification and localization scores of the samples in a probabilistic way. (4) VFNet <cit.> is based on ATSS <cit.> algorithm, but proposes IoU-aware Classification Score (IACS) as the classification soft target using the IoUs between the predicted bounding boxes and their corresponding ground truth objects. Thus high-quality predicted boundary boxes might have high scores than those low-quality boxes. In addition, star-shaped box feature representation is introduced to further refine the predicted boxes so that they could be closer to the ground truth objects.
The aforementioned detection models are state-of-the-art approaches that have excellent performance on COCO benchmark <cit.>. Since the labels of our created dataset are based on the aphid clusters instead of the single aphid, we can directly apply them in this problem and train these models using the created dataset.
§.§ Model Training
All models use 0.001 as the initial learning rate, with 12 training epochs in total. The initial learning rate is used for 9 epochs and then reduced by a factor of 10 for the last 3 epochs. SGD (Stochastic Gradient Descent) is employed as the optimizer. The momentum and weight decay are 0.9 and 0.0005, respectively. The batch size is 16 and the number of warmup iterations is 500. The detection models are implemented in PyTorch with Python 3 <cit.>.
The evaluation metric for detection models is Average Precision (AP) which computes the area under the PR curve. The PR curve plots the Precision rate versus the Recall rate for the detection models. The precision rate indicates the correctly predicted samples over the entire predicted samples. The recall rate represents the correctly predicted samples over the entire ground truth samples.
For object detection, only predicting the correct labels is not enough since we should also consider the bounding box accuracy. Typically, IoUs (Intersection over Unions) between the predicted bounding boxes and their corresponding ground truth boxes are utilized to judge the quality of the predicted boxes. Typically, IoU is calculated by the ratio of the intersection area over the union area of two bounding boxes. PASCAL VOC <cit.> selects 0.5 as the IoU threshold which indicates that the detection is a success if the IoU between the predicted bounding box and the ground truth bounding box is over 0.5 if the classification label is correctly predicted. COCO <cit.> chooses the IoU threshold from 0.5 to 0.95 with the step of 0.05, and calculates the AP for each of the thresholds and finally averages them. In this paper, we utilize the IoU threshold from PASCAL VOC and the generated annotation files are also in xml format as PASCAL VOC <cit.>.
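For reference, the IoU of two axis-aligned boxes used in this threshold test can be computed as in the short Python sketch below.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as a true positive at the PASCAL VOC threshold if IoU >= 0.5
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))   # 5000 / 15000 = 1/3
```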
§ RESULTS
In the experiments, we utilize four state-of-the-art detection models for 10-fold cross validation. After the 400×400 patches are cropped from the high-resolution images, the patches are divided into 10 groups for cross validation. For 10-fold cross validation, one fold is employed as testing data and the others are merged as the training data for each validation. Since we label the aphids based on clusters, some small clusters that are close to each other are labeled individually, which compromises the performance of the detection models and is not meaningful in real-world applications. We could simply treat them as a whole if they are close enough to each other. Thus we also pre-process the ground truth labels so that the bounding boxes for small clusters are merged if they are close enough to each other. In our experiments, the bounding boxes of the clusters are merged if their closest distance is less than or equal to 10 pixels, as illustrated in Figure <ref>. By merging close clusters, small clusters might be merged as a whole object so that the detection models could easier recognize those merged clusters. Some small clusters are extremely difficult to be detected even state-of-the-art detectors are utilized and some clusters are densely located and hard to separate individually. Thus merging those clusters that are close to each other might boost the performance of the detection models and be more applicable in real-world scenarios.
In addition, since we crop small 400×400 patches from the original images, some extremely small partial clusters might exist at the border of the patches, which contribute nothing to the training process but greatly compromise the accuracy on the testing data.
Thus we also remove the extremely small clusters (i.e. areas that are less than 1% of the patch) after the close clusters are merged. After removing those small clusters that are less than 1% of the area of the patch, the performance is significantly improved for all detection models, which is illustrated in Table <ref>.
The performance across the 10-fold cross validation among all detection models is illustrated in Table <ref>. AP (Average Precision) and Recall are recorded in the format of mean± std. The mean and standard deviation (std) are calculated across all 10-fold validations. In the first row, “original" indicates the detection models are applied to the originally labeled dataset; “+merge 10" illustrates the results after merging the clusters whose closest distance is within 10 pixels; “+rm 0.01" stands for the results after removing the small clusters whose areas are less than 1% of the patches.
We can see from Table <ref> that all detection models achieve similar results in terms of
average precision, however, the recall rate of PAA <cit.> is slightly higher than the other detectors since the number of predicted bounding boxes of PAA is higher than that of other detection models. Both AP and recall rate have been greatly increased after cluster merging (+merge 10) and removal (+rm 0.01).
The above results are obtained with IoU setting to 0.5. We have also tested the influence of other IoU thresholds. In general, lower IoU yields better average accuracy and vice versa. If the locations of the aphid clusters are not concerned, a lower IoU threshold may be applied.
§ DISCUSSION
Using deep learning models to recognize and detect objects in real-world scenarios has become popular and successful in recent years. Recognizing and localizing insects such as aphids is extremely difficult due to their small size. Moreover, it is not meaningful to detect individual aphids if we intend to eliminate them for agricultural purposes. It is almost impossible to attain perfect pictures to recognize and detect aphids in real-world fields. Since aphids frequently form dense clusters, detecting them as clusters is much more applicable, as it is hard to label or detect them individually. Thus we collected the images from the fields and labeled the aphids based on the clusters.
Labeling aphid clusters is never effortless since the aphid clusters do not have very clear boundaries like common objects such as cars, humans, etc. Although they frequently cluster together, the shapes and sizes of the aphid clusters are irregular and vary significantly from one aphid cluster to another aphid cluster. Thus we label the aphid clusters with masks first, then generate the bounding boxes of the aphid clusters based on the masks, as illustrated in Figure <ref>. From Figure <ref>, the masks of the aphid clusters do not have regular shapes and sizes, thus they might be more difficult to be detected compared to other objects that have regular shapes and sizes. Another advantage of aphid cluster labeling is that the areas of the masks or bounding boxes could more or less represent the severity of the infection for the aphids on the leaves. If the areas of the aphid clusters are large, the leaves may be seriously infected and require some measures of protection.
Labeling aphid clusters is a challenging task due to their irregular shapes and sizes. Thus the initial labeling might not be that excellent and some post-processing is necessary. Merging the bounding boxes of neighboring small clusters is a simple and effective approach to improve the detection models for detecting those clusters since the performance of most state-of-the-art object detectors is not that pleasant for recognizing small objects. The performance of the detection models is improved after the bounding boxes are merged for neighboring clusters, as demonstrated in Table <ref>. Due to cropping, there are some small partial clusters at the border of the cropped patches, which are meaningless for the problem and harmful to model training. Thus removing those small clusters is meaningful for both training and testing. The performance of the detection models is greatly improved after those small clusters are removed, as shown in Table <ref>. With our post-processing of the initially labeled ground truth, the dataset and annotations are more suitable for detection, and commonly used detection models could be directly employed for the dataset and annotations, without any specific modifications for small objects.
In the experiments, we implemented four state-of-the-art detection models that achieve excellent performance on the COCO benchmark <cit.>. They yield similar results for each validation fold. PAA <cit.> has a slightly higher recall rate than the other detection models since it generates more predicted boxes. Some detection results obtained with VFNet <cit.> as the detection model are shown in Figure <ref>. The examples illustrated in Figure <ref> are excellent results for aphid cluster detection. Thus, most commonly used detection models can be employed for detecting aphid clusters without any specific modifications. However, due to the irregular sizes and shapes of the aphid clusters, duplicates are often generated around the ground truth clusters, which increases the number of false positives and affects the overall accuracy, as illustrated by the poor detection results in Figure <ref>. This is one of the biggest challenges in detecting aphid clusters without regular sizes and shapes. In most detection models, the NMS algorithm is employed to eliminate duplicates after the detection results are obtained. NMS uses an IoU threshold to eliminate duplicates of high-score results: it first ranks the predictions by confidence score from high to low, keeps the result with the highest score, and eliminates the other results whose bounding boxes have IoUs with it larger than the threshold. The result with the next highest score is then picked from the remaining results, and the process is repeated until no results are left. The IoU threshold in NMS therefore affects how many results are eliminated from the total detection results and the accuracy for objects that are close to each other. If the threshold is too high, some duplicates might not be eliminated; if it is too low, duplicates are removed but some high-score predictions of neighboring objects are also removed, so the overall performance again suffers. Experiments were conducted to illustrate the influence of the IoU threshold on the performance of the detection models. The default NMS threshold for all four detection models is 0.6. In the experiments, the threshold is varied from 0 to 1 with a step size of 0.1, as demonstrated in Figure <ref>. As shown in Figure <ref>, an IoU threshold of 0.5 for NMS yields the best performance for all detection models and all types of annotations, close to the default threshold of 0.6 used in the experiments. Thus, the performance of the detection models on the aphid dataset cannot be largely boosted by varying the IoU threshold in the NMS algorithm. How to effectively remove duplicate predictions for aphid clusters without regular shapes and sizes is left for future work.
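The greedy NMS procedure described above corresponds to the following sketch; the example boxes and scores are made up for illustration, and the default threshold of 0.6 is the one quoted above.

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.6):
    """Greedy non-maximum suppression on (N, 4) boxes given as (x1, y1, x2, y2)."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = scores.argsort()[::-1]                 # highest confidence first
    keep = []
    while order.size > 0:
        i, rest = order[0], order[1:]
        keep.append(int(i))
        # IoU of the kept box with all remaining candidates
        ix1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        iy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        ix2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        iy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
        ious = inter / (areas[i] + areas[rest] - inter)
        order = rest[ious <= iou_thr]              # suppress likely duplicates
    return keep

# Two heavily overlapping predictions of one cluster plus a separate one
print(nms([[0, 0, 100, 100], [5, 5, 105, 105], [200, 200, 300, 300]],
          [0.9, 0.8, 0.7], iou_thr=0.6))          # -> [0, 2]
```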
§ CONCLUSION
In this paper, we have selected thousands of aphid-affected images from millions of images captured in the fields and annotated the aphids based on clusters instead of individual aphids for generic usage. Due to the irregular shapes and sizes of the aphid clusters, the initial annotations might not be suitable for the training of learning models. Thus, we merge the bounding boxes of neighboring clusters and remove the extremely small clusters. We have also evaluated and compared the performance of four state-of-the-art detectors and created a baseline of aphid detection using the created dataset. The experiments have also demonstrated the effectiveness of cluster merging and removal. The created dataset and trained detection models could be used to help farmers estimate the aphid infestation levels in the field so as to provide timely and precise pesticide application. We hope our created dataset and analysis could inspire more work on aphid detection.
§ ACKNOWLEDGMENTS
This work was partly supported in part by NSERC (RGPIN-2021-04244) and USDA (2019-67021-28996).
|
http://arxiv.org/abs/2307.04436v1 | 20230710092159 | Full event simulation of Photoproduction at NLO QCD in Sherpa | [
"Peter Meinzinger"
] | hep-ph | [
"hep-ph"
] |
Full event simulation of Photoproduction at NLO QCD in Sherpa
Peter Meinzinger
Institute for Particle Physics Phenomenology,
Durham University, Durham DH1 3LE, UK
Photoproduction is an important mode for the production of jets and electro-weak particles at lepton–lepton and lepton–hadron colliders and allows for interesting studies of exclusive production at hadron–hadron colliders. In this talk, I will review recent efforts of extending the Sherpa event generator to include the calculation of photoproduction cross sections for electron and proton beams, including the simulation of underlying events. The framework is validated using data of jet production at the HERA and LEP experiments and lepton production at the LHC. I will discuss advances towards achieving NLO-matched accuracy and fully capturing the dynamics of inclusive and exclusive photoproduction at different colliders.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
§ INTRODUCTION
The cross section of jet production at lepton–hadron or lepton–lepton collider experiments is dominated by the exchange of a virtual photon. While, in particular at the latter, this is well understood at large photon virtualities, the descriptive power of the theoretical calculations deteriorates with decreasing virtuality <cit.>. This has been reflected in decomposing the full cross section into electro- and photoproduction where the latter is identified with a regime where the photon is quasi-real and has to be seen as the incoming particle.
Simulating these events needs a different approach than the typical DIS processes. Here we report on the implementation of relevant physics and its validatation in .
§ SIMULATION IN
§.§ Photon flux
As the electron decouples from the hard interaction in the scattering, the flux of the quasi-real photons has to be calculated. In the Weizsäcker-Williams approximation <cit.> the cross section is calculated as
dσ_e p → e^' + 2j + X = σ_γ p → 2j + X(x, s)|_Q^2=0 dn(x) ,
where the electron momentum can be reconstructed from the photon and the photon virtuality is integrated out in the equivalent photon flux dn, leaving only the maximum virtuality Q^2_max as a free parameter, which has to be determined by the experimental setup and by the considered process.
For the measurements considered in this study, the photon flux for electron beams includes a mass-dependent correction as proposed in <cit.>:
dn(x) = α_em/(2π) dx/x [ [ 1 + (1 - x)^2 ] log( Q^2_max/Q^2_min ) +
2 m_e^2 x^2 ( 1/Q^2_min - 1/Q^2_max ) ]
Here, x is the fraction of the photon momentum with respect to the electron momentum, m_e is the electron mass and Q_min/max are the minimum and maximum photon virtualities, where the former is given by kinematic constraints as Q^2_min = m_e^2 x^2/(1 - x).
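As an illustration, the flux above is simple to evaluate numerically. The following minimal Python sketch is not part of the Sherpa implementation; the function name and the example value of Q^2_max are assumptions chosen purely for illustration.

import numpy as np

ALPHA_EM = 1.0 / 137.035999  # fine-structure constant
M_E = 0.000510998946         # electron mass in GeV

def ww_flux(x, q2_max):
    """Equivalent-photon flux dn/dx for an electron beam, including the
    electron-mass correction term; x is the photon momentum fraction and
    q2_max the maximum photon virtuality in GeV^2 (set by the analysis)."""
    q2_min = M_E**2 * x**2 / (1.0 - x)   # kinematic lower bound on Q^2
    return ALPHA_EM / (2.0 * np.pi * x) * (
        (1.0 + (1.0 - x)**2) * np.log(q2_max / q2_min)
        + 2.0 * M_E**2 * x**2 * (1.0 / q2_min - 1.0 / q2_max)
    )

# Example: flux weight for a photon carrying 30% of the beam momentum,
# with Q^2_max = 1 GeV^2 (an assumed analysis cut).
print(ww_flux(0.3, 1.0))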
§.§ Parton distributions in the photon
Initial State Radiation off the photon cannot be neglected in photoproduction of jets, necessitating the inclusion of the resolved photon component in the calculation, i.e. its hadronic structure. Hence <cit.>:
dσ_γ p → 2 j + X = dσ_γ p → 2 j + X^(hl) + dσ_γ p → 2 j + X^(pl) , with
dσ_γ p → 2 j + X^(hl) = ∑_ij∫dx f_i/γ(x, μ_F^') f_j/p(x, μ_F) dσ̂_ij (p_γ, x p_p, α_S, μ_R, μ_F, μ_F^')
dσ_γ p → 2 j + X^(pl) = ∑_j ∫dx f_j/p(x, μ_F) dσ̂_γ j (p_γ, x p_p, α_S, μ_R, μ_F, μ_F^') ,
where the superscripts stand for the hadron- and point-like photon respectively, the f_i/A are the parton distribution functions (PDFs) related to finding parton i in particle A, the μ_F, R are the factorisation and renormalisation scales, and p are the momenta.
The photon PDF obeys an evolution slightly different to hadronic PDFs, due to the presence of a QED splitting kernel, leading to
∂ f_i/γ/∂logμ^2 = α_S/2 π∑_j P_ij⊗ f_j/γ + α_em/2 π P_iγ
with P the splitting kernels and where the first term is the usual QCD evolution and the latter the QED evolution stemming from a photon splitting into two quarks.
Photon PDFs from Glück-Reya-Vogt <cit.>, Glück-Reya-Schienbein <cit.>, Slominski-Abramowicz-Levy <cit.>, and Schuler-Sjöstrand <cit.> have been included in .
As exemplified in a comparison between two PDF sets (the SAS1D parametrisation by Schuler-Sjöstrand and the set by Slominski-Abramowicz-Levy) in Fig. <ref>, there are large deviations, especially in the gluon distribution function.
The distinction of direct and resolved processes can not be maintained at Next-to-Leading-Order (NLO) due to the ambiguity of real emissions. While the resolved-photon processes can be computed at NLO analogously to jet production in p-p collisions, the direct-photon processes show divergences stemming from the photon splittings P_iγ. However, in <cit.> it was shown that these divergences cancel against the resolved-photon cross-section as these splittings are re-absorbed into the PDF by means of the inhomogenous term proportional to P_iγ in the evolution equation. Hence, these divergences can be subtracted from dσ_γ p → 2 j + X^(pl) and care only has to be taken to use a photon PDF with the correct evolution and the same factorisation scheme as in the matrix element generation. The calculation can then be matched to the parton shower with the prescription. The main difference lies in the fact that momentum fractions have to be calculated with respect to the variable photon energies instead of fixed beam energies.
§ VALIDATION
For validation, the simulation has been compared to data from the HERA and LEP colliders, namely photoproduction of one or two jets at the ZEUS, OPAL and L3 experiments. Typical observables in these analyses are the (average) jet transverse energy E_T, the pseudo-rapidity η, cosΘ^*, which approximates the angle between the two jets, and x_γ^±, which is defined as
x_γ^± = ( ∑_j=1,2 E^(j)± p_z^(j)) / ( ∑_i∈ hfs E^(i)± p_z^(i) )
and works as a proxy to experimentally distinguish the direct from the resolved modes.
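For illustration, the observable could be reconstructed from jet and hadronic-final-state four-momenta along the lines of the sketch below; the container format and the sign convention relative to the photon direction are assumptions, not taken from the quoted analyses.

def x_gamma(jets, hadrons, sign=+1):
    """x_gamma^± of the two leading jets: the sum of (E ± p_z) of the two
    jets divided by the same sum over the whole hadronic final state.
    'jets' and 'hadrons' are lists of (E, px, py, pz) tuples; whether the
    + or - combination is used must match the photon direction convention
    of the analysis."""
    num = sum(E + sign * pz for (E, _, _, pz) in jets[:2])
    den = sum(E + sign * pz for (E, _, _, pz) in hadrons)
    return num / den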
In Fig. <ref> we studied, at LO where all PDF sets could be used, the uncertainties from the different PDF parametrisations and found significant deviations, in agreement with the large discrepancies in the parton distributions. This underlines the need for a new fit to the available data and a more thorough study of the parton distributions of the real and quasi-real photon. Overall, the simulation shows good agreement with the data within the uncertainties. The results at NLO, cf. Figs. <ref> and <ref>, were generated as an average over the SAS1M and SAS2M PDF sets, which use the scheme.
§ OUTLOOK
§.§ Minimum Bias photoproduction for the LHC
Multiple-parton interactions are non-negligible in photoproduction <cit.> and the implementation based on <cit.> has been extended to also cover parametrisations of γ p and γγ interactions.
One object of study could be the simulation of Minimum Bias events where interactions are allowed not only between the two proton beams, but also in the photon–proton and photon–photon systems, to examine systems with rapidity gaps at the LHC.
When studying semi-diffractive processes, e.g. at the LHC, the LUXqed PDF can be used to access both the elastic and the dissociative contributions to the photoproduction processes.
§.§ Diffractive photoproduction and pomeron exchange
The diffractive production of jets is often understood in terms of a pomeron exchange which is factorized into a pomeron flux and a pomeron parton distribution. At the factorisation was observed to break down, so there is ongoing interest in understanding this phenomenon <cit.>, especially in view of the upcoming Electron-Ion Collider.
The implementation of the pomeron flux is work in progress in Sherpa.
§ SUMMARY
We showed progress in Sherpa to include photoproduction at various colliders and to achieve NLO-matched accuracy in QCD. The validation has been done at LO and is ongoing for NLO. We also discussed several ideas on how to extend the framework further and use it in experimental studies at the LHC and the EIC.
|
http://arxiv.org/abs/2307.04382v1 | 20230710072704 | Experimental verification of bound and multiparticle entanglement with the randomized measurement toolbox | [
"Chao Zhang",
"Yuan-Yuan Zhao",
"Nikolai Wyderka",
"Satoya Imai",
"Andreas Ketterer",
"Ning-Ning Wang",
"Kai Xu",
"Keren Li",
"Bi-Heng Liu",
"Yun-Feng Huang",
"Chuan-Feng Li",
"Guang-Can Guo",
"Otfried Gühne"
] | quant-ph | [
"quant-ph"
] |
These authors contributed equally to this paper.
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
These authors contributed equally to this paper.
Peng Cheng Laboratory, Shenzhen 518055, China
Institut für Theoretische Physik III, Heinrich-Heine-Universität Düsseldorf, Universitätsstr. 1, 40225 Düsseldorf, Germany
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany
Fraunhofer Institute for Applied Solid State Physics IAF, Tullastr. 72, 79108 Freiburg, Germany
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
Peng Cheng Laboratory, Shenzhen 518055, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany
In recent years, analysis methods for quantum states based on randomized measurements have been investigated extensively. Still, in the experimental implementations these methods were typically used for characterizing strongly entangled states and not to analyze the different families of multiparticle or weakly entangled states. In this work, we experimentally prepare various entangled states with path-polarization hyper-entangled photon pairs, and study their entanglement properties using the full toolbox of randomized measurements. First, we successfully characterize the correlations of a series of GHZ-W mixed states using the second moments of the random outcomes, and demonstrate the advantages of this method by comparing it with the well-known three-tangle and squared concurrence. Second, we generate bound entangled chessboard states of two three-dimensional systems and verify their weak entanglement with a criterion derived from moments of randomized measurements.
Experimental verification of bound and multiparticle entanglement with the randomized measurement toolbox
Otfried Gühne
August 12, 2023
==========================================================================================================
§ INTRODUCTION
Quantum entanglement is one of the most prominent non-classical features of quantum mechanics and is often viewed as a resource in quantum information processing <cit.>. Its generation and characterization are of growing interest from both practical and fundamental perspectives. While deciding whether a given quantum state is entangled or not is in general a hard task <cit.>, many experimentally feasible schemes exist that verify entanglement in some states.
A prominent example of such schemes are entanglement witnesses, which allow for a rather simple detection of entanglement using few measurements, whereas other schemes detect non-locality by evaluating Bell-type inequalities <cit.>. On the experimental side, numerous entangled states have been generated, and multi-qubit entanglement <cit.>, high-dimensional entanglement of two particles <cit.>, and also bound entanglement <cit.> have been characterized.
When applying the standard criteria in a practical experiment, however, one always needs to align the local measurement settings strictly or to make some assumptions
on the target state to prepare, e.g., by tailoring a witness specifically for
states close to some fixed target state. To remedy this, several schemes based
on the moments of randomized correlations have been proposed
<cit.>. They provide an efficient way to characterize multi-particle correlations in states without prior knowledge about the state, nor any alignment of measurement directions. Recently, it has been shown that this approach also allows for the detection of bound entanglement <cit.>.
In this paper, we implement in a photonic setup the randomized measurement scheme to detect entanglement in mixtures of three-qubit GHZ and W-states using second moments of the random outcomes. Furthermore, we prepare bound entangled chessboard states of two qutrits and show their entanglement by evaluating an
entanglement criterion which is based on the second and fourth
moment of a randomized measurement outcome, without implementing the
random unitaries explicitly. This demonstrates that the criterion
from Ref. <cit.> is indeed strong enough to capture this weak form of entanglement, even in the presence of noise and experimental imperfections. Our implementation combines the photon's polarization and path degrees of freedom to generate precisely controlled high-dimensional states and demonstrates the versatility and efficiency of the randomized measurement approach.
§ THEORY
In the randomized measurement scheme <cit.>, a subset S⊂{1,…,n} of the parties of an n-partite quantum state ρ of fixed local dimension d is measuring some fixed, local observables in random directions. The moments of the distribution of measurement results can be written as
ℛ_S^(t) = ∫dU_1 …dU_n ⟨ U_1 τ_1 U_1^†⊗…⊗ U_n τ_n U_n^†⟩_ρ^t,
where the τ_i denote the local observables, and τ_i = 𝕀 whenever i∉ S.
The integrals are evaluated over the Haar measure of the unitary group 𝒰(d).
In case of qubit systems, one usually chooses τ_i = σ_z for i∈ S, in which case the second moments (t=2) are related to the purities of the reduced states of ρ. The sum of second moments for all subsets S of size | S| = k is proportional to what is known as the k-sector length of the state <cit.>. In particular, for three qubits the sector lengths A_k are given by
A_1 =3(ℛ_A^(2)+ℛ_B^(2)+ℛ_C^(2)),
A_2 =9(ℛ_AB^(2)+ℛ_AC^(2)+ℛ_BC^(2)),
A_3 =27ℛ_ABC^(2).
Decomposing ρ in terms of the local Pauli basis {σ_0 = , σ_1 = σ_x, σ_2 = σ_y, σ_3 = σ_z}, yields
ρ_ABC=1/8∑_i,j,k=0^3 α_ijkσ_i⊗σ_j⊗σ_k
and allows to express the sector lengths in terms of the coefficients α_ijk as follows:
A_1 = ∑_i=1^3 (α_i00^2 + perm.),
A_2 = ∑_i,j=1^3 (α_ij0^2 + perm.), and
A_3 = ∑_i,j,k=1^3 α_ijk^2.
In terms of the sector lengths, several entanglement criteria exist that detect certain entangled states. To proceed, let us recall that a three-particle state ρ_ABC is called biseparable for a partition A|BC if
ρ_A|BC
= ∑_k q_k^A ρ_k^A ⊗ρ_k^BC,
where the positive coefficients q_k^A form a probability distribution.
Similarly, the biseparable states ρ_B|CA and ρ_C|AB can be defined.
Moreover, we can consider the mixture of biseparable states for all partitions as
ρ_bisep
= p_A ρ_A|BC + p_B ρ_B|CA + p_C ρ_C|AB,
where p_A, p_B, p_C are probabilities.
A quantum state is called genuinely multipartite entangled (GME) if it cannot be written in the form of ρ_bisep.
For three-qubit states, if A_3>3, the state must be GME (the maximal value being A_3=4 for the GHZ state |GHZ⟩=1/√(2)(|000⟩+|111⟩)).
A stronger version exists, which states that if
A_2 + A_3 > 3(1+A_1),
the state cannot be biseparable w.r.t. any fixed partition, and strong numerical evidence exists that in that case, even GME states must be present <cit.>.
In this paper, we aim to detect entanglement in a mixture of a GHZ and a W state, given by
ρ(g) = g|GHZ⟩⟨GHZ|+(1-g)|W⟩⟨W|,
where g∈[0,1] denotes the amount of mixing and |W⟩ = 1/√(3)(|001⟩ + |010⟩ + |100⟩).
The family of states ρ(g) exhibits some interesting properties. First, it is supported in the symmetric subspace. This implies that
F_XYρ(g) = ρ(g)F_XY = ρ(g),
where
F_XY = ∑_i,j|ij⟩⟨ji|_XY
is the flip (swap) operator acting on the subsystems XY∈{AB, BC, CA}.
It is known that if a state lives in
the symmetric subspace, it is either fully separable or GME
<cit.>.
However, the experimentally generated version of the state ρ(g) cannot be assumed to have the symmetry due to experimental imperfections. Accordingly, the generated state can become biseparable, thus, we employ the criterion in Eq. (<ref>) to detect its entanglement.
We stress again that the criterion in Eq. (<ref>) has been conjectured to imply the presence of GME from numerical evidence, but its analytical proof has not yet been provided <cit.>.
That is, even if the criterion Eq. (<ref>) is verified experimentally, the state may be entangled for any fixed partition, but it can be a mixture of at least three biseparable states for different bipartitions.
Second, when the parameter g is outside the region of 0.297 ≤ g ≤ 0.612, the criterion in Eq. (<ref>) is satisfied.
This parameter region is very close to other well-known regions using two other entanglement measures <cit.>.
On the one hand, the three-tangle τ vanishes for 0≤ g≤ g_τ≈ 0.627, where τ measures residual (three-partite) entanglement that cannot be expressed as two-body entanglement <cit.>.
Note that the GHZ state maximizes the three-tangle, while it vanishes for the W state.
On the other hand, the sum of squared concurrences C_A|B^2 + C_A|C^2 vanishes for g_C ≈ 0.292 …≤ g≤ 1,
where the concurrence C_X|Y measures bipartite entanglement in the reduced state between the parties X and Y <cit.>.
Hence, we can conclude that the criterion in Eq. (<ref>) can detect the multi-partite entanglement of ρ(g) even in regions where the three-tangle and the concurrence vanish, if the parameter g satisfies
g_C ≤ g < 0.297 or
0.612 < g ≤ g_τ.
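These statements are easy to check numerically. The following minimal numpy sketch (an illustration, not the experimental analysis; all names are ours) builds ρ(g), extracts the Pauli coefficients α_ijk, forms the sector lengths A_1, A_2, A_3 of the equations above, and evaluates the criterion A_2+A_3>3(1+A_1); for the pure GHZ and W states it reproduces the analytical values used later in the Results section.

import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULI = [I2, X, Y, Z]

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8, dtype=complex); w[[1, 2, 4]] = 1 / np.sqrt(3)

def sector_lengths(rho):
    """Sector lengths A_1, A_2, A_3 from the Pauli coefficients alpha_ijk."""
    A = [0.0, 0.0, 0.0, 0.0]
    for i, j, k in product(range(4), repeat=3):
        op = np.kron(np.kron(PAULI[i], PAULI[j]), PAULI[k])
        alpha = np.real(np.trace(rho @ op))
        weight = sum(idx > 0 for idx in (i, j, k))  # number of non-identity factors
        A[weight] += alpha**2
    return A[1], A[2], A[3]

for g in (0.0, 0.25, 0.5, 1.0):
    rho = g * np.outer(ghz, ghz.conj()) + (1 - g) * np.outer(w, w.conj())
    A1, A2, A3 = sector_lengths(rho)
    print(g, A1, A2, A3, "criterion:", A2 + A3 > 3 * (1 + A1))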
In contrast to qubit systems, the second moments of higher-dimensional states are not automatically related to sector lengths. In fact, the choice of the local observables influences which local unitary invariants can be extracted from the moments <cit.>. Let us expand a bipartite quantum state of local dimension d in terms of some local, hermitian operator basis {λ_i}_i=0^d^2-1 with λ_0 = 𝕀, tr(λ_iλ_j) = dδ_ij, such as the Gell-Mann basis <cit.>. Then
ρ = 1/d^2[⊗ + ∑_i=1^d^2-1 (α_i λ_i ⊗ + β_i ⊗λ_i) + ∑_i,j=1^d^2-1T_ijλ_i ⊗λ_j]
is called the generalized Bloch decomposition of ρ, where the matrix T is known as the correlation matrix of ρ. For this matrix, many entanglement criteria exist, most notably the de Vicente criterion <cit.>, stating that for separable states, (| T|) ≤ d-1. While the left-hand side is not directly accessible from the moments of randomized measurements, it is possible to obtain related quantities by carefully choosing the observables τ_i as detailed in Ref. <cit.>, such that
ℛ^(2)_AB=tr(TT^†)/(d-1)^2
ℛ^(4)_AB=[1/3tr(TT^†)/(d-1)^2+2/3tr(TT^† TT^†)]/(d-1)^4.
For example, for d=3, τ_i = diag(√(3/2), 0, -√(3/2)).
The combined knowledge of these two quantities allows to detect entanglement, whenever it is incompatible with the de Vicente criterion, i.e., if the measured value of ℛ^(4)_AB is below the minimum given by
min ℛ^(4)_AB
s.t. ℛ^(2)_AB = measured, tr(|T|)≤ d-1.
Note that this lower bound can also be calculated analytically <cit.>.
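A numerical version of this minimisation can be sketched as follows. Since ℛ^(2), ℛ^(4) and tr|T| depend only on the singular values of T, one may optimise directly over those. The sketch below takes the moment–T relations literally from the equations above (so whether it reproduces the quoted separable bound depends on those normalisation conventions), and the solver choice and number of restarts are arbitrary assumptions.

import numpy as np
from scipy.optimize import minimize

d = 3
n = d**2 - 1  # number of singular values of the correlation matrix T

def R2(s):
    return np.sum(s**2) / (d - 1)**2

def R4(s):
    return (np.sum(s**2) / (d - 1)**2 / 3 + 2 * np.sum(s**4) / 3) / (d - 1)**4

def min_R4(r2_measured, restarts=50):
    """Minimise R^(4) over the singular values of T, subject to R^(2) being
    fixed at the measured value and the de Vicente bound sum(s) <= d-1."""
    cons = [
        {"type": "eq", "fun": lambda s: R2(s) - r2_measured},
        {"type": "ineq", "fun": lambda s: (d - 1) - np.sum(s)},
    ]
    best = np.inf
    for _ in range(restarts):  # several random starts; the problem is non-convex
        s0 = np.random.rand(n)
        res = minimize(R4, s0, bounds=[(0, None)] * n, constraints=cons, method="SLSQP")
        if res.success:
            best = min(best, res.fun)
    return best

print(min_R4(0.2355))  # the measured second moment quoted in the Results section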
Interestingly, there exist states which have a positive partial transpose, but can be detected to be entangled by these two moments, implying bound entanglement. A 3× 3-dimensional state from the chessboard family of bound entangled states described in Ref. <cit.> (see also Appendix C2 in <cit.>) has been identified to violate the bound strongly, which makes it a good candidate for preparing it and detecting its entanglement experimentally. It is given by
ρ_ch=N∑_i=1^4|V_i⟩⟨V_i|,
where N=1/∑_i⟨V_i|V_i⟩^2=1/4
is a normalization factor and
|V_1⟩=1/√(6)(|0⟩+2|2⟩)|0⟩+1/√(6)|11⟩,
|V_2⟩=1/√(6)(-|0⟩+2|2⟩)|1⟩+1/√(6)|10⟩,
|V_3⟩=1/√(6)|0⟩(-|0⟩+2|2⟩)+1/√(6)|11⟩,
|V_4⟩=1/√(6)|1⟩(|0⟩+2|2⟩)+1/√(6)|01⟩.
§ EXPERIMENTAL SETUP
We proceed with a description of the experimental implementation. The GHZ-W mixed states are prepared by resorting to the states entangled in polarization degree of freedom (d.o.f.) and path d.o.f. of the photon (that is, hyper-entangled) and with methods similar to the ones in Refs. <cit.>. More detailed information about the state preparation of this family of states is given in Appendix A.
When preparing the bound entangled chessboard state, it is important to ensure that all its eigenvalues remain non-negative under partial transposition. However, the chessboard state is not of full rank. Affected by the imperfections of
the experiment, slightly negative eigenvalues of the partial transposition are likely to appear. A more robust way is to prepare
the state with a level of white noise <cit.>,
ρ_ch(p)=(1-p)ρ_ch+p𝕀16.
First, let us briefly review the state preparation procedure.
As depicted in Fig. <ref>, we generate polarization
entangled (2×2 entangled) photon pairs through a spontaneous
parametric down-conversion (SPDC) process. Subsequently, we expand the dimensionality of the system by introducing the path modes u and l. This results in three modes: H_u, V_u, and V_l, where H_u represents a horizontally polarized photon occupying path u, and so on. Finally, specific operations are applied to the system to steer the state to the target ones.
Specifically, a Half-Wave Plate (HWP) H1 with the optic axis placed at 12.05^∘ is used to rotate a 390 nm horizontally polarized pump laser (with an 80 MHz repetition rate and a 140-fs pulse duration) to state |ψ_p⟩=√(5/6)|H⟩+√(1/6)|V⟩, where H and V represent the horizontal and the vertical polarization, respectively. The pump photon is then split into two photons after pumping two crossed-axis type-I β-Barium Borate (BBO) crystals in the SPDC process, transforming the state into |ψ_p⟩→√(5/6)|HH⟩+√(1/6)|VV⟩. By passing through the Beam Displacers (BDs) BD1 and BD2, the down-converted photons' H-(V-) components are directed to path u (l). And for path mode u, we have the mode labeled as H_u and V_u. By re-encoding |H⟩_u→|0⟩, |V⟩_l→|1⟩, and |V⟩_u→|2⟩, we obtain the hyper-entangled state |ψ_s⟩=√(5/6)|H_uH_u⟩+√(1/6)|V_lV_l⟩→√(5/6)|00⟩+√(1/6)|11⟩.
It is worth noting that all the four states |V_i⟩ in Eq. (<ref>) can be generated by performing local
operations on the state |ψ_s⟩,
|V_1⟩=U_2⊗𝕀|ψ⟩,
|V_2⟩=U_3⊗ U_1|ψ⟩,
|V_3⟩=𝕀⊗ U_3|ψ⟩, |V_4⟩=U_1⊗ U_2|ψ⟩,
where
U_1= (
0 1 0
1 0 0
0 0 1
),
U_2= (√(1/5) 0 √(4/5)
0 1 0
√(4/5) 0 -√(1/5) ),
U_3= (
-√(1/5) 0 √(4/5)
0 1 0
√(4/5) 0 √(1/5) ).
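These relations can be verified directly, e.g. with the short numpy sketch below, which builds |ψ_s⟩ and the |V_i⟩ as written above and confirms that the listed local unitaries map the source state onto each of them (a pure consistency check, not part of the experimental control software).

import numpy as np

r = np.sqrt
ket = lambda i: np.eye(3)[i]
psi_s = r(5/6) * np.kron(ket(0), ket(0)) + r(1/6) * np.kron(ket(1), ket(1))

U1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
U2 = np.array([[r(1/5), 0, r(4/5)], [0, 1, 0], [r(4/5), 0, -r(1/5)]])
U3 = np.array([[-r(1/5), 0, r(4/5)], [0, 1, 0], [r(4/5), 0, r(1/5)]])
I3 = np.eye(3)

s6 = 1 / r(6)
V1 = s6 * (np.kron(ket(0) + 2 * ket(2), ket(0)) + np.kron(ket(1), ket(1)))
V2 = s6 * (np.kron(-ket(0) + 2 * ket(2), ket(1)) + np.kron(ket(1), ket(0)))
V3 = s6 * (np.kron(ket(0), -ket(0) + 2 * ket(2)) + np.kron(ket(1), ket(1)))
V4 = s6 * (np.kron(ket(1), ket(0) + 2 * ket(2)) + np.kron(ket(0), ket(1)))

# check that the stated local operations generate all four |V_i> from |psi_s>
for target, U in [(V1, np.kron(U2, I3)), (V2, np.kron(U3, U1)),
                  (V3, np.kron(I3, U3)), (V4, np.kron(U1, U2))]:
    print(np.allclose(U @ psi_s, target))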
For the states |V_3⟩ and |V_4⟩, it also works by applying the unitary U_3⊗𝕀, and U_2⊗ U_1, respectively, and then exchanging the labels for the two detectors D1 and D2. Therefore, through performing the operator U_3 or U_2 on one photon of a pair and the operator U_1 or 𝕀 on the other photon simultaneously, the state |ψ_s⟩ will be transformed to each of the four states |V_i⟩. The switches between these operators are implemented by the motorized rotating HWPs and Quarter-Wave Plates (QWPs), which are controlled by the pseudo-random numbers generated from a classical computer. Two adjustable LED lights are placed before the detectors to introduce the different levels of white noise into the system.
In the measurement part, a QWP and an HWP located at path u are used to analyze the correlations between the basis elements |0⟩ and |2⟩, in which case the subsequent BD works as a Polarization Beam Splitter (PBS). When measuring the superposition of the basis elements |0⟩ and |1⟩, as well as |2⟩ and |1⟩, we first convert the path d.o.f. to the polarization d.o.f. via the wave plates and BDs, and then analyze them with the combination of the QWP and the HWP. Detailed settings of the wave plates for standard quantum state tomography are given in Tab. <ref> of Appendix B. For each measurement basis, we randomly change the photon states to every one of the four states |V_i⟩. The two-photon coincidence counts are recorded every 10 s.
When it comes to measuring the randomized correlations, as elaborated in the theoretical framework, two distinct approaches are considered. The first one involves conducting local randomized measurements, while the second entails the direct application of Pauli operators or Gell-Mann matrices. In this study, we thoroughly examine and contrast these two methodologies for three-qubit states, utilizing a LabVIEW program to facilitate the automation of numerous measurements. Further details regarding the randomized measurement techniques can be found in the Appendix C. For the bound entangled states, we opt to directly measure the 81 combinations of Gell-Mann matrices to avoid the systematic errors that may emerge from the construction of 3× 3 random unitaries.
§ RESULTS
§.§ Results for the GHZ-W mixed states
In our experiment, a set of GHZ-W mixed states ρ(g) with step size 0.05 is prepared. For each state, 4000 measurements in randomized directions are performed, and for each measurement, about 5300 copies of the state are detected.
The entanglement criterion of Eq. (<ref>) is calculated from the randomized measurement data with the error bars obtained by repeating the whole process ten times. From the results in Fig. <ref>(a), we see that for 0≤ g≤ 0.2 and 0.7≤ g≤ 1, the criterion in Eq. (<ref>) is violated, while the criterion A_3-3≤ 0 is not. Clearly, Eq. (<ref>) improves on the previous one.
Note that the sector length A_k can also be expressed in terms
of the coefficients α_ijk, and then compared with
the randomized measurements. Resorting to the standard
quantum state tomography process, we obtain the density matrix of the GHZ state ρ_GHZ^exp and W state ρ_W^exp, respectively.
The values of the criterion of Eq. (<ref>) are calculated from the state ρ(g)=gρ_GHZ^exp+(1-g)ρ_W^exp and plotted as the dashed red lines in Fig. <ref>(a) and (b).
In contrast, for the ideal states, we have (A_1, A_2, A_3)=((1-g)^2/3, 8g^2-8g+3, 4g^2+11(1-g)^2/3), and the theoretical values of the criteria are shown as the solid red lines in Fig. <ref>.
We see that the results deduced from randomized measurements and from the coefficients α_ijk are approximately identical, providing evidence for the correct implementation of the randomized measurements. In the region 0.08≤ g≤ 0.24 and 0.67≤ g≤ 0.88, where the criterion A_3-3≤ 0 fails, we detect genuinely multi-partite entanglement. Furthermore, from
Fig. <ref>(b), we see that our criterion still works for g≤ 0.24 in the violet color region where the states have no three-tangle and also for g≥0.67 in the light salmon region where they exhibit no squared concurrence.
§.§ Results for the chessboard state
The experimentally prepared chessboard state ρ_ch^exp is reconstructed using the maximum-likelihood algorithm. Due to imperfections, when no white noise is added, the minimal eigenvalue of the partially transposed (PT) density matrix is -0.0133, such that the state is not PPT and probably not bound entangled. To remove these negative eigenvalues, we introduce different levels of white noise between p=0 and p=0.22 in the experiment, and plot the minimum PT eigenvalue and the violation of the entanglement criterion in Eq. (<ref>) in Fig. <ref>. In particular, for the state with noise level p=0.1291, the minimum PT eigenvalue equals 0.0026±0.0009 and the fidelity between the experimentally prepared state ρ_ch^exp and the noisy chessboard state ρ_ch(p=0.1291) is given by F(ρ_ch,ρ_ch^exp)=tr(√(√(ρ_ch)ρ_ch^exp√(ρ_ch)))=0.9893± 0.0012.
Next, we show that the state is entangled by using the tool of the second and fourth moments. For the state under consideration at p=0.1291, the second moment is given by ℛ^(2)_AB=0.2355±0.0015, and the fourth moment by ℛ^(4)_AB=0.0259±0.0003, while for separable states, the lower bound on the fourth moment is given by 0.0277 for ℛ^(2)_AB=0.2355 when performing the optimization program in Eq. (<ref>). We see that the experimental value 0.0259 is smaller than the lower bound 0.0277 and violates it with 6 standard deviations. Therefore, we experimentally prepared a 3×3 bound entangled state with the photonic platform and analyzed its entanglement property via the second and fourth moments
successfully.
§ CONCLUSION
We experimentally produced a variety of genuinely entangled photonic states consisting of entangled photon pairs amended with path degrees of freedom and characterized them using methods based on locally randomized measurements. First, we showed how to generate genuinely entangled states of three parties and verified them using entanglement criteria based only on the second moments of the randomized measurements. The latter enabled the verification of multipartite entanglement in regimes where well-known measures of multipartite entanglement, i.e., the three-tangle or the squared concurrence, are zero. Furthermore, we demonstrated the production of weakly bound entangled chessboard states of two qutrits and used entanglement criteria based on the second and fourth moments of the taken randomized measurements to analyze the produced states. As a result, bound entangled states with mixed-state fidelities beyond 98% were successfully produced and verified.
Our work demonstrates the outstanding control of quantum states
in photonic setups and presents an efficient way for preparing a
low-rank bound entangled state. By incorporating appropriate white
noise, the setup demonstrates increased robustness against
transitioning into the free entangled region. Compared with several previous experiments, the precise control allowed us to directly verify
bipartite bound entanglement in the minimal case of a 3×3 system,
without resorting to the various forms of bound entanglement in higher dimensions or in multiparticle systems. This will facilitate further exploration of interesting entanglement effects in
experiments.
§ ACKNOWLEDGEMENTS
We thank Xiao-Dong Yu for discussions. The work in USTC is supported by the National Natural Science Foundation of China (Nos. 11821404, 11734015, 62075208), the Fundamental Research Funds for the Central Universities (Nos. WK2030000061, YD2030002015), and the Innovation Program for Quantum Science and Technology (No. 2021ZD0301604). Y.Z. is support by the Major Key Project of PCL. S.I. and O.G. are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No.
16KIS1618K). S.I. acknowledges the support from the DAAD. N.W. acknowledges support by the QuantERA project QuICHE via the German Ministry of Education and Research (BMBF
Grant No. 16KIS1119K).
§ APPENDIX A: EXPERIMENTAL DETAILS ON THE PREPARATION OF THE GHZ-W MIXED STATES
In our experiment, the GHZ-W mixed states are prepared using the setup shown in Fig. <ref>, and the switch between the GHZ state and W state is realized by engineering the polarization-entangled photon source (EPS), and the subsequent unitary transformations constituted by Beam Displacers (BDs) and the Half-Wave Plates (HWPs). First, for the GHZ state, a polarization-entangled state |ψ_s⟩=1/√(2)(|HH⟩+|VV⟩)|l⟩ is generated through the type-I Spontaneous Parametric Down-Conversion (SPDC) process, and |u⟩ (|l⟩) in Fig. <ref> represents the path u (path l). Then, BD1 makes the vertically polarized part of the light passes through directly to path l, while the horizontal component passes with a 4 mm deviation to path u. That is to say, the BD1 performs as a CNOT gate with the polarizations as the controlled qubit and the path as the target qubit. When we set the angles of the half-wave plates H4∼H5 as 0^∘ and H6∼H7 as 45^∘, we get |ψ_s⟩→1/√(2)(|HH⟩|u⟩+|VV⟩|l⟩). By encoding the H (u) and V (l) to the logic qubit 0 and 1, we prepare the system into the three qubit GHZ state |GHZ⟩=1/√(2)(|000⟩+|111⟩).
When it comes to the W state, the EPS is tuned to the state |ψ_s⟩=1/√(3)|VH⟩|l⟩+√(2/3)|HV⟩|l⟩ by rotating the polarization direction of the pump beam to |ψ_p⟩=1/√(3)|H⟩+√(2/3)|V⟩ and performing a bit-flip operation on one photon of each pair generated in the SPDC process. Now the angle of H4 is placed at -67.4^∘ and the one of H5 at 45^∘ to transform the state |V⟩|l⟩ to 1/√(2)(|V⟩|u⟩+|H⟩|l⟩), and |ψ_s⟩→1/√(3)|VH⟩|u⟩+1/√(3)|H⟩|V⟩|u⟩+1/√(3)|H⟩|H⟩|l⟩. With re-encoding, the W state |W⟩=1/√(3)(|100⟩+|010⟩+|001⟩) is generated.
At last, various states ρ(g)=g|GHZ⟩⟨GHZ|+(1-g)|W⟩⟨W| are generated by randomly switching the settings of the setup to produce state |GHZ⟩ or |W⟩, with probabilities g and 1-g, respectively.
In the measurement stage, the combination of a Quarter-Wave Plate (QWP), an HWP, and a Polarization Beam Splitter (PBS) enables the polarization state measurement in an arbitrary basis. Thus, the two polarization encoded qubits are analyzed with the devices boxed as parts (a) and (b), respectively. Here BD3 combined with H8 performs as a PBS with only one output port, so we must rotate Q2 and H2 twice to realize the projective measurements {U|0⟩⟨0|U^†, U|1⟩⟨1|U^†}. The third qubit, i.e., the path qubit, is transformed to the polarization degree of freedom, and then analyzed by wave plates Q3, H3, and PBS2 in the boxed part (c).
To facilitate the massive randomized measurements, i.e., 40,000 sets for each state ρ(g) in our experiment, the QWPs Q1∼Q3 and HWPs H1∼H3 are all mounted in Motorized Rotation Mounts (Newport, CONEX-PR50CC). For each local measurement setting drawn uniformly at random, a classical computer inputs the corresponding settings of the QWP and HWP and controls the wave plates automatically rotated to the target angles to perform the measurement. This entire process is executed via a LabVIEW program.
Here the quality of the state ρ(g) depends heavily on the GHZ state and the W state, so we give the benchmarks of these two states through quantum state tomography. We estimate the fidelities of the experimentally prepared state and the ideal state F(ρ^ideal,ρ^exp)=(tr√(√(ρ^ideal)ρ^exp√(ρ^ideal))) are 0.9919 and 0.9890 for GHZ state and W state, respectively. The real parts of the experimentally prepared state are shown in Fig. <ref>. All fidelities of the GHZ-W mixed states shown as the dots in Fig. <ref> are above 0.9836, which shows the good performance of the setup. The error bars are of the size of about 0.0001, which is obtained with Monte Carlo simulations by sampling the experimentally collected data.
§ APPENDIX B: QUANTUM STATE TOMOGRAPHY FOR THE CHESSBOARD STATE
As the red points in Fig. <ref> show, various noisy chessboard states ρ_ch(p) are prepared to study their entanglement properties. Here, the level of white noise p is estimated by comparing the total coincidence counts with the counts recorded when no white noise source is added, i.e., when the LED lights in Fig. <ref> are turned off. For instance, if we record a total of photonic counts N_p for state ρ_ch(p) and N_0 for state with no added white noise, then p is set to the value of 1-N_0/N_p.
To characterize the chessboard state that we prepared experimentally, we perform a standard quantum state tomography process, where the 81 vectors
|u_i⟩⊗|u_j⟩ (i, j=0,1,...8) are measured. The detailed forms of the kets |u_i⟩ are given by
|u_0⟩=|0⟩;
|u_1⟩=|1⟩;
|u_2⟩=|2⟩;
|u_3⟩=(|0⟩+|1⟩)/√(2);
|u_4⟩=(|0⟩+i|1⟩)/√(2);
|u_5⟩=(|1⟩+|2⟩)/√(2);
|u_6⟩=(|1⟩+i|2⟩)/√(2);
|u_7⟩=(|0⟩+|2⟩)/√(2);
|u_8⟩=(|0⟩+i|2⟩)/√(2).
Each basis is realized with the settings in Tab. <ref>.
We get the fidelities 0.9835±0.0005, 0.9838±0.0006, 0.9853±0.0005, 0.9893±0.0012, 0.9911±0.0005, 0.9930±0.0003 for states of p=0,0.052,0.0991,0.1291,0.1573,0.2158, respectively. The error bars are estimated with Monte Carlo simulations by sampling the experimental data 100 times.
§ APPENDIX C: ENTANGLEMENT DETECTION FOR THREE-QUBIT STATES WITH RANDOMIZED MEASUREMENTS
In our work, we use the criterion based on the second moment,
ℛ_S^(2) = ∫dU_1 …dU_n ⟨ U_1 τ_1 U_1^†⊗…⊗ U_n τ_n U_n^†⟩_ρ^2,
to study the entanglement property of the three-qubit state ρ(g), where τ_i=σ_z for i∈ S and τ_i=𝕀 for i∉ S.
As each observable τ_i is measured in the standard basis |0⟩ and |1⟩, we will sort the detection outcomes into eight categories corresponding to the eight basis states M_ABC={|000⟩⟨000|, |001⟩⟨001|, |010⟩⟨010|, |011⟩⟨011|,
|100⟩⟨100|, |101⟩⟨101|, |110⟩⟨110|, |111⟩⟨111|}, respectively. In every single trial, instead of preparing the state ρ_U=Uρ(g) U^† and then making measurements in the standard basis, we directly perform the measurements U^† M_ABCU on the state ρ(g) in our experiment, where U=U_A⊗ U_B⊗ U_C. These two ways are equivalent to each other.
For each choice of local unitaries, we prepare N copies of the state to estimate the probability distributions of the outcomes, and a total of M random unitaries are applied to form the average over local unitaries.
We note that, given the observables τ_i we choose, there are only two possible outcomes X_i∈{1, -1} for τ_ABC=τ_1⊗τ_2⊗τ_3. We define the probability for each outcome as p_i, which can be obtained by summing up the probabilities that correspond to the same measurement outcomes. As an example, consider the moment ℛ_A^(2); then τ_1=σ_z, τ_2=𝕀, and τ_3=𝕀, and the outcomes assigned to the eight basis states M_ABC are 1,1,1,1,-1,-1,-1,-1, respectively. We get the probabilities p_1=
p_000+p_001+p_010+p_011 and p_2=p_100+p_101+p_110+p_111, where {p_1, p_2} represents the probability distribution for outcomes {1, -1}, and p_000=⟨ 000|ρ_U|000⟩ etc.
Next, we need to construct the unbiased estimator for Tr(ρ Uτ_ABCU^†)^2. For N independent trials, we get the unbiased estimator p_i=N_i/N so that 𝔼[p_i]=p_i, where N_i are the number of events with measurement outcome X_i. Also, we can find the unbiased estimators p_i^2 and p_ip_j such that 𝔼[p_i^2]=p_i^2 and 𝔼[p_ip_j]=p_ip_j:
p_i^2=[N(p_i)^2-p_i]/(N-1)
p_ip_j=[N/(N-1)] p_ip_j.
We get the unbiased estimator for E^2=Tr(ρ_Uτ_ABC)^2 via
E^2=∑_i X_i^2p_i^2+2∑_i<jX_iX_jp_ip_j.
For each of the M local unitaries and the observable τ_ABC, we have
E^2=[N(p_1)^2-p_1]/(N-1)+[N(p_2)^2-p_2]/(N-1)-[2N/(N-1)]p_1p_2.
After averaging over all the randomly chosen local unitaries, we get the estimate of the moments R_S^(2) as
R_S^(2)=1/M∑_i^M E^2
Finally, we combine the second estimates for the same size |S|=k to get the k-sector length of the state and plug it into the criterion to perform the entanglement analysis.
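As an illustration of this post-processing chain, the following Python sketch (toy counts only; the function names are ours, not those of the actual analysis code) implements the unbiased single-setting estimator and the average over the M random settings.

import numpy as np

def unbiased_E2(counts):
    """Unbiased estimator of <tau_ABC>^2 for a ±1-valued observable from raw
    counts: counts = (N_plus, N_minus) for the outcomes +1 and -1."""
    N = sum(counts)
    p1, p2 = counts[0] / N, counts[1] / N
    p1_sq = (N * p1**2 - p1) / (N - 1)
    p2_sq = (N * p2**2 - p2) / (N - 1)
    p1p2 = N / (N - 1) * p1 * p2
    return p1_sq + p2_sq - 2 * p1p2

def second_moment(count_list):
    """Average the single-setting estimators over the M random unitaries."""
    return np.mean([unbiased_E2(c) for c in count_list])

# toy example: M = 3 settings with N = 5300 shots each (hypothetical counts)
print(second_moment([(4000, 1300), (2650, 2650), (1200, 4100)]))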
|
http://arxiv.org/abs/2307.04584v1 | 20230710142620 | Porous CrO$_2$: a ferromagnetic half-metallic member in sparse hollandite oxide family | [
"Sujoy Datta"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected], [email protected]
Department of Physics, University of Toronto, 60 Saint George Street, Toronto, Ontario M5S 1A7, Canada
A stable polymorph of CrO_2 is predicted using the PBE+U method. The porous material is isostructural with α-MnO_2, making it the second transition metal oxide in the sparse hollandite group of materials. However, unlike the anti-ferromagnetic semiconducting character of α-MnO_2, it is found to be a ferromagnetic half-metal. At the Fermi level, the hole pocket has ample contribution from the O-2p orbital, while the electron pocket is mostly contributed by Cr-3d_xy and Cr-3d_x^2-y^2. A combination of negative charge transfer through orbital mixing and an extended anti-bonding state near the Fermi level is responsible for the half-metallic ferromagnetic character of the structure. A comparative study of the rutile and hollandite CrO_2 and hollandite MnO_2 structures delineates the interplay between the structural, electronic and magnetic properties. The material shows a robust magnetic character under hydrostatic pressure, and the band topology is conserved under uniaxial strain. Moderate magneto-crystalline anisotropy is observed, and it shows a correspondence with the anisotropy of the elastic constants.
Porous CrO_2: a ferromagnetic half-metallic member in sparse hollandite oxide family
Sujoy Datta
August 12, 2023
====================================================================================
§ INTRODUCTION
In the field of materials science, studies on transition metal oxides (TMOs) are prolific. However, TMOs never cease to amaze researchers with their versatile character. Moreover, even the burgeoning development of first-principles electronic-structure schemes faces the challenge of describing the underlying physics of TMOs. Along with dynamical mean-field theory (DMFT), the inclusion of Hubbard parameters is among the most successful approaches.
Though TMOs are found to form solid-state structures of various symmetries, some structures are known for their uniqueness. The porous layered hollandite structure of α-MnO_2 is such an example <cit.>. The 2×2 tunnel within this structure can accommodate additional atomic species (e.g., Pb^2+, B^2+, K^+, etc.) during synthesis <cit.>. Such additional species can tune the electronic and magnetic properties as well. A good number of publications in this field are a testimonial to the importance of studies on this structure <cit.>. However, to date, there is no report of any other TMO exhibiting a similar crystal structure.
In the transition metal family, chromium is one of a kind. Even at room temperature, it is found to be antiferromagnetic, making it the only single-element antiferromagnetic (AFM) solid <cit.>. It shows a valency in the range of +1 to +6 and oxidizes to form CrO, Cr_2O_3, CrO_2 and CrO_5. Experimentally, two phases of chromium dioxide have been characterised, the rutile type (namely α-, or r-) and the orthorhombic CaCl_2 type (namely β-, or o-). There is a second-order phase transition observed from the rutile to the CaCl_2-type phase at 12-17 GPa <cit.>. Some other dynamically stable phases have also been proposed theoretically, though a hollandite structure has not been predicted yet <cit.>. Also, the porous nature, prone to accommodating other ions, may hinder the synthesis of the material.
Transition metals are characterised by the electrons in their d orbitals. In its 4+ valence state, the Cr ion has two 3d electrons and a vacant 4s shell. These two 3d electrons should reside in two t_2g orbitals. Strong on-site correlation between these two electrons should result in a Mott-type insulating nature; however, this is not the case. Rutile-type chromium dioxide is a ferromagnetic half-metal in its ground state <cit.>. An explanation of the half-metallic ferromagnetic character has been provided in terms of the double-exchange model <cit.>. This places r-CrO_2 among the negative charge-transfer materials in the Zaanen-Sawatzky-Allen (ZSA) scheme <cit.>. The O-2p bands crossing the Fermi energy work as a charge (electron/hole) centre to nullify the strong electron-electron correlation between the Cr-3d electrons through the hybridisation. Recently, the topological character of the rutile and orthorhombic types has been investigated. It is shown that type-I and type-II Weyl fermions can emerge in these phases of chromium dioxide <cit.>.
With technological advancement, energy storage is a bottleneck yet to be cleared. Porous structures are efficacious candidates for storing lithium, sodium, zinc or other ions prospective for battery materials <cit.>. Both chromium and oxygen are abundant in nature, so h-CrO_2 can be a good alternative in this field. Furthermore, half-metallic ferromagnets play a critical role in modern spintronic devices, from magnetic sensors and spin valves to computer hardware components such as magneto-resistive random-access memory (MRAM), read heads of magnetic hard drives, etc. <cit.>.
In view of the rare ferromagnetic nature of CrO_2 in the TMO family, an investigation of the structures and their local bonding characteristics is indispensable. The aim of this article is to use the latest reliable theoretical approaches to demonstrate the mechanical, electronic and magnetic character of the proposed hollandite polymorph. To analyse and interpret the complicated electronic structures of TMOs, the introduction of Hubbard terms for both on-site and off-site electron-electron interactions has proved to be an efficacious tool <cit.>.
A detailed side-by-side investigation of the three materials, the already synthesised r-CrO_2 and h-MnO_2 and the proposed h-CrO_2, can bridge the gap in the theoretical understanding of such materials with rare structures as well as electronic and magnetic properties. Such trustworthy theoretical predictions will give experimentalists a better handle on choosing their materials for a desired application, out of a plethora of possibilities.
§ COMPUTATIONAL DETAILS
We have used the Vienna Ab-initio Simulation Package (VASP) for the density functional theory (DFT) calculations <cit.>. Projector augmented wave (PAW) pseudo-potentials with the Perdew-Burke-Ernzerhof (PBE) <cit.> and PBE for solids (PBEsol) <cit.> generalised gradient approximation (GGA) exchange-correlation (xc) functionals have been used. The optimised structures are found through ionic and volume relaxations using the GGA and GGA+U methods, starting from different magnetic configurations. We have set the threshold of the maximum force as 10^-5 Ry./atom and the pressure threshold as 10^-5 kbar/cell. The convergence criterion for the energy and charge densities has been set as 10^-8 Ry. We have chosen a kinetic-energy cut-off of 520 eV. Fine reciprocal-space grids of dimensions 8×8×8, 5×5×8, 5×5×8 are used for the hollandite, rutile and orthorhombic structures, respectively.
The electron configurations of Cr and O have been taken as [Ar]4s^2 3d^4 and [He]2s^2 2p^4. As the 3d electrons of transition metals correlate more strongly than GGA can capture, Hubbard U and Hund J terms have been used for a proper description of the strong electron-electron correlation. While U provides the intra-orbital Coulomb interaction, U-J takes care of the inter-orbital Coulomb interaction between electrons.
Over the years, several non-empirical approaches have been proposed to estimate the Hubbard parameters, such as constrained DFT, the constrained random phase approximation (cDFT/cRPA) and the linear-response formulation <cit.>. These terms for r-CrO_2 have been calculated by the constrained screening method by Korotin <cit.>. We have used these values for CrO_2: U=3.0 and J=0.87. Using self-consistent linear-response theory within the framework of DFPT, an evaluation of the Hubbard parameters has been introduced recently <cit.>. Hitherto, it has successfully predicted the electronic structure of diverse materials using on-site U and inter-site V parameters <cit.>. For the calculation of U and V, the Quantum Espresso (QE) package is used <cit.>. For MnO_2, U=5.87 is used <cit.>.
The Vesta package has been utilised to simulate the X-ray diffraction (XRD) pattern for Cu Kα radiation <cit.>. For pre- and post-processing, Vaspkit and ElATools are utilised <cit.>. The chemical bonding analysis has been done using the Lobster package for PAW <cit.>.
§ STRUCTURE AND MECHANICAL PROPERTIES
The hollandite <cit.> CrO_2 follows a body-centred tetragonal lattice having I4/m (No. 87) crystal symmetry. The optimised lattice constants are found to be a=b=9.992Å and c=2.702Å from a spin-unpolarised PBE calculation. As depicted in Fig.<ref> (a-c), the Cr atoms coordinate with six neighbouring oxygen atoms, forming edge-sharing CrO_6 octahedra. Such an MO_6-type structure is also found in the rutile phase and is a common building block for many covalently bonded hard materials <cit.>. A 2×2 tunnel is formed in between the CrO_6 octahedra. Including the Hubbard terms U=3.0 and J=0.87 with the PBE exchange-correlation functional, the optimised lattice parameters are calculated as a=b=9.880Å and c=2.978Å <cit.>. So, there is a relative underestimate of the volume by 7.23% by the unpolarised calculation. Using the same Hubbard parameters and PBEsol, the lattice constants are found as a=b=9.767Å, c=2.928Å.
In Fig.<ref>(a-c), the conventional unit cell of the h-CrO_2 crystal, containing eight formula units, is shown from different angles. The primitive cell used for the electronic structure calculations is presented in Fig.<ref>(d). The primitive cell contains four formula units. Now, for the experimental identification of any crystal structure, the XRD spectrum is the key. Here in Fig.<ref>(e), we provide the XRD spectrum for Cu Kα radiation. Reflections from the (1,1,0), (1,1,1), (2,2,1) and (1,0,-1) crystallographic planes create the most prominent sharp peaks of the XRD, which represent the signature character of the particular structure.
Experimentally, two polymorphs of CrO_2 have been prepared so far, the rutile-type α- (or r-) and the orthorhombic β- (or o-) phase. The lattice constants for both of these structures predicted using PBEsol+U agree with the experimental findings (see <ref>).
The hollandite structure of MnO_2 is known as α-MnO_2. The calculated lattice constants for the conventional unit cell using PBEsol+U, a=b=9.787Å and c=2.903Å, match well with the reported experimental values a=b=9.750Å and c=2.861Å <cit.>.
Elastic properties: Within the elastic limit, according to Hooke's law, the stress (σ_i) and the external strain (e_j) follow a linear relationship:
σ_i = ∑_i,j=1^6 C_ij e_j
, where C_ij is the elastic stiffness tensor. The orthorhombic system has nine independent components in the 6×6 matrix. The rutile structure, being part of the type-I tetragonal system, possesses six independent components. The hollandite structure, which falls under the type-II tetragonal class, possesses seven independent components. Besides the stress-strain relationship, the elastic tensor can also be calculated from the total energy (E) using the harmonic approximation as:
C_ij=1/V_0∂^2 E/∂ e_i ∂ e_j
, where, V_0 is the volume without any stress. The values of C_ij are tabulated in Table <ref>.
While all the other structures are experimentally reported, h-CrO_2 has not been synthesised yet. Hence, a mechanical stability check is indispensable. Following the Born stability criteria extended to different crystal classes, the necessary and sufficient conditions for mechanical stability (Eq. <ref>), which are satisfied by all the structures, are <cit.>:
Orthorhombic: C_11, C_44, C_55, C_66 > 0; C_11C_22 > C_12^2
C_11C_22C_33 + 2C_12C_13C_23 - C_11C_23^2 - C_22C_13^2 - C_33C_12^2 > 0
Tetragonal-I: C_11> |C_12|, C_44, C_66 > 0
2C_13^2 < C_33(C_11+C_12)
Tetragonal-II: C_11> |C_12|; C_44 > 0
2C_13^2 < C_33(C_11+C_12); 2C_16^2 < C_66(C_11-C_12)
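For convenience, such a check is trivial to script. The Python sketch below (hypothetical numbers only; the actual C_ij are those listed in Table <ref>) tests the type-II tetragonal conditions relevant for the hollandite structures.

def stable_tetragonal_II(C):
    """Necessary and sufficient Born criteria for a type-II tetragonal crystal,
    as listed above; C is a dict of elastic constants in GPa."""
    return (C["C11"] > abs(C["C12"])
            and C["C44"] > 0
            and 2 * C["C13"]**2 < C["C33"] * (C["C11"] + C["C12"])
            and 2 * C["C16"]**2 < C["C66"] * (C["C11"] - C["C12"]))

# hypothetical illustrative numbers -- replace with the tabulated C_ij
print(stable_tetragonal_II(
    {"C11": 250, "C12": 80, "C13": 90, "C33": 300, "C44": 60, "C66": 70, "C16": 10}))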
Besides the elastic stability of the proposed material, a test of its dynamical stability is necessary. As the vibrational modes (phonons) arise from the relative motion of the ions, for a dynamically stable structure there should be no negative phonon mode. The phonon energy dispersion in Fig.<ref>(a) delineates the dynamical stability. Chromium ions (Cr^4+) are much heavier than oxygen ions (O^2-); therefore, the lower-energy phonon modes are populated by the contribution from Cr. The higher-frequency optical modes are mostly contributed by the motion of the oxygen ions.
The resistance against external compression is reflected in the bulk modulus of a material. Using the Voigt-Reuss-Hill methodology, the bulk moduli (B_H) are calculated <cit.>. A separate set of calculations has been carried out to find the variation of the energy with changing volume. The equation of state (EOS) fittings of those data provide another set of bulk moduli. In Table-<ref>, B_V represents the bulk moduli calculated using the Vinet EOS <cit.>. The calculated B_V using PBEsol+U for r-CrO_2 matches the experimental value almost exactly <cit.>. From the literature, the theoretically predicted values of the bulk modulus for r-CrO_2 are found as 261 GPa <cit.>, 282 GPa <cit.>, 225 GPa <cit.>, 238 GPa <cit.>. Now, there is a mismatch between the theoretical and experimental bulk moduli of the o-CrO_2 structure (216.75 vs. 181 GPa). Maddox et al. have suspected that, as the orthorhombic phase is not found at ambient pressure, the zero-pressure volume cannot be measured experimentally <cit.>. Therefore, an effect of inadequate data may enter the EOS prediction for this phase.
Along with the Young moduli (Y), the shear moduli (G), Poisson ratios (ν) and Pugh ratios (B_H/G) are calculated using the same Voigt-Reuss-Hill methodology and are tabulated in Table-<ref>. Relative to the orthorhombic and rutile structures, the hollandite structure has a large vacuum present in between the CrO_6 polyhedra, giving more space to accommodate the deformation produced by external strains. As a result, the B, Y and G values of the h-CrO_2 phase are much lower than those of the other polymorphs. The 2×2 tunnel is also present in h-MnO_2, so the values of its elastic moduli come out to be almost similar to those of h-CrO_2.
As the elastic constants of the tetragonal class of materials follow the relation C_11=C_22, the variation of the bulk modulus in the XY plane for both h-CrO_2 and h-MnO_2 is isotropic in nature, whereas in the ZX or ZY plane it shows an elliptical variation, as depicted in Fig.<ref> (a). The eccentricity of the variation in the ZY plane is lower for h-CrO_2, while the bulk modulus in the XY plane is higher for the same. In contrast to h-CrO_2, the YZ plot of the bulk modulus of h-MnO_2 shows a dip at Z=0. The variation of the compressibility in Fig.<ref>(c) confirms the fact that a higher bulk modulus yields less compressibility.
§ ELECTRONIC STRUCTURE
The electronic structure and magnetic behaviour of CrO_2 has always been a curious case. While it is more likely for a TMO to be found with an anti-ferromagnetic character, CrO_2 is found to be ferromagnetic in its ground state. For the anti-ferromagnetic spin configuration, we have taken three different arrangements, as depicted in Fig.<ref>(a-c). The EOS plot confirms the ferromagnetic ground state of h-CrO_2. The AFM-1 state is 30 meV/atom higher in energy than the FM configuration. The AFM-1 and AFM-2 states are very close in energy and show a crossover around ambient pressure. However, the ferromagnetic ground state remains stable at ambient pressure, which indicates the robustness of the magnetic response. h-MnO_2, which has a similar structure, is found to be in an anti-ferromagnetic ground state with the spin distribution as in Fig.<ref>(c) <cit.>. Interestingly, for h-CrO_2 this AFM-3 configuration possesses the highest energy.
The h-CrO_2 is a ferromagnetic material where the nature of the bands for up and down spins is quite different. The metallic nature of one spin (up) channel is contrasted by the semiconducting nature of the other spin (down) channel, making h-CrO_2 a half-metal. The electronic band dispersion and the density of states (DOS) near the Fermi level (E_F) are depicted in Fig.<ref>. The majority spin bands cross E_F, while the down spin channel shows a gap. The half-metallic bandgap is found to be 2.9 eV. There is no state above -0.85 eV and below 2.07 eV relative to E_F in the minority spin channel. The half-metallic gap is indirect, on the Z_0-M line of the irreducible Brillouin zone (BZ) edge (see Fig.<ref>). For the majority spin channel, in the conduction band, the first four bands are separated from the other conduction bands. The pseudo-gap is 0.88 eV, from 1.56 eV at Γ to 2.38 eV at M w.r.t. E_F. An interesting feature is that the eigenvalues for all the bands at Z are equal to those at Z_0, so the diagonally opposite points of the upper surface of the irreducible BZ are equivalent.
The DOS for the majority spin channel is shifted lower than that for the minority spin channel. The shifting of the DOS is a well-known sign of the ferromagnetic character of a material, and the dissimilarity of the DOS represents the ferromagnetic strength. For h-CrO_2, the dissimilarity is vividly noticeable, so the obtained magnetic moment of 2μ_B per formula unit is quite justified.
Electronic bonding analysis can provide more insight into the ferromagnetic character of h-CrO_2 <cit.>. For a localised basis set, the atomic orbital overlap is straightforward to calculate, so the overlap-population-weighted DOS (crystal orbital overlap population, COOP) can provide information on the nature of bonding, anti-bonding, or non-bonding interactions. For DFT calculations involving a plane-wave basis, the crystal orbital Hamiltonian population (COHP) is a method that partitions the band energies into pairwise atomic orbital interactions, facilitating a similar identification. There is no anti-bonding state below E_F for the down spin, whereas, for the up spin, the anti-bonding states start at -1.78 eV. Such an extended anti-bonding state is generated from the Cr-3d and O-2p interaction, which becomes clear from the orbital-weighted bands in Fig.<ref>. The bonding/anti-bonding character is similar in the rutile phase as well (see Fig.<ref>).
To get a closer look at the orbital contributions to the full spaghetti of bands, the orbital-weighted bands are plotted in Fig.<ref>. The lowest-lying bands, 40 eV below E_F, originate from the s and p orbitals of Cr, and the majority spin bands are lower in energy than the minority spin bands. The O-2s bands are also separated, below 20 eV. Near E_F, in the valence band, most of the contribution comes from the O-p orbitals. For the majority spin channel, the Cr-3d_xy bands are separately visible. There is a small electron pocket at M, which is visible along the X-M-Γ and Z_0-M lines. The pocket is contributed mostly by Cr-3d_xy and Cr-3d_x^2-y^2. Also, there is a hole pocket along X-P with most of the contribution coming from the hybridised Cr-3d_xz and O-2p_x orbitals. The band creating the hole pocket is flatter than the band responsible for the electron pocket, indicating heavier holes than electrons at E_F.
From the partial DOS plots for Cr and O we have already noticed that the conduction bands are formed by Cr orbitals while the valence bands come mostly from O orbitals. The Cr and O orbital mixing is more pronounced for the up spin. Rutile r-CrO_2 is understood to be a negative charge-transfer gap material <cit.>. Korotin et al. have shown that an almost pure O-2p band crosses the Fermi energy and acts as an electron/hole reservoir, causing fractional occupation of the Cr-3d band at E_F. In Fig.<ref>(a) we present the oxygen-weighted bands for both the rutile and the hollandite structure. In both cases there is a Cr-3d band below E_F with contributions mostly from the 3d_xy orbital. However, in the hollandite structure it is not as clearly a separate unhybridised band, so the Cr-3d_xy orbital in h-CrO_2 is not as localised as in its rutile counterpart. In r-CrO_2, in the vicinity of the BZ centre Γ, the band crossing E_F retains its pure O-2p character (brown dots), though it hybridizes with Cr-3d near Z; the hybridisation of this band is therefore highly anisotropic in r-CrO_2. In h-CrO_2 there are two bands responsible for the metallic character: the band crossing E_F around M has almost pure Cr-3d character, and the band crossing along X-P is a hybridised Cr-3d_xz and O-2p_x band. In both polymorphs the band mixing between Cr-3d and O-2p is responsible for the half-metallic ferromagnetic character, though the nature of the hybridisation is different.
Along Z_0-M near E_F two bands almost touch each other, one with a strong Cr-3d_xz contribution and the other with an O-2p_z contribution. This makes us curious whether uniaxial pressure along the z-axis can bring about a fundamental change of the band topology (see Fig.<ref>(b)). With 1% uniaxial pressure the bands come very close, though a gap of the order of meV remains. Under higher pressure the bands approach each other at two points but never cross, so the band topology is always conserved. A more detailed study of the topological aspects of this material is left for the future.
Magneto-crystalline anisotropy: h-CrO_2 possesses a rare porous hollandite crystal structure and belongs to a scarce family of ferromagnetic TMOs. It is therefore interesting to see how the crystal structure tunes the direction of the magnetic moment in this system. Magneto-crystalline anisotropy is the variation of the internal energy of a material with the direction of its magnetization. Since the orbital degrees of freedom are strongly coupled to the crystal structure (lattice), changing the orientation of the spin is resisted through the spin-orbit coupling, giving rise to the MAE <cit.>. The spatial variation of the MAE is presented in Fig.<ref>. The three-dimensional plot shows isotropy in the X-Y plane (azimuthal independence). We note that the bulk modulus of this structure also shows azimuthal symmetry, a feature of its layered structure. Tilting the spin away from the Z-axis (the easy axis) by an angle θ, however, requires external energy. The variation of the MAE with θ shows that the hardest axis lies in the X-Y plane, with a maximum value of MAE = 394.96 μeV.
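The θ dependence described above is commonly parameterised for a uniaxial system as E(θ) = K_1 sin²θ + K_2 sin⁴θ; the sketch below shows how the anisotropy constants could be extracted from a set of total-energy differences computed for different spin orientations. The MAE values used here are placeholders consistent with an easy Z-axis and a hard-plane MAE of roughly 395 μeV, not the calculated data set.

import numpy as np
from scipy.optimize import curve_fit

def mae_model(theta_deg, K1, K2):
    # Uniaxial magnetocrystalline anisotropy energy (same units as K1, K2).
    s2 = np.sin(np.radians(theta_deg))**2
    return K1 * s2 + K2 * s2**2

theta = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])      # tilt from the easy Z-axis (deg)
mae = np.array([0.0, 27.0, 100.0, 199.0, 297.0, 369.0, 395.0])   # placeholder MAE (micro-eV)

(K1, K2), _ = curve_fit(mae_model, theta, mae, p0=(350.0, 50.0))
print(f"K1 = {K1:.1f} micro-eV, K2 = {K2:.1f} micro-eV, "
      f"MAE(theta=90 deg) = {mae_model(90.0, K1, K2):.1f} micro-eV")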
§ CONCLUSION
The α-MnO_2 structure is one of a kind, with a 2×2 tunnel that can accommodate foreign elements and is useful in different applications. We have predicted another TMO with a similar structure, yet with a drastically different electronic and magnetic character. It even demonstrates mechanical and electronic behaviour contrasting with that of the other chromium dioxide polymorphs. A detailed side-by-side study reveals the underlying physics of this rich behaviour.
The system is mechanically and dynamically stable. It exhibits anisotropic elastic moduli and is less stiff than the similar h-MnO_2 structure. The lattice parameters and elastic constants calculated for hollandite h-MnO_2 and rutile r-CrO_2 are on par with the experimental values, demonstrating the reliability of the methodology employed for the proposed h-CrO_2 crystal.
h-CrO_2 is a half-metallic ferromagnet with a half-metallic bandgap of 2.9 eV. The ferromagnetic nature results from the strong hybridisation of the Cr-3d and O-2p electrons. The band crossing at the Fermi level is minimal, with only one electron pocket around M, composed mostly of Cr-3d_xy and Cr-3d_x^2-y^2 states, and one hole pocket along X-P with the largest contribution coming from hybridised Cr-3d_xz and O-2p_x states. The electron and hole pockets occur at different k-points, so direct electron-hole coupling is not possible. The prospect of phonon-mediated superconductivity, however, is yet to be investigated.
The crystal shows substantial magneto-crystalline anisotropy with the easy axis perpendicular to the plane of the structure (Z-axis) and a maximum MAE of 394.96 μeV, while the MAE is clearly independent of the azimuthal angle. Although two bands almost touch near the Fermi level, the band topology remains conserved under uniaxial pressure. Nevertheless, the transformation of the bands near the Fermi energy under modest uniaxial pressure may pave the way for further detailed study of the topological aspects of this material.
To sum up, we propose a new stable transition metal oxide with rich physical properties, which may further stimulate the interest of the physics community.
100
bystrom1950
A. Byström, A. M. Byström, The crystal structure of hollandite, the
related manganese oxide minerals, and α-mno2, Acta Crystallographica
3 (2) (1950) 146–154.
luo2010
J. Luo, H. Zhu, J. Liang, G. Rao, J. Li, Z. Du, Tuning magnetic properties of
α-mno2 nanotubes by k+ doping, The Journal of Physical Chemistry C
114 (19) (2010) 8782–8786.
li2007
L. Li, Y. Pan, L. Chen, G. Li, One-dimensional α-mno2: trapping
chemistry of tunnel structures, structural stability, and magnetic
transitions, Journal of Solid State Chemistry 180 (10) (2007) 2896–2904.
cockayne2012
E. Cockayne, L. Li, First-principles dft+ u studies of the atomic, electronic,
and magnetic structure of α-mno2 (cryptomelane), Chemical Physics
Letters 544 (2012) 53–58.
tseng2015
L.-T. Tseng, Y. Lu, H. M. Fan, Y. Wang, X. Luo, T. Liu, P. Munroe, S. Li,
J. Yi, Magnetic properties in α-mno2 doped with alkaline elements,
Scientific reports 5 (1) (2015) 1–8.
wang2009
Y. Wang, H. Liu, X. Sun, I. Zhitomirsky, Manganese dioxide–carbon nanotube
nanocomposites for electrodes of electrochemical supercapacitors, Scripta
Materialia 61 (11) (2009) 1079–1082.
marcus1998
P. Marcus, S. Qiu, V. Moruzzi, The mechanism of antiferromagnetism in chromium,
Journal of Physics: Condensed Matter 10 (29) (1998) 6541.
maddox2006
B. Maddox, C. Yoo, D. Kasinathan, W. Pickett, R. Scalettar, High-pressure
structure of half-metallic cr o 2, Physical Review B 73 (14) (2006) 144111.
kuznetsov2006
A. Y. Kuznetsov, J. De Almeida, L. Dubrovinsky, R. Ahuja, S. Kwon, I. Kantor,
A. Kantor, N. Guignot, High-pressure synthesis and physical properties of an
orthorhombic phase of chromium dioxide, Journal of applied physics 99 (5)
(2006) 053909.
bendaoud2019
H. Bendaoud, K. Obodo, B. Bouhafs, Predicted dynamically stable new phase for
cro2 compound: Dft+ u calculations, Computational Condensed Matter 21 (2019)
e00400.
kim2012
S. Kim, K. Kim, C.-J. Kang, B. Min, Pressure-induced phonon softenings and the
structural and magnetic transitions in cro2, Physical Review B 85 (9) (2012)
094106.
huang2018
S. Huang, X. Wu, J. Niu, S. Qin, Structural, magnetic and electronic properties
of cro 2 at multimegabar pressures, RSC advances 8 (43) (2018) 24561–24570.
schwarz1986
K. Schwarz, Cro2 predicted as a half-metallic ferromagnet, Journal of Physics
F: Metal Physics 16 (9) (1986) L211.
korotin1998
M. Korotin, V. Anisimov, D. Khomskii, G. Sawatzky, Cro2: A self-doped double
exchange ferromagnet, Physical Review Letters 80 (19) (1998) 4305.
katsnelson2008
M. Katsnelson, V. Y. Irkhin, L. Chioncel, A. Lichtenstein, R. A. de Groot,
Half-metallic ferromagnets: From band structure to many-body effects, Reviews
of Modern Physics 80 (2) (2008) 315.
kulatov1990
E. Kulatov, I. Mazin, Extended stoner factor calculations for the half-metallic
ferromagnets nimnsb and cro2, Journal of Physics: Condensed Matter 2 (2)
(1990) 343.
zaanen1985
J. Zaanen, G. Sawatzky, J. Allen, Band gaps and electronic structure of
transition-metal compounds, Physical review letters 55 (4) (1985) 418.
wang2018
R. Wang, Y. Jin, J. Zhao, Z. Chen, Y. Zhao, H. Xu, Ferromagnetic weyl fermions
in cro2, Physical Review B 97 (19) (2018) 195157.
tompsett2013
D. A. Tompsett, M. S. Islam, Electrochemistry of hollandite α-mno2:
Li-ion and na-ion insertion and li2o incorporation, Chemistry of Materials
25 (12) (2013) 2515–2526.
li2012
L. Li, C. Nan, J. Lu, Q. Peng, Y. Li, α-mno2 nanotubes: high surface
area and enhanced lithium battery properties, Chemical communications 48 (55)
(2012) 6945–6947.
attema2005
J. J. Attema, L. Chioncel, C. Fang, G. A. de Wijs, R. de Groot, Half-metals:
Challenges in spintronics and routes toward solutions, Local-Moment
Ferromagnets: Unique Properties for Modern Applications (2005) 199–216.
irkhin1994
V. Y. Irkhin, M. I. Katsnel'son, Half-metallic ferromagnets, Physics-Uspekhi
37 (7) (1994) 659.
yuasa2018
S. Yuasa, K. Hono, G. Hu, D. C. Worledge, Materials for spin-transfer-torque
magnetoresistive random-access memory, MRS Bulletin 43 (5) (2018) 352–357.
keen2002
D. A. Keen, Disordering phenomena in superionic conductors, Journal of Physics:
Condensed Matter 14 (32) (2002) R819.
himmetoglu2014
B. Himmetoglu, A. Floris, S. De Gironcoli, M. Cococcioni, Hubbard-corrected dft
energy functionals: The lda+ u description of correlated systems,
International Journal of Quantum Chemistry 114 (1) (2014) 14–49.
vasp
G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio
total-energy calculations using a plane-wave basis set, Physical review B
54 (16) (1996) 11169.
PBE
J. P. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made
simple [physical review letters 77, 3865 (1996)], Physical Review Letters 78
(1997) 1396–1396.
PBEsol
J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A.
Constantin, X. Zhou, K. Burke, Restoring the density-gradient expansion for
exchange in solids and surfaces, Physical review letters 100 (13) (2008)
136406.
csacsiouglu2011
E. Şaşıoğlu, C. Friedrich, S. Blügel, Effective
coulomb interaction in transition metals from constrained random-phase
approximation, Physical Review B 83 (12) (2011) 121101.
pickett1998
W. Pickett, S. Erwin, E. Ethridge, Reformulation of the lda+ u method for a
local-orbital basis, Physical Review B 58 (3) (1998) 1201.
timrov2018
I. Timrov, N. Marzari, M. Cococcioni, Hubbard parameters from
density-functional perturbation theory, Physical Review B 98 (8) (2018)
085127.
timrov2021
I. Timrov, N. Marzari, M. Cococcioni, Self-consistent hubbard parameters from
density-functional perturbation theory in the ultrasoft and
projector-augmented wave formulations, Physical Review B 103 (4) (2021)
045141.
ricca2020
C. Ricca, I. Timrov, M. Cococcioni, N. Marzari, U. Aschauer, Self-consistent
dft+ u+ v study of oxygen vacancies in srtio 3, Physical review research
2 (2) (2020) 023313.
paul2023
B. Paul, D. Mondal, D. Bhattacharya, S. Datta, M. Kundu, I. Mondal, P. Halder,
S. Sarkar, A. Ghosh, T. Mandal, et al., Transition metal impregnated
nanostructured oxide material for broadband electromagnetic interference
shielding: A theoretical and experimental insight, Chemical Engineering
Journal 459 (2023) 141560.
QE
P. Giannozzi, O. Baseggio, P. Bonfà, D. Brunato, R. Car, I. Carnimeo,
C. Cavazzoni, S. De Gironcoli, P. Delugas, F. Ferrari Ruffino, et al.,
Quantum espresso toward the exascale, The Journal of chemical physics
152 (15) (2020) 154105.
vesta
K. Momma, F. Izumi, Vesta: a three-dimensional visualization system for
electronic and structural analysis, Journal of Applied crystallography 41 (3)
(2008) 653–658.
elatools
S. Yalameha, Z. Nourbakhsh, D. Vashaee, Elatools: A tool for analyzing
anisotropic elastic properties of the 2d and 3d materials, Computer Physics
Communications 271 (2022) 108195.
vaspkit
V. Wang, N. Xu, J.-C. Liu, G. Tang, W.-T. Geng, Vaspkit: A user-friendly
interface facilitating high-throughput computing and analysis using vasp
code, Computer Physics Communications 267 (2021) 108033.
nelson2020
R. Nelson, C. Ertural, J. George, V. L. Deringer, G. Hautier, R. Dronskowski,
Lobster: Local orbital projections, atomic charges, and chemical-bonding
analysis from projector-augmented-wave-based density-functional theory,
Journal of Computational Chemistry 41 (21) (2020) 1931–1940.
miura1986
H. Miura, The crystal structure of hollandite, Mineralogical Journal 13 (3)
(1986) 119–129.
sun2019
S. Sun, X. Zhang, J. Cui, Q. Yang, S. Liang, High-index faceted metal oxide
micro-/nanostructures: a review on their characterization, synthesis and
applications, Nanoscale 11 (34) (2019) 15739–15762.
chen2012
Z. Chen, Z. Jiao, D. Pan, Z. Li, M. Wu, C.-H. Shek, C. L. Wu, J. K. Lai, Recent
advances in manganese oxide nanocrystals: fabrication, characterization, and
microstructure, Chemical Reviews 112 (7) (2012) 3833–3855.
hill1952
R. Hill, The elastic behaviour of a crystalline aggregate, Proceedings of the
Physical Society. Section A 65 (5) (1952) 349.
vinet1986
P. Vinet, J. Ferrante, J. Smith, J. Rose, A universal equation of state for
solids, Journal of Physics C: Solid State Physics 19 (20) (1986) L467.
born1955
M. Born, K. Huang, M. Lax, Dynamical theory of crystal lattices, American
Journal of Physics 23 (7) (1955) 474–474.
mouhat2014
F. Mouhat, F.-X. Coudert, Necessary and sufficient elastic stability conditions
in various crystal systems, Physical review B 90 (22) (2014) 224104.
voight1928
W. Voight, Lehrbuch der kristallphysik, Teubner, Leipzig (1928).
reuss1929
A. Reuß, Berechnung der fließgrenze von mischkristallen auf grund der
plastizitätsbedingung für einkristalle., ZAMM-Journal of Applied
Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und
Mechanik 9 (1) (1929) 49–58.
vinet1987
P. Vinet, J. Ferrante, J. H. Rose, J. R. Smith, Compressibility of solids,
Journal of Geophysical Research: Solid Earth 92 (B9) (1987) 9319–9325.
wu2012
H. Wu, Y. Chen, C. Deng, X. Su, Pressure-induced phase transition and
structural properties of cro2, Phase Transitions 85 (8) (2012) 708–717.
alptekin2015
S. Alptekin, Pressure-induced phase transition in cro2, Journal of molecular
modeling 21 (2015) 1–5.
dronskowski2004
R. Dronskowski, Itinerant ferromagnetism and antiferromagnetism from the
perspective of chemical bonding, International journal of quantum chemistry
96 (2) (2004) 89–94.
daalderop1990
G. Daalderop, P. Kelly, M. Schuurmans, First-principles calculation of the
magnetocrystalline anisotropy energy of iron, cobalt, and nickel, Physical
Review B 41 (17) (1990) 11919.
sander2004
D. Sander, The magnetic anisotropy and spin reorientation of nanostructures and
nanoscale films, Journal of Physics: Condensed Matter 16 (20) (2004) R603.
|
http://arxiv.org/abs/2307.04522v1 | 20230710124559 | Accretion Flow Properties of EXO 1846-031 During its Multi-Peaked Outburst After Long Quiescence | [
"Sujoy Kumar Nath",
"Dipak Debnath",
"Kaushik Chatterjee",
"Riya Bhowmick",
"Hsiang-Kuang Chang",
"Sandip K. Chakrabarti"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Dipak Debnath
[email protected]
[email protected]
Sujoy Kumar Nath (ORCID: 0000-0002-6640-0301)
Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India
Dipak Debnath (ORCID: 0000-0003-1856-5504)
Institute of Astronomy Space and Earth Science, AJ 316, Sector II, Salt Lake, Kolkata 700091, India
Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan
Kaushik Chatterjee (ORCID: 0000-0002-6252-3750)
South Western Institute for Astronomical Research, Yunnan University, University Town, Chenggong, Kunming 650500, P. R. China
Institute of Astronomy Space and Earth Science, AJ 316, Sector II, Salt Lake, Kolkata 700091, India
Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan
Riya Bhowmick (ORCID: 0000-0002-7658-0350)
Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India
Hsiang-Kuang Chang (ORCID: 0000-0002-5617-3117)
Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan
Department of Physics, National Tsing Hua University, Hsinchu 300044, Taiwan
Sandip K. Chakrabarti (ORCID: 0000-0002-0193-1136)
Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India
We study the recent outburst of the black hole candidate EXO 1846-031 which went into an outburst in 2019
after almost 34 years in quiescence. We use archival data from Swift/XRT, MAXI/GSC, NICER/XTI and NuSTAR/FPM
satellites/instruments to study the evolution of the spectral and temporal properties of the source during
the outburst. The low-energy X-ray flux of the outburst shows multiple peaks, making it a multipeak outburst.
Evolving type-C quasi-periodic oscillations (QPOs) are observed in the NICER data in the hard, hard intermediate
and soft intermediate states. We use the physical Two Component Advective Flow (TCAF) model to analyze the combined
spectra of multiple satellite instruments. According to the TCAF model, the accreting matter is divided into Keplerian
and sub-Keplerian parts, and the variation in the observed spectra in different spectral states arises out of the variable
contributions of these two types of accreting matter to the total accretion rate. Studying the evolution of the accretion
rates and other properties of the accretion flow obtained from the spectral analysis, we show how the multiple peaks in the outburst
flux arise from the discontinuous supply and the different radial velocities of the two types of accreting matter from the pile-up radius.
We detect an Fe emission line at ∼6.6 keV in the hard and the intermediate states in the NICER spectra. We determine the probable
mass of the black hole to be 12.43^+0.14_-0.03 M_⊙ from the spectral analysis with the TCAF model. We also estimate the viscous
timescale of the source in this outburst to be ∼8 days from the delay between the peaks of the Keplerian and sub-Keplerian mass accretion rates.
§ INTRODUCTION
A low-mass black hole X-ray binary (BHXRB) system consists of a main-sequence companion star orbiting a
stellar-mass black hole. Transient BHXRBs spend most of their lifetime in a quiescent state, exhibiting very low X-ray
luminosity (L_X ∼ 10^30-33 ergs/s; Tetarenko et al. 2016). Occasionally transient BHXRBs show bright outbursts,
lasting for a few weeks to a few months, during which the source becomes extremely luminous
(L_X ∼ 10^37-38 ergs/s; Tanaka & Shibazaki 1996).
Due to its non-zero angular momentum, matter from the companion star accretes onto the black hole (BH), forming an inward-spiralling accretion disk.
The accumulating matter heats up the disk, and the matter in the disk gets ionized causing thermal-viscous instability (Dubus et al. 2001; Lasota 2001).
As a result of the instability, the viscosity of the ionized matter in the outer disk increases suddenly. This causes more angular momentum
to be redistributed outward, and the accretion rate in the inner disk increases rapidly, triggering an outburst
(Chakrabarti & Titarchuk 1995; Ebisawa et al. 1996; Chakrabarti 2013). During an outburst, low mass BHXRBs go through
a succession of `accretion states', showing a rapid change in their temporal and spectral properties (Fender et al. 2004;
Homan & Belloni 2005; McClintock & Remillard, 2006). During the initial phase of the outburst, the source luminosity is
low and the energy spectrum can be approximated with a hard non-thermal power-law component. This state is called
the hard state (HS). As the outburst progresses, the source transits through the hard-intermediate state (HIMS) and
soft-intermediate state (SIMS), when the source luminosity gradually increases and the contribution of the low energy thermal
photons increases, which gradually softens the spectrum. The source luminosity becomes maximum in the soft state (SS), when the spectrum is dominated by
a thermal multicolor disk blackbody. After that, the source luminosity gradually decreases, and the source transits through
SIMS, HIMS and finally, to the HS. Low-frequency peaked and narrow noise components called quasi-periodic oscillations (QPOs)
have been observed in the power-density spectra (PDS) of most BHXRBs. Their properties (centroid frequency, Q-value, rms amplitude
and noise) also vary depending on the spectral state, and Casella et al. (2005) have classified these LFQPOs into three types:
A, B, and C. Generally, type-C QPOs with monotonically increasing or decreasing centroid frequency can be observed in the HS and
HIMS, while no QPOs are observed in the SS. The evolution of these spectral and temporal properties are strongly correlated, which is
manifested in the `Hardness-Intensity Diagram' (HID; Belloni et al. 2005; Debnath et al. 2008) or the `Accretion Rate Ratio-Intensity Diagram'
(ARRID; Jana et al. 2016).
Two separate mechanisms are responsible for the production of low and high-energy X-ray radiation from the accretion disks.
An optically thick, geometrically thin Keplerian flow dissipates the gravitational energy of the accreting matter through
viscosity and emits multicolor thermal blackbody photons (Novikov & Thorne 1973; Shakura & Sunyaev 1973).
When these low-energy photons get intercepted by a hot electron cloud, they get repeatedly inverse Comptonised and are
emitted as high-energy X-rays (Sunyaev & Titarchuk 1980, 1985). While there is general agreement about the emission mechanisms,
the actual nature of the hot electron cloud or the Compton cloud has been a matter of debate.
According to the Two-Component Advective Flow (TCAF) model (Chakrabarti & Titarchuk 1995; Chakrabarti 1997, Chakrabarti 2018),
the CENtrifugal pressure supported BOundary Layer (CENBOL) acts as the Compton cloud. This CENBOL is formed near the black hole when the
low-viscosity, freely falling sub-Keplerian matter comes to a halt as the outward centrifugal pressure becomes comparable to the inward
gravitational force, and it forms a standing or oscillating shock. The post-shock region becomes hot and puffs up and forms a torus-like region of
hot ionised matter. In the equatorial region, the viscosity remains higher than a certain critical value to maintain Keplerian angular momentum,
and this Keplerian matter becomes optically thick and emits the multicolor soft photons which are then partially intercepted by the CENBOL
and emitted as hard non-thermal photons. In the TCAF model, any observed spectrum depends on four independent flow parameters,
i.e. the accretion rates of the Keplerian and the sub-Keplerian matter, the location of the shock which is the outer boundary of CENBOL,
and the ratio of the post-shock to pre-shock matter densities (compression ratio). Additionally, it also depends on the mass of the BH
and a normalization factor which is the ratio of emitted to observed X-ray spectrum, both of which are constants for a given source.
As an outburst starts, the faster and hotter sub-Keplerian matter rushes towards the BH and forms the CENBOL which increases the hard Comptonised
flux. The Keplerian matter, which has a low velocity due to the higher viscosity, gradually moves towards the BH and cools down the CENBOL.
The CENBOL region shrinks in size, the hard photon flux decreases and the spectra become gradually softer. As the outer boundary of the CENBOL
oscillates (e.g. due to a resonance between the Compton cooling and compressional heating), the Comptonized hard X-ray intensity also varies
which gives rise to the observed quasi-periodic oscillations. This CENBOL also acts as the base of the jet/outflows.
To study how the physical flow parameters vary during an outburst and to estimate the intrinsic parameters of the BH, this TCAF model has been
incorporated into the spectral analysis software package XSPEC (Arnaud, 1996) as a local additive table model
(Debnath et al. 2014, 2015). So far, the TCAF model has been used to study the accretion flow dynamics of more than fifteen BHXRBs
(Mondal et al. 2016; Debnath et al. 2017; Chatterjee et al. 2021). Intrinsic parameters, like the
BH mass and distance, have been estimated (Molla et al. 2017; Chatterjee et al. 2019; Jana et al. 2020a; Nath et al. 2023). The origin of QPOs
and jets/outflows has also been successfully studied using this model (Mondal et al. 2015; Chakrabarti et al. 2015; Chatterjee et al. 2016, Jana et al. 2017; Debnath et al. 2021).
Galactic X-ray transient EXO 1846-031 was first discovered by EXOSAT during its outburst in 1985 (Parmar & White 1985).
Based on the ultra-soft component in the spectra of this outburst, Parmar et al. (1993) indicated the source EXO 1846-031 is a BH candidate.
After the first outburst, the source remained in quiescence for almost 34 years. Recently, the source was again found to be in outburst by
MAXI on 2019 July 23 (Negoro et al. 2019). Evolving Type-C QPOs were observed in the Insight-HXMT and NICER data
(Liu et al. 2021) which is generally observed in BHXRBs. From strong reflection features in the NuSTAR spectrum, Draghis et al. (2020)
suggested EXO 1846-031 to be a BH with nearly maximal spin (a=0.99^+0.002_-0.001) with a disk inclination of θ≈73^∘.
From Insight-HXMT and NuSTAR data, Wang et al. (2021) found signatures of an ionised disk wind with velocities up to 0.06c.
They suggest EXO 1846-031 is a low inclination system with θ≈40^∘. Ren et al. (2022) investigated the spectral evolution from
Insight-HXMT data and suggested that the maximal spin found by Draghis et al. (2020) might be affected by the choice of a different
hardening factor (f_col). Evidence of the presence of a pair of 3:2 ratio high-frequency quasi-periodic oscillations (HFQPO)
was found, and based on this the probable mass of the source was determined to be 3.4±0.2 M_⊙ (Strohmayer & Nicer Observatory
Science Working Group 2020).
Analysing the radio observations from MeerKAT, VLA and AMI-LA, Williams et al. (2022) suggested a distance range of
2.4–7.5 kpc, and a jet speed of β_int=0.29c.
In this paper, we study the spectral and temporal properties of EXO 1846-031 during its 2019 outburst with the TCAF model,
using Swift/XRT, Swift/BAT, MAXI/GSC, NICER/XTI and NuSTAR/FPM data. We briefly discuss the observations, data reduction,
and analysis procedures in §2. In §3 we present the results of our analysis. In §4, we discuss the
obtained results and draw our conclusions.
§ OBSERVATION AND DATA ANALYSIS
§.§ Observations
We study the 2019-2020 outburst of EXO 1846-031 using archival data from Swift (Gehrels et al. 2004), NICER (Gendreau et al. 2012),
MAXI (Matsuoka et al. 2009), and NuSTAR (Harrison et al. 2013) satellites. We study the evolution of the X-ray fluxes in the soft
and hard energy bands and their ratios using MAXI/GSC (2-10 keV) and Swift/BAT (15-50 keV) data of ∼ 10 months from 2019 July 9 (MJD=58673)
to 2020 April 10 (MJD=58949). For the detailed temporal and spectral study, we use data from Swift/XRT, NICER/XTI, MAXI/GSC and NuSTAR/FPM
satellites/instruments.
Although NICER and Swift monitored the source regularly in the rising phase of the outburst, during the declining phase, there is a data gap
of ∼ 3 months for Swift and ∼ 4 months for NICER. We use 14 NICER/XTI (1-11 keV) and 11 Swift/XRT (1-10 keV) observations
for spectral analysis. To study the spectra in a wider energy band, we also use MAXI/GSC (7-20 keV) and NuSTAR/FPM (4-79 keV) simultaneously
with NICER and Swift data. A detailed log of the observations is given in Table 1.
§.§ Data Reduction
§.§.§ Swift
Swift/XRT window timing (WT) mode data were used in our analysis. Level 1 data files obtained from the archive are processed with the XRTPIPELINE task to
produce Level 2 clean event files. A circular region of radius 30” around the source location is then used to extract the source
spectra and a region of the same radius is chosen away from the source to extract the background spectra using the tool XSELECT. ARF files
are created using the tool XRTMKARF and corresponding RMFs are obtained from the CALDB.
Using the GRPPHA tool, the spectra are rebinned to have at least 20 counts/bin.
Swift/BAT daily lightcurves are obtained from the Swift website (https://swift.gsfc.nasa.gov/results/transients/weak/EXO1846-031/).
§.§.§ NICER
NICER is an external payload attached to the International Space Station which has an X-ray timing instrument (XTI; Gendreau et al. 2012)
working in the energy range 0.2-12 keV with a timing resolution of ∼100 ns and spectral resolution of ∼85 eV at 1 keV. For analysis,
the Level 1 data files are processed with the nicerl2 script in the latest CALDB environment (ver. xti20221001) to obtain
Level 2 clean event files. The command barycorr is then used to apply a barycentric correction to the event files.
The lightcurves and spectra are extracted from these barycentre-corrected event files using the XSELECT task.
The nibackgen3C50 tool (Remillard et al. 2022) is then used to simulate the background corresponding to each observation.
The spectra are then rebinned to have at least 20 counts/bin with the GRPPHA task.
§.§.§ NuSTAR
NuSTAR raw data from the web archive are reduced with the NuSTAR data analysis software (NuSTARDAS, version 1.4.1).
Cleaned event files are produced using the nupipeline task in the presence of the latest calibration files.
With the XSELECT task, a circular region of 60” centred at the source coordinates is chosen as the source region,
and a circular region of the same radius away from the source location is chosen as the background region.
The nuproducts task is then used to extract the spectrum, ARF and RMF files.
The extracted spectra are then rebinned to have at least 30 counts/bin with the GRPPHA task.
§.§.§ MAXI
MAXI/GSC spectra are obtained using the MAXI on-demand process web tool (http://maxi.riken.jp/mxondem/; Matsuoka et al. 2009).
To study the evolution of the X-ray fluxes, daily average lightcurves are obtained from the MAXI website (http://maxi.riken.jp/star_data/J1849-030/J1849-030.html).
§.§ Data Analysis
Daily average light curve data of MAXI/GSC and Swift/BAT are used to study the variation of the X-ray flux in various energy bands
throughout the outburst. To study the hardness variations, we use two types of hardness ratios, namely hardness ratio 1 (HR1)
i.e. the ratio of 15-50 keV Swift/BAT flux in mCrab to 2-10 keV MAXI/GSC flux in mCrab, and hardness ratio 2 (HR2) i.e. the ratio
of 4-10 keV to 2-4 keV MAXI/GSC flux. To search for the presence of LFQPOs, we use the powspec task
to generate power density spectra (PDS) from 0.01 s time binned light curves of NICER. The light curve of a total observation is separated
into a number of intervals, each of which contains 8192 newbins. For each interval, a PDS is created, and these individual PDSs
are averaged to generate a single PDS which is then geometrically rebinned. We model the PDSs with multiple Lorentzian models
in XSPEC version 12.11.1 (Arnaud 1996) to account for the broadband noise, the QPOs and their harmonics.
From the fits we obtain the QPO frequencies (ν_QPO), width (Δν), Q-value (Q=ν_QPO/Δν) and RMS (%) amplitude.
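As a rough illustration of this procedure, the sketch below builds an averaged, Leahy-normalised PDS from an evenly binned light curve by splitting it into 8192-bin segments, and then fits a single Lorentzian to recover ν_QPO, Δν and the Q-value. It is a simplified stand-in for the actual analysis (no geometric rebinning, no harmonics, and only a constant noise level), and the light curve it uses is synthetic.

import numpy as np
from scipy.optimize import curve_fit

def averaged_pds(counts, dt, nbins=8192):
    # Average the Leahy-normalised periodograms of consecutive nbins-long segments.
    nseg = len(counts) // nbins
    pds = np.zeros(nbins // 2 - 1)
    for i in range(nseg):
        seg = counts[i * nbins:(i + 1) * nbins]
        power = np.abs(np.fft.rfft(seg))**2
        pds += 2.0 * power[1:nbins // 2] / seg.sum()
    freq = np.fft.rfftfreq(nbins, d=dt)[1:nbins // 2]
    return freq, pds / nseg

def lorentzian(f, f0, fwhm, amp, const):
    return amp * (fwhm / (2.0 * np.pi)) / ((f - f0)**2 + (fwhm / 2.0)**2) + const

# Synthetic 0.01 s binned light curve: Poisson noise plus a quasi-periodic ~3.2 Hz modulation.
np.random.seed(0)
dt, n = 0.01, 8192 * 40
jittered_freq = 3.2 + np.random.normal(0.0, 3.0, n)     # frequency jitter broadens the peak
phase = 2.0 * np.pi * np.cumsum(jittered_freq * dt)
counts = np.random.poisson(200.0 * dt * (1.0 + 0.2 * np.sin(phase)))

freq, pds = averaged_pds(counts, dt)
popt, _ = curve_fit(lorentzian, freq, pds, p0=(3.2, 0.5, 100.0, 2.0))
nu_qpo, dnu = popt[0], abs(popt[1])
print(f"nu_QPO = {nu_qpo:.2f} Hz, FWHM = {dnu:.2f} Hz, Q = {nu_qpo / dnu:.1f}")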
We utilize HEASARC's spectral analysis software package XSPEC version 12.11.1 (Arnaud 1996)
for analyzing the spectra. All the spectra are fitted with the TCAF model based local additive table model (Debnath et al. 2014).
To fit spectra using the TCAF model, four input flow parameters are essential:
(1) the Keplerian disk accretion rate (ṁ_d in Ṁ_Edd),
(2) the sub-Keplerian halo accretion rate (ṁ_h in Ṁ_Edd),
(3) the shock location (X_s in Schwarzschild radius r_s=2 GM_BH/c^2), and
(4) the dimensionless shock compression ratio (R = ρ_+/ρ_-, ratio of the post-shock to the pre-shock matter density).
In addition, one system parameter, i.e., the mass of the BH (M_BH in M_⊙) and one instrument parameter, i.e. the model normalization (N)
are required. To account for the interstellar absorption, we use the TBabs model with vern
cross-sections (Verner et al. 1996) and wilm abundances (Wilms et al. 2000).
We use the smedge model to account for the instrumental features in the NICER spectra at ∼1.8 keV.
§ RESULTS
After almost 34 years in quiescence, EXO 1846-031 again went into an outburst on 2019 July 23 (MJD 58687) which lasted for ∼10 months.
To examine the nature of the outburst and the accretion flow properties during the outburst, we carried out a detailed temporal and
spectral study using data from multiple satellites. The results of the study are presented below.
§.§ Temporal Properties
To study the outburst profile in different energy bands and the variation of hardness ratios, we use MAXI/GSC and Swift/BAT
daily light curve data. To study the short-timescale variability features, we use NICER/XTI data due to its high
temporal resolution.
§.§.§ Outburst Profile and Hardness Ratios
We show the variation of X-ray fluxes in different energy bands and their hardness ratios
from 2019 July 9 (MJD=58673) to 2020 April 10 (MJD=58949) in various panels of Fig. 1.
The variation of the Swift/BAT (15-50 keV) flux and the MAXI/GSC (2-10 keV) flux is shown in panel (a),
while the variation of their hardness ratio (HR1) is shown in panel (b). Likewise, panel (c) shows
the variation of MAXI/GSC flux in lower (2-4 keV) and higher (4-10 keV) energy bands while panel (d)
shows the variation in their hardness ratio (HR2). From the Figure, we can observe that at the start
of the outburst, both soft and hard fluxes increased rapidly, and the 15-50 keV Swift/BAT flux
attained a maximum on MJD 58697, roughly 8 days before the softer (2-4 keV and 4-10 keV) MAXI/GSC fluxes.
The hardness ratios (HR1 and HR2) also increased and attained a maximum around MJD 58691 and decreased
quickly to a low level. After the initial maximum, the Swift/BAT flux slowly decreased and decayed into
quiescence at the end of the outburst. On the other hand, after the maximum around MJD 58705, the MAXI/GSC
fluxes (in different energy bands) decreased for ∼13 days and then started to increase again. They
attained a maximum around MJD 58736 and then decreased with an almost constant rate for ∼65 days. After
that, the GSC fluxes remained at a constant low level till the end of the outburst.
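For concreteness, the hardness ratios used above can be computed directly from the daily light curves; the sketch below assumes the BAT and GSC daily fluxes have already been read onto a common MJD grid (the arrays here are smooth placeholders, not the measured fluxes) and evaluates HR1 and the dates of the hard- and soft-band maxima.

import numpy as np

# Placeholder daily fluxes on a common MJD grid (mCrab); in practice these come from the
# Swift/BAT and MAXI/GSC light-curve files.
mjd = np.arange(58673.0, 58950.0, 1.0)
bat_15_50 = np.interp(mjd, [58673, 58697, 58750, 58949], [5.0, 250.0, 30.0, 5.0])
gsc_2_10 = np.interp(mjd, [58673, 58705, 58736, 58949], [5.0, 300.0, 320.0, 10.0])
gsc_2_4, gsc_4_10 = 0.6 * gsc_2_10, 0.4 * gsc_2_10   # crude placeholder band split

hr1 = bat_15_50 / gsc_2_10     # HR1 = BAT (15-50 keV) / GSC (2-10 keV)
hr2 = gsc_4_10 / gsc_2_4       # HR2 = GSC (4-10 keV) / GSC (2-4 keV)

print("BAT peak at MJD", mjd[np.argmax(bat_15_50)])
print("GSC peak at MJD", mjd[np.argmax(gsc_2_10)])
print("maximum HR1 = %.2f at MJD %.0f" % (hr1.max(), mjd[np.argmax(hr1)]))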
Looking at the outburst profile, we can see that this 2019 outburst of EXO 1846-031 has shown two stronger flux peaks in the rising phase
and two very weak peaks in the declining phase. To estimate the total amount of flux released during each of the peaks, we fit the 2-10 keV
MAXI/GSC lightcurve using FRED profiles (Kocevski et al. 2003). A combination of four FRED profiles is used to fit the complete outburst (Fig. 2)
(see, Chakrabarti et al. 2019, Bhowmick et al. 2021, Chatterjee et al. 2022 for more details). In the Fig. 2, the blue curve marks the combined
fit of the entire outburst and the red curves mark individual FRED fitted peaks of the outburst. We choose 12 mCrab as the threshold of flux
for the outburst. The horizontal black line indicates the 12 mCrab flux value. Two vertical lines mark the start and the end of the outburst when
the X-ray flux rises above and falls below this 12 mCrab threshold. The total integrated X-ray flux (IFX_tot) of the complete outburst calculated
from the combined fit is 39.70^+3.29_-3.05 Crab. The individual integrated flux values (IFX) of the first, second, third and fourth peaks are
6.31^+0.26_-0.25 Crab, 30.82^+2.60_-2.42 Crab, 1.77^+0.60_-0.38 Crab and 0.80^+0.01_-0.16 Crab respectively. IFX values depict
the amount of energy released in each peak. Comparing the IFX values of the four peaks, we can conclude that the maximum amount of energy was released
during the second peak, i.e., the maximum amount of matter was cleared during the second peak of the outburst.
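A minimal sketch of this kind of decomposition is given below: a sum of FRED components is fitted to a daily light curve and each component is then integrated over time. The functional form used here is a commonly adopted FRED parameterisation with an exponential rise and decay, not necessarily the exact Kocevski et al. (2003) profile used in the analysis; only two components are included instead of four, and the light curve is a noisy placeholder.

import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def fred(t, t0, A, tau_r, tau_d):
    # One FRED component: peaks at t0 + sqrt(tau_r*tau_d) with peak value A; zero before t0.
    dt = np.clip(t - t0, 1e-6, None)
    f = A * np.exp(2.0 * np.sqrt(tau_r / tau_d)) * np.exp(-tau_r / dt - dt / tau_d)
    return np.where(t > t0, f, 0.0)

def two_fred(t, *p):
    return fred(t, *p[0:4]) + fred(t, *p[4:8])

# Placeholder 2-10 keV daily fluxes (mCrab) versus MJD with two overlapping flaring episodes.
np.random.seed(0)
mjd = np.arange(58687.0, 58810.0, 1.0)
flux = two_fred(mjd, 58686.0, 300.0, 3.0, 12.0, 58718.0, 320.0, 5.0, 30.0)
flux += np.random.normal(0.0, 5.0, mjd.size)

p0 = (58686.0, 250.0, 2.0, 10.0, 58718.0, 250.0, 4.0, 25.0)
popt, _ = curve_fit(two_fred, mjd, flux, p0=p0)
for i in range(2):
    pars = popt[4 * i:4 * i + 4]
    integ, _ = quad(lambda t: float(fred(np.array([t]), *pars)[0]), pars[0], mjd[-1] + 200.0)
    print(f"peak {i + 1}: integrated flux ~ {integ / 1000.0:.2f} Crab day")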
§.§.§ Power Density Spectra (PDS)
To study the presence and evolution of LFQPOs during the outburst, we use 0.01 s time-binned NICER
light curves. We use zero-centred Lorentzian models to fit the broad noise components and narrow
Lorentzians to fit the QPO peaks to determine the centroid frequencies, Q-values, rms amplitudes, etc.
We find the presence of QPOs in 19 NICER observations in the initial phase of the outburst.
The observed QPOs can be classified as type-C which are characterized by a Q-value of ∼ 7-12 and
an amplitude of ∼3–16 % rms that are superposed on a broad flat-top noise (Casella et al. 2005).
Figure 3 shows a representative PDS, where a QPO at 3.24 Hz can be seen along with its harmonic at 6.52 Hz.
The QPOs are found in the hard, the hard intermediate and the soft intermediate states which are discussed in detail in later sections.
§.§ Spectral Properties
We use data from Swift/XRT, NICER/XTI, MAXI/GSC and NuSTAR/FPM for spectral analysis in a broad
1-79 keV energy range. We mainly use the absorbed TCAF model to study the spectra. We use the
TBabs model for absorption, where the hydrogen column density (N_H)
was kept free. We found the N_H to vary between 5.12×10^22 cm^-2 and
10.94×10^22 cm^-2 during our analyses period. In the NICER spectra, edge-like
residuals are seen at ∼1.8 keV corresponding to the Silicon K edge which is an instrumental
feature typical for Si-based detectors (Alabarta et al. 2020, Miller et al. 2018).
We use the smedge model to account for it. An Fe-Kα emission line at ∼6.4 keV
is also observed in the NICER spectra of the initial rising phase, which is fitted using the
Gaussian model. We jointly fit the XRT+GSC spectra with the constant*TBabs*(TCAF)
model (Fig. 4a) and the NICER+GSC spectra with the constant*TBabs*smedge(TCAF) or
constant*TBabs*smedge(TCAF+gaussian) model (Fig. 4b). In the NICER+NuSTAR spectra,
a dip is observed at ∼10 keV in the NuSTAR data. At first, we fit the spectra with the constant*TBabs*smedge(TCAF+Gaussian) model
ignoring the dip, and obtain χ^2/DOF=1.79. To improve the statistic, we use the gabs model to account for the dip
and get a good statistic with χ^2/DOF=0.91. The corresponding spectra are shown in Fig. 5(a–b). Detailed
results of our spectral analysis are shown in Table 2.
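For reference, a schematic of how such a combined fit could be set up in PyXspec is sketched below. It assumes the TCAF model is available as a local additive table (given here the hypothetical file name TCAF.fits); the spectrum file names, energy range and starting values are placeholders, and the TCAF table parameters are only indicated in a comment since their names depend on how the table was generated.

from xspec import AllData, Model, Fit, Xset

Xset.abund = "wilm"     # Wilms et al. (2000) abundances
Xset.xsect = "vern"     # Verner et al. (1996) cross sections

# Load simultaneous spectra from two instruments into separate data groups (placeholder names).
AllData("1:1 nicer_spec.pha 2:2 nustar_spec.pha")
AllData.ignore("bad")
AllData.ignore("**-1.0 79.0-**")

# constant*TBabs*smedge(atable{TCAF.fits} + gaussian); the constant absorbs the
# cross-normalisation between instruments, smedge the ~1.8 keV instrumental edge.
m = Model("constant*TBabs*smedge(atable{TCAF.fits} + gaussian)")
m.TBabs.nH = 6.0           # 10^22 cm^-2, left free
m.gaussian.LineE = 6.6     # Fe emission line energy (keV)
# The TCAF table parameters (disk rate, halo rate, shock location, compression ratio,
# BH mass, normalisation) are set through the corresponding table-model component.

Fit.statMethod = "chi"
Fit.query = "yes"
Fit.perform()
print("chi^2 / dof =", Fit.statistic, "/", Fit.dof)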
§.§.§ Evolution of the Spectral states
The evolution of various temporal and spectral parameters of our analysis with the TCAF model shows that the source has evolved through
different spectral states in this outburst. We get a rough estimation of the state evolution from the outburst profile and the variation of HRs.
From the variation of the spectral parameters, we get a clearer picture of the state evolution as they show the actual evolution
of the accretion flow dynamics, e.g. the change in the disk and halo accretion rates, the change of the position of the shock and its strength, etc.
In Figure 6, we show the variation of the disk accretion rate (ṁ_d), the halo accretion rate (ṁ_h), the total accretion rate
(ṁ_d + ṁ_h) and the accretion rate ratio (ARR = ṁ_h/ṁ_d). In Figure 7, we show the variation of the best fitted
mass parameter (M_BH), the shock location (X_s) and the shock compression ratio (R), along with the evolution of the QPO centroid frequency.
Here we discuss the spectral states in detail.
(1) Rising Hard State (HS):
The source was in the hard state when we start our spectral analysis on 2019 July 31 (MJD 58695).
The total accretion rate was high in this state, and most of the accreting matter was sub-Keplerian,
as ṁ_h was higher than ṁ_d by a factor of ∼3. The ARR was also high in this state,
which started to decrease gradually as ṁ_d started to increase and ṁ_h started to decrease
as the outburst progressed. The shock started to move towards the BH from a faraway location (460r_s),
but its strength was almost constant in this period (R∼1.5). Two LFQPOs were found in this state
whose centroid frequency increased from 0.25 Hz to 0.41 Hz. High HR was also observed in this state
as the hard flux (Swift/BAT) dominated the soft flux (MAXI/GSC). The source remained in this state until 2019 August 2 (MJD 58697).
(2) Rising Hard Intermediate State (HIMS):
After MJD 58697, the total accretion rate started to decrease as the previously dominant ṁ_h
started to decrease rapidly. The total accretion rate began to increase again after 2019 August 5 (MJD 58700)
as ṁ_d started to increase and became dominant. The ARR decreased steadily in this state. Likewise,
the shock moved inward rapidly, moving from ∼325 r_s to ∼37 r_s in ∼7 days with decreasing strength.
Nine LFQPOs were found in this state whose centroid frequency increased rapidly to ∼7 Hz.
The HR decreased in this state as the dominating hard flux began to decrease and soft flux increased steadily.
The source stayed in this state until 2019 August 8 (MJD 58703).
(3) Rising Soft Intermediate State (SIMS):
The total accretion rate decreased and became roughly constant at a low level after MJD 58703.
Both the ṁ_d and ṁ_h became steady, with ṁ_d dominating over the ṁ_h.
The shock ceased to move towards the BH and came to a halt at ∼35r_s and its strength also became constant.
We found eight LFQPOs during the initial part of this state, with their centroid frequency showing a slowly
decreasing trend. The hard flux and the soft flux both decreased in this state, causing the HR to become low.
This state of the outburst continued until 2019, August 31 (MJD 58726).
(4) Soft State/High Soft State (SS/HSS):
After MJD 58726, the soft fluxes began to increase rapidly again which is quite unusual. An abrupt change has
taken place in the accretion process. The hard 15-50 keV flux remained low, and this shows that the change
in the accretion process has only affected the fluxes below 10 keV. The soft fluxes increased up to
2019 September 10 (MJD 58736) and then decreased almost linearly until 2019 November 14 (MJD 58801)
and became steady at a low level. The HRs also became low, signifying the source had transitioned into a
soft state/high soft state. Although XRT and NICER spectra were available for some days at the start of this
state, the TCAF fit of these spectra was statistically unacceptable, which shows that the two component configuration
of the accretion flow had been violated. We discuss this in detail in 4. After November 2019, spectral data is unavailable
for ∼ 4 months, due to the source becoming sun-constrained (Williams et al. 2022). Hence it became impossible
to determine how long the source was in the soft state.
(5) Declining Hard State (HS):
After 2020 February 26 (MJD 58905), Swift/XRT data became available for spectral analysis.
The total accretion rate, the ṁ_d and the ṁ_h all were low in this period.
On the other hand, the ARR became high, and the shock also moved outward at ∼250r_s with increased strength.
All of these show that the source had already transitioned into the declining hard state.
§.§.§ Estimation of BH mass from spectral analysis
Mass of the BH (M_BH) is an important parameter for spectral fitting with TCAF. Mass of the BH
in EXO 1846-031 was previously determined to be 3.24±0.2 M_⊙ based on the presence of
3:2 ratio HFQPOs (Strohmayer & Nicer Observatory Science Working Group 2020). Initially, we tried
to fit the spectra with TCAF keeping the mass parameter frozen at this value. But the resulting reduced chi-squares were high
and the fits were statistically unacceptable. Hence we keep the mass parameter free during further analysis with TCAF.
From our spectral fits, we find the best fitted M_BH values to vary between 7.1-12.6 M_⊙.
However, mass of a BH in a BHXRB system is not supposed to change significantly during the course of an outburst.
The spread in the mass values obtained from TCAF fits results from random measurement errors, and they do not show
the variation of the actual BH mass during the outburst. In our spectral analysis, we fitted the spectra of different
energy bands obtained from multiple instruments of different effective areas, which may also contributes to the measurement
errors in the mass of the BH. To reduce such errors, we perform a global fit using all spectra in different epochs.
We use the model constant*TBabs*smedge(TCAF+gaussian) and keep the mass parameter linked
for all spectra. The joint fit is shown in Fig. 8. From the global fit, we obtain a mass value of
the source as 12.43^+0.14_-0.03 M_⊙.
§ DISCUSSIONS AND CONCLUDING REMARKS
EXO 1846-031 is a galactic black hole candidate that went into an outburst in July 2019 after remaining almost 34 years
in quiescence after its discovery in 1985. We study the evolution of the temporal and spectral properties of the
source during the outburst using observational data from Swift/XRT, Swift/BAT, NICER/XTI, MAXI/GSC and NuSTAR/FPM
satellites/instruments. For the spectral analysis we use the physical TCAF model and fit NICER (1–10 keV), combined
NICER+GSC (1–20 keV), XRT+GSC (1–20 keV) and NICER+NuSTAR (1–79 keV) spectra for 25 epochs spread over the outburst duration.
From our spectral fits, we obtain flow parameters of the system such as the Keplerian disk accretion rate (ṁ_d),
the sub-Keplerian halo accretion rate (ṁ_h), the shock location (X_s), and the shock compression ratio (R).
As these flow parameters evolve during the outburst, we gain insight into how the accretion flow of matter changes
and produces different kinds of spectral and temporal variability. We also estimate the mass of the black hole from our spectral analysis.
Generally, transient black hole outbursts show two types of outburst profiles, fast rise exponential decay (FRED) or
slow rise slow decay (SRSD) (Debnath et al. 2010). However, in the case of some BHXRBs, the X-ray flux does not decay
into quiescence after the first outburst peak. Rather, they show one or more peaks after the main outburst peak
before eventually going into the quiescence phase. In literature, such phenomena are known as “reflares”, “rebrightenings”
or “mini-outbursts” (e.g. GRO J0422+32, MAXI J1659-152, GRS 1739-278, MAXI J1910-057;
Chen et al. 1997, Homan et al. 2013, Yan & Yu 2017, Nath et al. 2023). For the 2019 outburst of EXO 1846-031, we can see from
Fig. 1 that both the soft and hard fluxes increase rapidly at the start of the outburst. While the hard flux decayed
slowly after attaining its maximum, the soft flux, though it started to decrease initially, began to increase
again and attained a maximum comparable with the first peak. This outburst can be classified as a multipeak outburst
according to the re-brightening classification scheme of Zhang et al. (2019). As matter gets accumulated at
the pile-up radius (X_p; Chakrabarti et al. 2019; Bhowmick et al. 2021; Chatterjee et al. 2022)
in the quiescence phase, before an outburst, it is heated up and gets ionized. This ionised matter
causes a thermal-viscous instability in the matter. This instability increases the viscosity in the disk
causing an increased outward redistribution of angular momentum. This causes the matter to flow rapidly inward
onto the BH, triggering the outburst (Lasota 2001; Dubus et al. 2001; Hameury 2020). However, this disk
instability model (DIM) cannot explain these re-brightenings/mini-outbursts phenomena very well. Although several
models have been proposed that explain the reflares (e.g., Kuulkers et al. 1994; Hameury et al. 2000; Zhang et al. 2019),
none of them are well verified. Hence we investigate the rebrightening phenomena of EXO 1846-031 with the TCAF picture.
During the 2019 outburst, EXO 1846-031 showed two brighter (in the rising phase) and two dimmer peaks (in the declining phase)
in the low energy outburst profile. We fitted the 2-10 keV MAXI/GSC outburst profile with multiple FRED models, and from this fit we estimated
that the total integrated flux released in the outburst is 39.70^+3.29_-3.05 Crab. The contribution of individual peaks calculated from
the individual FRED profiles are 16%, 78%, 4% and 2% respectively for the first, second, third and fourth peaks. Here we observe that
although the peak fluxes are roughly the same, about five times more energy is released during the second peak than during the first, and this
indicates that most of the matter has been released from the X_p during the second peak. This is quite uncommon in transient BHXRBs.
At the start of the outburst, when the viscosity at the pile-up radius increased above the critical value,
matter began to rush inward. We can see from Fig. 6 that the halo rate is high compared to the disk rate.
As the sub-Keplerian matter has low viscosity, it falls freely towards the BH, whereas the Keplerian matter
has a larger viscosity and moves inward more slowly, on the viscous timescale. The sub-Keplerian matter reaches
the BH faster than the Keplerian matter, and the halo rate attains peak value before the disk rate.
From Fig. 7, we see that the shock is far away in this initial phase. As there is no Keplerian matter to cool
the faster-moving sub-Keplerian matter, it forms a large CENBOL, and this CENBOL inverse Comptonizes most of
the soft thermal photons and produces a large number of hard photons. Hence we can see from Fig. 1 that the
high energy fluxes dominate the low energy fluxes making the HRs high and the source goes into the rising
hard state. After MJD 58697, the Keplerian matter begins to cool down the sub-Keplerian matter as it
gradually moves towards the BH. The disk rate starts to increase and the halo rate decreases. The CENBOL
shrinks in size, and the shock, which is the outer boundary of CENBOL, moves closer to the BH and decreases
in strength. Hence the inverse-Comptonization is reduced, the hard flux begins to decrease, the thermal flux
increases, the HRs decrease, and the source goes into the hard intermediate state. As the supply of accreting matter
gradually decreases, both the disk rate and halo rate decrease, and the CENBOL shrinks farther and the shock
moves very closer to the BH. Both the soft and the hard flux decrease, the HRs are decreased to a very low level
and the source goes into a soft intermediate state. In all of these three states, we find the presence of low-frequency
quasi-periodic oscillations (LFQPO). In the TCAF picture, LFQPOs are produced due to the oscillation of
the shock, i.e. the outer boundary of the CENBOL. The centroid frequency of the LFQPO (ν_QPO) is roughly inversely
proportional to the shock location (ν_QPO ∼ 1/[X_s(X_s-1)^{1/2}]; Chakrabarti et al. 2008, Debnath et al. 2010).
As we can see from Fig. 7, as the shock moves closer to the BH in the HS and HIMS, the centroid frequency of the QPO
increases. As the shock becomes almost steady in the SIMS, the QPO frequency also becomes steady.
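Taking the quoted proportionality at face value, one can check how strongly the centroid frequency should respond to the inward motion of the shock. The short sketch below evaluates only relative frequencies (the absolute scale requires the dynamical-time prefactor of the full shock-oscillation model, and the compression ratio, which also enters the full expression, is ignored here); the shock locations are representative values spanning the range quoted above.

import numpy as np

def qpo_relative(xs):
    # Relative QPO frequency for a shock at xs (in Schwarzschild radii), nu ~ 1/[xs*sqrt(xs-1)].
    xs = np.asarray(xs, dtype=float)
    return 1.0 / (xs * np.sqrt(xs - 1.0))

xs_values = np.array([460.0, 325.0, 100.0, 37.0, 35.0])
ratio = qpo_relative(xs_values) / qpo_relative(460.0)
for xs, r in zip(xs_values, ratio):
    print(f"X_s = {xs:6.1f} r_s  ->  nu_QPO / nu_QPO(460 r_s) ~ {r:6.1f}")
# The frequency grows by well over an order of magnitude as the shock moves in,
# the same qualitative trend as the observed rise from ~0.25 Hz to ~7 Hz.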
After some days in the SIMS (∼ MJD 58715), the value of the compression ratio becomes close to one
and the halo rate becomes close to zero. This signifies that the post-shock and pre-shock matter densities are equal,
which means that the shock has become very weak or has disappeared and has moved very close to the black hole.
As the shock disappears, the sub-Keplerian and Keplerian components of the accretion flow become essentially the same.
This very weak shock was unable to produce any QPOs; hence the QPOs disappear gradually at the later stage of the SIMS.
After MJD 58726, the soft fluxes began to increase again while the hard fluxes remained low, which shows that
there is an increase in the thermal emission without much of it being inverse-Comptonized.
Although some NICER and XRT spectra are available in this phase, we failed to fit these spectra with the TCAF model.
This indicates that the two-component configuration of the accretion flow is no longer maintained in this period.
The sharp increase in the soft fluxes indicates that the supply of the Keplerian matter has increased.
This increased supply of Keplerian matter has cooled down the remaining sub-Keplerian matter and only the
Keplerian disk accretion is occurring in this state. According to previous studies (Chakrabarti et al. 2019,
Bhowmick et al. 2021, Chatterjee et al. 2022), accreting matter supplied from the companion starts to accumulate at
the pile-up radius (X_p) during the quiescence phase prior to an outburst, and the longer the duration
of the quiescence phase, the more matter accumulates. Prior to the outburst in 2019, EXO 1846-031 was
in the quiescence phase for a long time (∼ 34 years). This is very similar to the 2003 outburst of the
source H 1743-322 when it remained inactive for 25 years before the outburst (Chakrabarti et al. 2019).
Similar to the case of H 1743-322, this long period of inactivity of EXO 1846-031 indicates a large amount of matter was
accumulated at the X_p before the outburst. This accumulated matter starts to heat up the accretion flow
and gives rise to a convective instability which increases the viscosity due to the resulting turbulence.
As the viscosity at X_p increased above the critical value, the outburst was triggered.
However, the increase in viscosity was not large enough to release all
of the accumulated matter from the pile-up radius. As the sub-Keplerian matter moves faster, all of it
gets depleted quickly and the source enters the SIMS, which could also be interpreted as the declining state
of the first failed (as the soft state is not reached) outburst. At the end of the SIMS, viscosity at the X_p
increases again, and the remaining Keplerian matter is released triggering the reflare event.
We find evidence of a broad absorption feature in the SIMS around ∼ 10 keV, which we model with
gabs with a line energy of 9.71±0.23 keV. This kind of absorption feature
could indicate the presence of highly ionised, high-velocity winds from the accretion disk (Prabhakar et al. 2023),
which in turn indicates that the radiation pressure in the disk is very high. This large amount of radiation
irradiates the remaining matter at X_p, and an instability builds up again creating a situation similar
to the initial phase before an outburst. This instability again increases the viscosity at X_p and
matter starts to accrete again towards the BH. The majority of the sub-Keplerian matter was accreted
during the first peak, and this new accretion consists largely of high-viscosity Keplerian matter.
This Keplerian matter interacts with the remaining small amounts of sub-Keplerian matter and cools it down.
From Fig. 1, we can see that after attaining the second maximum, the soft flux decreases almost linearly
instead of an exponential decline, which is another indication that only the comparatively slow moving Keplerian
matter is responsible for this reflare. After ∼ MJD 58800, the source became Sun-constrained
and there is no available data for spectral analysis in the period between MJD 58808 and MJD 58904.
Hence the exact date when the source came out of the SS cannot be determined.
After MJD 58905, spectral analysis shows that the shock has moved outward at ∼ 250 r_s with an
increased ARR. This indicates the source has already moved into the declining hard state.
The time taken by the high-viscosity matter to reach the BH from the pile-up radius is termed
the viscous timescale (Chakrabarti et al. 2019). Due to its low viscosity, the sub-Keplerian matter
moves toward the BH on the free-fall timescale, whereas the Keplerian matter takes more time to reach the BH due
to its higher viscosity. For this reason, it is generally observed that the halo accretion rate attains
its peak before the disk rate, and the time difference between the disk and halo peaks allows us to infer
the viscous timescale of the source (Debnath et al. 2015, Jana et al. 2016, 2020b). From Fig. 1, we can see
that the 15-50 keV Swift/BAT hard flux attains a peak on MJD 58697, and the 2-4 keV MAXI/GSC soft flux attains a peak
∼ 8 days later on MJD 58705. A similar delay between the peaks of the halo and disk rates is also found (see Fig. 6).
Hence we estimate the viscous timescale of this source to be ∼ 8 days. This large viscous timescale indicates
that X_p is far away from the BH and that the size of the accretion disk is large.
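The same delay can also be estimated directly from the daily light curves by cross-correlating the hard and soft bands; a minimal sketch with synthetic placeholder curves is given below (in the actual estimate the Swift/BAT 15-50 keV and MAXI/GSC 2-4 keV light curves would take the place of the Gaussians).

import numpy as np

def peak_delay(hard, soft, max_lag=30):
    # Lag (days) at which the soft curve best matches the hard curve, from a simple
    # discrete cross-correlation of mean-subtracted, equally sampled daily fluxes.
    h = hard - hard.mean()
    s = soft - soft.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.sum(h[:len(h) - l] * s[l:]) if l >= 0 else np.sum(s[:len(s) + l] * h[-l:]) for l in lags]
    return lags[int(np.argmax(cc))]

# Synthetic daily curves in which the soft peak trails the hard peak by 8 days.
t = np.arange(0.0, 120.0, 1.0)
hard = np.exp(-0.5 * ((t - 20.0) / 6.0)**2)
soft = np.exp(-0.5 * ((t - 28.0) / 8.0)**2)

print(f"soft band lags the hard band by ~{peak_delay(hard, soft)} days")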
The mass of the BH in this source has not yet been measured dynamically, so we try to estimate the mass
from our spectral fits. The spectra emitted from the accretion processes around a BH is highly dependent
on its mass as various features of the accretion dynamics such as the size of the CENBOL and the electron
number density inside it, soft radiation intensity from the Keplerian disk, etc. depend on the mass. We allow
the M_BH parameter to vary freely during our spectral analysis so that we get a best fitted value of the
parameter from each spectral fit. We find the best fitted values of the parameter to vary in the range
7.1-12.6 M_⊙. This spread in the mass value is a consequence of systematic errors
due to the use of data from multiple instruments with different energy ranges and effective areas.
To reduce such errors, we jointly fit all the spectra of different epochs and estimate the most probable
mass of the source to be 12.43^+0.14_-0.03 M_⊙.
§ SUMMARY AND CONCLUSIONS
We study the spectral and temporal properties of the source EXO 1846-031 during its 2019 outburst after almost 34
years in quiescence. We use MAXI/GSC and Swift/BAT daily lightcurve data to study the evolution of the X-ray
fluxes and hardness ratios during the outburst. We use the FRED profile to fit the outburst flux profile and
estimate the contribution of each flux peak to the total flux released during the outburst.
We use data from multiple instruments (Swift/XRT, MAXI/GSC, NICER/XTI, NuSTAR/FPM) for a broadband spectral study over a period of 222 days.
We perform our spectral study using physical TCAF model. Based on our spectral analysis, the following conclusions can be drawn:
* After 34 years in quiescence, EXO 1846-031 showed an outburst in 2019 that can be classified as a multipeak outburst.
* Before the start of the outburst, a large amount of matter was accumulated at the pile-up radius X_p (located far away
from the BH), and not all of this matter was accreted during the first outburst peak.
* The broad absorption feature around ∼ 9 keV indicates the presence of a fast-moving, highly ionized disk wind in the rising SIMS.
* As the high X-ray flux irradiates the remaining matter at X_p, the viscosity increases and starts a fresh accretion of
matter triggering the reflare.
* This increased supply of highly viscous Keplerian matter in the reflaring event cools down and washes out the sub-Keplerian matter,
and only Keplerian disk accretion happens in the HSS.
* Although the source showed two brighter and two dimmer peaks during the outburst, ∼ 78% of the total energy was released in the second
flaring event.
* From spectral fitting with TCAF, the probable mass of the source was estimated to be 12.43^+0.14_-0.03 M_⊙.
* From the difference between the disk and halo peaks in the rising phase of the outburst, we estimated the viscous timescale of the source to be ∼ 8 days.
§ ACKNOWLEDGEMENTS
This work made use of Swift/XRT, Swift/BAT, NICER/XTI, and NuSTAR/FPM data supplied by the UK Swift Science Data Centre at the University of Leicester,
and MAXI/GSC data were provided by RIKEN, JAXA, and the MAXI team. S.K.N. acknowledges support from the SVMCM fellowship, the government of West Bengal.
S.K.N. and D.D. acknowledge support from the ISRO-sponsored RESPOND project (ISRO/RES/2/418/17-18) fund. D.D. and K.C. acknowledge visiting research grants
of National Tsing Hua University, Taiwan (NSTC 111-2811-M-007-066). R.B. acknowledges support from the CSIR-UGC fellowship (June-2018, 527223).
H.-K. C. is supported by NSTC of Taiwan under grant 111-2112-M-007-019.
99
Arnaud, K. A. 1996, in ASP Conf. Ser. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes (San Francisco, CA: ASP), 17
Alabarta K., et al., 2020, MNRAS, 497, 3896
Belloni, T., Homan, J., Casella, P., et al. 2005, A&A, 440, 207
Bhowmick, R., Debnath, D., Chatterjee, K., et al., 2021, ApJ, 910, 138
Casella, P., Belloni, T., & Stella, L., 2005, ApJ, 629, 403
Chakrabarti, S. K., & Titarchuk, L. G. 1995, ApJ, 455, 623
Chakrabarti, S. K., 1997, ApJ, 484, 313
Chakrabarti, S. K., Debnath, D., Nandi, A., et al. 2008, A&A, 489, L41
Chakrabarti, S. K. 2013, in Proc. Conf. Ser., Vol. 8, Astron. Soc. of India, ed. S. Das (Assam, India), 1
Chakrabarti, S. K., Mondal, S., & Debnath, D., 2015, MNRAS, 452, 3451
Chakrabarti, S. K., 2018, in Ruffini R., Jantzen R., Bianchi M., eds, Proc. 14th Marcel Grossman meeting. Study of Accretion processes Around Black Holes becomes Science: Tell Tale Observational Signatures of Two Component Advective Flows. World Scientific Press, Singapore
Chakrabarti, S. K., Debnath, D., & Nagarkoti, S. 2019, AdSpR, 63, 3749
Chatterjee, D., Debnath, D., Chakrabarti, S. K., Mondal, S., Jana, A., 2016, ApJ, 827, 88
Chatterjee, D., Debnath, D., Jana, A., Chakrabarti, S. K., 2019, AP&SS, 364, 14
Chatterjee, K., Debnath, D., et al. 2021, Ap&SS, 366, 63
Chatterjee, K., Debnath, D., Bhowmick, R, Nath, S. K., & Chatterjee, D., 2022, MNRAS, 510, 1128
Chen, W., Shrader, C. R., & Livio, M. 1997, ApJ, 491, 312
Debnath, D., Chakrabarti, S. K., Nandi, A., Mandal, S., 2008, Bull. Astron. Soc. India, 36, 151
Debnath, D., Chakrabarti, S. K., & Nandi, A. 2010, A&A, 520, 98
Debnath, D., Mondal, S., & Chakrabarti, S. K. 2014, MNRAS, 440, L121
Debnath, D., Mondal, S., & Chakrabarti, S. K. 2015, MNRAS, 447, 1984
Debnath, D., Jana, A., Chakrabarti, S. K., & Chatterjee, D. 2017, ApJ, 850, 92
Debnath, D., Chatterjee, K., Chatterjee, D., et al. 2021, MNRAS, 504, 4242
Draghis, P. A., Miller, J. M., Cackett, E. M., et al. 2020, ApJ, 900, 78
Dubus, G., Hameury, J.-M., & Lasota, J.-P. 2001, A&A, 373, 251
Ebisawa, K., Titarchuk, L. G., & Chakrabarti, S. K. 1996, PASJ, 48, 59
Fender, R. P., Belloni, T., Gallo, E. 2004, MNRAS, 355, 1105
Gendreau, K. C., Arzoumanian, Z., & Okajima, T. 2012, Proc. SPIE, 8443, 844313
Hameury, J.-M., Lasota, J.-P., Warner, B., 2000, A&A, 353, 244
Hameury, J. M., 2020, Advances in Space Research, 66, 1004
Homan, J., Belloni, T., 2005, AP&SS, 300, 107
Homan, J., Fridriksson, J. K., Jonker, P. G., et al. 2013, ApJ, 775, 9
Jana, A., Debnath, D., Chakrabarti, S. K., Mondal, S., Molla, A. A., 2016, ApJ, 819, 107
Jana, A., Debnath, D., Chatterjee, D., et al. 2020a, RAA, 20, 28
Jana, A., Debnath, D., Chatterjee, D., et al. 2020b, ApJ, 897, 3
Kocevski, D., Ryde, F., & Liang, E., 2003, ApJ, 596, 389
Kuulkers, E., van der Klis, M., Oosterbroek, T., Asai, K., Dotani, T., van Paradijs, J., Lewin, W. H. G., 1994, A&A, 289, 795
Lasota, J. P. 2001, NewAR, 45, 449
Liu, H.-X., Huang, Y., Xiao, G.-C., et al. 2021, RAA, 21, 070
Matsuoka, M., Kawasaki, K., Ueno, S., et al., 2009, PASJ, 61, 999
McClintock J. E., Remillard R. A., 2006, in Lewin W., van der Klis M., eds, Cambridge, Astrophysical Series 39: Compact Stellar X-ray Sources. Cambridge Univ. Press, Cambridge, p. 157
Miller, J. M., et al., 2018, ApJ, 860, L28
Molla, A. A., Chakrabarti, S. K., Debnath, D., Mondal, S., 2017, ApJ, 834, 88
Mondal, S., Chakrabarti, S. K., & Debnath, D., 2015, ApJ, 798, 57
Mondal, S., Chakrabarti, S. K., Debnath, D., 2016, Ap&SS, 361, 309
Nath, S. K., Debnath, D., Chatterjee, K., Jana, A., Chatterjee, D., & Bhowmick, R., 2023, AdSpR, 71(1), 1045
Negoro, H., Nakajima, M., Sugita, S., et al. 2019, ATel, 12968, 1
Novikov, I. D., & Thorne, K. S. 1973, in Black Holes (Les astres occlus), ed. C. DeWitt & B. DeWitt (New York: Gordon and Breach), 343
Parmar, A. N., & White, N. E. 1985, IAUC, 4051, 1
Parmar, A. N., Angelini, L., Roche, P., & White, N. E. 1993, A&A, 279, 179
Prabhakar, G., Mandal, S., Bhuvana, G. R., Nandi, A., 2023, MNRAS, 520, 4889
Remillard, R. A., Loewenstein, M., Steiner, J. F., et al. 2022, AJ, 163, 130
Ren, X. Q., Wang, Y., Zhang, S. N., et al. 2022, ApJ, 932, 66
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Strohmayer T. E., Nicer Observatory Science Working Group 2020, in American Astronomical Society Meeting Abstracts 235. p. 159.02
Sunyaev, R. A., & Titarchuk, L. G. 1980, ApJ, 86, 121
Tanaka, Y., & Shibazaki, N. 1996, ARA&A, 34, 607
Tetarenko, B. E., Sivakoff, G. R., Heinke, C. O., Gladstone, J. C., 2016, ApJS, 222, 15
Verner, D. A., Ferland, G. J., Korista, K. T., & Yakovlev, D. G. 1996, ApJ, 465, 487
Wang, Y., Ji, L., García, J. A., et al. 2021, ApJ, 906, 11
Williams, D. R. A., Motta, S. E., Fender, R., Miller-Jones, J. C. A., et al. 2022, MNRAS, 517, 2801
Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914
Yan, Z., & Yu, W. 2017, MNRAS, 470, 4298
Zhang, G.-B., et al. 2019, ApJ, 876, 5
|
http://arxiv.org/abs/2307.04684v2 | 20230710163746 | FreeDrag: Point Tracking is Not What You Need for Interactive Point-based Image Editing | [
"Pengyang Ling",
"Lin Chen",
"Pan Zhang",
"Huaian Chen",
"Yi Jin"
] | cs.CV | [
"cs.CV",
"cs.HC",
"cs.LG"
] |
[
FreeDrag: Point Tracking is Not What You Need for Interactive Point-based
Image Editing
Pengyang Ling1*
Lin Chen1,2*
Pan Zhang2
Huaian Chen1†
Yi Jin1†
1University of Science and Technology of China
2Shanghai AI Laboratory
{lpyang27, chlin}@mail.ustc.edu.cn [email protected] {anchen, jinyi08}@ustc.edu.cn
August 12, 2023
==============================================================================================================================================================================================================================================================
Figure: The comparison between DragGAN <cit.> and our proposed FreeDrag. Given an image input, users can assign handle points (red points) and target points (blue points) to force the semantic positions of the handle points to reach corresponding target points. The examples presented on the left and right columns show the cases without/with masks specifying the editable region (brighter area), respectively. Code will be available on https://github.com/LPengYang/FreeDrag.
]
*Equal Contribution † Corresponding Author
To serve the intricate and varied demands of image editing, precise and flexible manipulation of image content is indispensable. Recently, DragGAN <cit.> has achieved impressive editing results through point-based manipulation.
However, we have observed that DragGAN struggles with miss tracking, where DragGAN encounters difficulty in effectively tracking the desired handle points, and ambiguous tracking, where the tracked points are situated within other regions that bear resemblance to the handle points. To deal with the above issues, we propose FreeDrag, which adopts a feature-oriented approach to free the burden on point tracking within the point-oriented methodology of DragGAN. The FreeDrag incorporates adaptive template features, line search, and fuzzy localization techniques to perform stable and efficient point-based image editing. Extensive experiments demonstrate that our method is superior to the DragGAN and enables stable point-based editing in challenging scenarios with similar structures, fine details, or under multi-point targets.
§ INTRODUCTION
The domain of image editing utilizing generative models has gained substantial attention and witnessed remarkable advancements in recent years <cit.>. In order to effectively address the intricate and diverse demands of image editing in real-world applications, it becomes imperative to achieve precise and flexible manipulation of image content. Consequently, researchers have proposed two primary categories of methodologies in this domain: (1) harnessing prior 3D models <cit.> or manual annotations <cit.> to enhance control over generative models, and (2) employing textual guidance for conditional generative models <cit.>. Nevertheless, the former category of methodologies often faces challenges in generalizing to novel assets, while the latter category exhibits limitations in terms of precision and flexibility when it comes to spatial attribute editing.
To overcome these aforementioned limitations, a recent pioneering study, known as DragGAN <cit.>, has emerged as a remarkable contribution in the realm of precise image editing. This work has garnered significant attention, primarily due to its interactive point-based editing capability, termed "drag" editing. The DragGAN framework addresses the challenge by introducing a two-step iterative process: (1) a motion supervision step, which directs the handle points to migrate towards their corresponding target positions, and (2) a point tracking step, which consistently tracks the relocated handle points' positions. This innovative approach enables users to exert precise control over the editing process by specifying pairs of handle and target points on the given image.
Notwithstanding the praiseworthy achievements exhibited by DragGAN, there exist several issues, as shown in Figure <ref>. One issue is miss tracking, whereby DragGAN encounters difficulty in effectively tracking the desired handle points. This issue arises particularly in highly curved regions with a large perceptual path length, as observed in latent space <cit.>. In such cases, the optimized image undergoes drastic changes, leading to handle points in subsequent iterations being positioned outside the intended search region. Additionally, in certain scenarios, achieving satisfactory outputs necessitates the disappearance of handle points, as shown in Figure <ref>. It is important to note that during miss tracking, the cumulative error in the motion supervision step increases progressively as iterations proceed, owing to the misalignment of tracked features. Another issue that arises is ambiguous tracking, where the tracked points are situated within other regions that bear resemblance to the handle points. This predicament emerges when there are areas in the image that possess similar features to the intended handle points, leading to ambiguity in the tracking process (see Figure <ref>). This issue introduces a potential challenge as it can misguide the motion supervision process in subsequent iterations, leading to inaccurate or misleading directions.
To remedy the aforementioned issues, we propose a solution called FreeDrag, which adopts a feature-oriented approach to free the burden on point tracking within the point-oriented methodology of DragGAN. To address the miss tracking issue, we introduce a methodology where a template feature is maintained for each handle point to supervise the movements during the iterative process. This template feature is implemented as an exponential moving average feature that dynamically adjusts its weights based on the errors encountered in each iteration. By utilizing this adaptive and stable template feature, we ensure reliable point-based editing. Even when miss tracking occurs in a specific iteration, the maintained template feature remains intact, preventing the optimized image from undergoing drastic changes. To handle the ambiguous tracking issue, we propose line search and fuzzy localization. Line search restricts the movements along a specific line connecting the original handle point and the corresponding target point. This constraint effectively reduces the presence of ambiguous points and minimizes the potential misguidance of the movement direction in subsequent iterations. On the other hand, fuzzy localization alleviates the burden of precise localization, thereby enhancing the optimization process.
To summarize, our key contributions are as follows:
* We have observed that the original DragGAN approach encounters challenges in effectively addressing miss tracking and ambiguous tracking scenarios.
* We propose FreeDrag, a simple but effective interactive point-based image editing framework that incorporates adaptive template features, line search, and fuzzy localization techniques to free the burden on point tracking.
* Extensive experiments demonstrate the superiority and stability of FreeDrag in point-based image editing, marking a significant advancement in the field of flexible and precise image editing.
§ FORMULATION
Considering a latent code z drawn from the latent space 𝒵, the methodology employed by StyleGAN <cit.> involves mapping this code into the 𝒲 space utilizing a mapping network. The resulting intermediate latent code w is subsequently utilized by the synthetic network to generate the corresponding image I. The objective of this paper is to realize point-based image editing on I by optimizing the associated latent code w. Inspired by the previous work <cit.>, our approach exploits optimization techniques within the extended 𝒲^+ space. This choice is motivated by the heightened expressive potential offered by the 𝒲^+ space for conducting image editing tasks. To facilitate point-based manipulation, our framework incorporates a collection of handle points p_i along with their corresponding target points t_i, which are provided by users. Point-based editing, in this context, involves the transfer of semantic features from the handle points p_i to the target points t_i, effectively allowing users to visually "drag" these features to desired locations.
§ ANALYSIS OF DRAGGAN
The DragGAN method achieves point-based image editing through an iterative process consisting of the following two steps:
(1) Motion supervision: The method enforces the correspondence between the current handle point and its corresponding target point by encouraging F(q_i + d_i) to approach F(q_i). Here, q_i represents the neighboring pixels of the handle point p_i within a radius r_1, defined as Ω _1(p_i, r_1). The vector d_i is a normalized vector pointing from p_i to t_i, where t_i is the target point. The feature values F(q_i) at pixel q_i are derived using bi-linear interpolation.
(2) Point tracking: The location of the moved handle point is updated using point tracking. This is achieved by performing the nearest search in the neighborhood of the handle point, i.e., p_i:= min || F^'(q_i) -f_i ||_1. Here, q_i belongs to the neighborhood defined by Ω _2(p_i, r_2) with a radius r_2. The feature f_i represents the initial handle point's feature on the original feature map F_0, and F^'(·) denotes the features obtained from the resized feature map of StyleGAN2 <cit.>.
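For concreteness, a minimal sketch of the nearest-feature search in the point tracking step might look as follows; it assumes the feature map has already been resized to the tracking resolution, and the integer-pixel scan is a simplification of the bilinear search.

import torch

def track_point(feature_map, p, f_i, r2=3):
    """Nearest-neighbour point tracking around the handle point p = (row, col).

    feature_map: (C, H, W) features of the current image.
    f_i: (C,) feature of the original handle point on the initial feature map F_0.
    Returns the pixel in the (2*r2+1)^2 neighbourhood whose feature is closest
    to f_i in the L1 sense.
    """
    C, H, W = feature_map.shape
    best, best_q = None, p
    for dr in range(-r2, r2 + 1):
        for dc in range(-r2, r2 + 1):
            r, c = p[0] + dr, p[1] + dc
            if 0 <= r < H and 0 <= c < W:
                dist = (feature_map[:, r, c] - f_i).abs().sum()
                if best is None or dist < best:
                    best, best_q = dist, (r, c)
    return best_q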
§.§ Instability of point-tracking
While DragGAN offers a promising solution for point-based image editing, our observations reveal that it often experiences challenges such as handle point loss, inaccurate editing, and distorted image generation in certain scenarios. We attribute these issues to the intrinsic instability of the point tracking step, which can be understood from the following two aspects:
∙ Constant Value of f_i: Throughout the entire moving process, the value f_i remains constant and fails to adequately reflect the evolving state of the handle point during its motion.
∙ Implicit Assumption of Unique Points: Point tracking assumes that there is only one point within the searching areas that inherits the feature of the handle point during each motion. However, this assumption is not always reliable. Firstly, the desired point may lie outside the searching areas due to drastic content changes, resulting in incorrect tracking (as shown in Figure <ref>). Secondly, misleading guidance can cause points to disappear, further complicating the tracking process (as seen in Figure <ref>). Additionally, the presence of points with similar features in similar or symmetrical structures (such as lips, eyes, and general contours) makes it challenging to discriminate the desired points from the searching areas, leading to ambiguous tracking (as illustrated in Figure <ref>). Moreover, choosing the radius size r_2 poses an internal conflict. On one hand, a larger r_2 enables searching the handle point from a broader image region, but on the other hand, it increases the likelihood of similar interfering pixels being included.
These factors collectively contribute to the instability observed during the point tracking process in DragGAN for point-based image editing.
§.§ Impact of unstable point tracking
When point tracking fails, the resulting searched handle point is prone to errors. This flaw significantly undermines or disrupts the point-based manipulation from two aspects: (1) Incorrect Movement Direction and Optimization Constraint: Erroneous handle points provide inaccurate movement directions (i.e., d_i) and optimization constraints (i.e., F(q_i)) during motion supervision. As a result, the quality of the final image is compromised, leading to inaccurate or distorted editing results. (2) Lack of Timely Termination Sign: In cases where point tracking fails, the absence of a reliable termination sign hampers the timely completion of the entire manipulation process. This can result in unnecessary time consumption or necessitate additional intervention, causing inconvenience and potential frustration for users.
§ METHODOLOGY
Considering the instability of point tracking, we propose a feature-oriented approach to free the burden of precise point tracking in “drag” editing, termed FreeDrag. Specifically, we introduce the concept of adaptive template features to enable the reliable recording of the handle point's feature during motion, without relying on the precise location of the handle point. By compelling the recorded feature to migrate toward an assigned point, the handle point is potentially encouraged to move to the assigned point. As a result, the handle point can progressively migrate to the corresponding target point by forcing the assigned point to approach the target point step by step.
To identify suitable assigned points for stable feature migration, we propose a fuzzy localization strategy that incorporates a customized point assignment scheme, thereby reducing the reliance on precise location information. Additionally, to alleviate the potential misguidance caused by ambiguous points, we introduce a line search strategy that intentionally confines the assigned points to lie on the line connecting the original handle point and the corresponding target point. We elaborate on the above techniques in subsequent sub-sections.
§.§ Adaptive Template Features
For a given original handle point p_i, we denote the corresponding features of its neighboring points within a radius r as F_r(p_i). By enforcing F_r(t_i^1) to approximate F_r(p_i), we can potentially encourage the handle point to move towards the first assigned location t_i^1. However, an immediate issue is how to obtain the features of the handle point without performing precise point tracking. It is not viable to directly adopt F_r(t_i^1) since there is no assurance that p_i will be precisely moved to t_i^1 within the limited number of steps. Therefore, we introduce the concept of adaptive template features to record the feature values of the handle point based on the quality of motion, i.e.,
F_ema^k = λ·F_r(t_i^k) + (1 - λ ) · F_ema^k-1 ,
where F_ema^0=F_r(p_i), t_i^k is the assigned location for k-th motion (t_i^0 = p_i), and λ is an adaptive coefficient that reflects the quality of motion to some extent.
The purpose of Eq. (<ref>) is to determine the extent to which the recorded feature values F_ema are updated according to the quality of each motion. Intuitively, if the handle point is successfully moved to t_i^k, we expect F_ema^k to inherit the values of F_r(t_i^k). Otherwise, we expect F_ema^k to maintain the values of F_ema^k-1. This selective updating strategy improves the smoothness of F_ema, making it more resilient to significant content distortion and facilitating stable point movement. Denote the values of F_ema^k-1 - F_r(t_i^k)_1 at the first/last optimization step (one sub-motion usually consists of multiple optimization steps) as L_ini^k and L_end^k, respectively, i.e.,
L_ini^k = F_ema^k-1 - F_r^ini(t_i^k)_1,
L_end^k = F_ema^k-1 - F_r^end(t_i^k)_1,
where F_r^ini(t_i^k) and F_r^end(t_i^k) denote the values of F_r(t_i^k) at the first/last optimization step in the k-th sub-motion. For the motion towards t_i^k, we denote the expected value of L_ini^k as l, i.e.,
l = E[ L_ini^k],
where E[ ·] denotes the expectation function.
A larger value of L_ini^k indicates a more difficult motion towards t_i^k, and a smaller value of L_end^k implies a higher quality of motion. Therefore, the adaptive coefficient λ in Eq. <ref> is defined as:
λ = (1 + exp(α· (L_end^k - β )))^ - 1,
where α and β are two positive constants, and exp(·) is the exponential function.
To prevent irreversible deviation during a single sub-motion, we impose a constraint on the maximum value of λ. Considering the following scenarios: (1) the well-optimized case where L_end^k = 0.2 · l, for which we set λ=0.5; and (2) the ill-optimized case where L_end^k = 0.8 · l, for which we set λ=0.1, we can obtain the following equations.
0.5 = (1 + exp(α·(0.2 · l - β )))^ - 1,
0.1 = (1 + exp( α·(0.8 · l - β )))^ - 1.
Thus, α = ln(9)/(0.6 · l) and β = 0.2 · l can be derived from the above equations.
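Combining the template update with the adaptive coefficient, the per-sub-motion bookkeeping can be sketched as below; L_end and l are treated as scalars, and the exact way the patch features F_r are gathered from the feature map is left abstract.

import math

def adaptive_coeff(L_end, l):
    """λ = (1 + exp(α·(L_end − β)))^{-1} with α = ln(9)/(0.6·l) and β = 0.2·l."""
    alpha = math.log(9.0) / (0.6 * l)
    beta = 0.2 * l
    return 1.0 / (1.0 + math.exp(alpha * (L_end - beta)))

def update_template(F_ema_prev, F_r_end, L_end, l):
    """F_ema^k = λ·F_r(t_i^k) + (1 − λ)·F_ema^{k−1}, with λ set by the motion quality."""
    lam = adaptive_coeff(L_end, l)
    return lam * F_r_end + (1.0 - lam) * F_ema_prev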
§.§ Fuzzy Localization via Line Search
Given F_ema^k, which records the features of the handle point after the k-th sub-motion towards t_i^k, the motion supervision in the subsequent (k+1)-th sub-motion towards t_i^k+1 is formulated as follows:
ℒ_motion = F_ema^k - F_r(t_i^k+1)_1.
To find a suitable t_i^k+1 for smooth feature migration in Eq. <ref>, we perform localization based on both the motion distance and feature difference, i.e.,
t_i^k+1 = S(t_i^k,t_i, F_ema^k,d,l),
where S(·) is the localization function, t_i^k+1 is the located position, t_i is the location of the final target point, d controls the maximum distance between t_i^k+1 and the last location t_i^k, i.e., ||t_i^k+1 - t_i^k||_2 ≤ d, and l is the expected value of the feature difference at the beginning of each motion (see Eq. <ref>). To eliminate ambiguous localization caused by similar points, S(·) performs a line search, i.e., the search range is from t_i^k to t_i^k + d·(t_i - t_i^k)/||t_i - t_i^k||_2. In addition, to satisfy Eq. <ref>, the searched t_i^k+1 is forced to have the smallest value of | ||F_r(t_i^k+1) - F_ema^k||_1 - l | among the decile points of the search range.
Furthermore, to deal with the coupled movements under multiple points, we incorporate a fallback mechanism. The entire localization scheme can be expressed as follows:
t_i^k+1 =
S(t_i^k, t_i, F_ema^k, d, l), if L_end^k ≤ 0.5·l;
t_i^k, else if L_end^k ≤ L_ini^k;
S(t_i^k - d·(t_i - t_i^k)/||t_i - t_i^k||_2, t_i, F_ema^k, 2d, 0), otherwise.
In the exceptional case where L_end^k > L_ini^k, we set l=0 to immediately locate the point and ensure the seamless inheritance of the features F_ema^k.
Unlike DragGAN, which relies on precise point tracking for each motion to determine the exact location of the handle point, the localization strategy described in Eq. <ref> is more flexible and fuzzy. It aims to bring the located point close to the handle point by ensuring a limited feature difference with the adaptive template feature F_ema. This approach provides a suitable gradient for each sub-motion and reduces the dependence on precise point tracking. By breaking down the overall movement towards the final location t_i into multiple sub-motions towards customized locations t_i^k, we can control the difficulty of each sub-motion and gradually approach the target location t_i.
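A minimal sketch of this line search over the decile points could read as follows; sample_features stands in for (bilinear) sampling of the patch features F_r at a possibly fractional location and is an assumption about the implementation.

import torch

def line_search(t_k, t_target, F_ema, d, l, sample_features, n_points=10):
    """Pick the point on the segment from t_k towards t_target (length <= d) whose
    patch feature differs from F_ema by an L1 amount closest to the expected value l."""
    t_k = torch.as_tensor(t_k, dtype=torch.float32)
    t_target = torch.as_tensor(t_target, dtype=torch.float32)
    direction = t_target - t_k
    direction = direction / (direction.norm() + 1e-8)
    best_cost, best_point = None, t_k
    for s in range(1, n_points + 1):          # decile points of the search range
        cand = t_k + direction * d * s / n_points
        diff = (sample_features(cand) - F_ema).abs().sum()
        cost = (diff - l).abs()
        if best_cost is None or cost < best_cost:
            best_cost, best_point = cost, cand
    return best_point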
§.§ Termination Signal
For each customized sub-motion in Eq. <ref>, the maximum number of optimization steps is set to 5. To enhance efficiency, for each movement, we pause the optimization process if the value of Eq. <ref> for all handle points falls below 0.5·l. The final termination signal is determined by calculating the remaining distance ||t_i^k - t_i||_2. Furthermore, for each handle point, once its motion terminates, we assign its feature to F_ema to ensure that it remains stationary.
§.§ Directional Editing
Given a binary mask assigned by users, the mask loss is defined as:
ℒ_mask = ||(F_0 - F_r(t_i^k)) ⊙ (1 - M)||_1,
where F_0 denotes the initial feature of the local patch around p_i and ⊙ denotes element-wise multiplication.
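In code, this constraint is simply a masked L1 penalty; F_0, F_r_t, and M are assumed to be tensors of broadcast-compatible shapes.

def mask_loss(F_0, F_r_t, M):
    """L1 penalty that keeps features outside the editable region (M == 0) unchanged."""
    return ((F_0 - F_r_t) * (1.0 - M)).abs().sum()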
§ EXPERIMENTS
We evaluate the proposed FreeDrag on various images generated by StyleGAN2. It is observed that the difficulty of point manipulation varies across different models. Thus, we set different hyperparameters for specific generative models. Generally, for precise editing areas such as the human face, smaller values of d and l are suggested, and vice versa. All optimization processes are performed on the feature map with a resolution of 128×128, and we adopt the Adam <cit.> optimizer with a learning rate of 0.002.
As depicted in Fig. <ref>, the proposed FreeDrag successfully avoids the abnormal disappearance of handle points (e.g., the vanished eyes, glasses, mouth, and vehicle wheel in examples (1)-(4)), while preserving the structural integrity (e.g., avoiding the distortion of the animal legs and the building's roof in examples (5)-(7)), showcasing its superiority in fine-detail editing. Moreover, FreeDrag exhibits robustness in handling similar points and drastic content distortions, resulting in stable and precise point movement, as demonstrated in examples (8)-(10). Additionally, FreeDrag effectively mitigates the potential misguidance during optimization steps, leading to more natural and coherent editing results, as observed in examples (11)-(12) in Fig. <ref>.
§ DISCUSSION
Although our method achieves remarkable image editing results, one may argue that the current pipeline, based on generative adversarial networks, may still be inevitably limited by the GAN's capacity. Fortunately, our proposed FreeDrag framework is not limited to GAN-based methods. In theory, inspired by <cit.>, if we replace the generative model with a diffusion model <cit.> and optimize the diffusion latent instead of the latent code, we can still efficiently and robustly perform interactive point-based image editing using the proposed adaptive template feature mechanism and the fuzzy localization technique with line search. We will explore these possibilities in future versions.
§ CONCLUSION
In this study, we propose FreeDrag, an interactive point-based image editing framework that eliminates the need for unstable point tracking. By incorporating fuzzy localization equipped with line search, FreeDrag decomposes the total movement into numerous sub-motions customized in terms of both movement distance and degree of feature variation, facilitating more stable point movement. Meanwhile, the concept of adaptive template features is introduced to selectively update the values of recorded features, which provides better immunity against point loss. Extensive experiments demonstrate the superiority and stability of FreeDrag in dealing with drastic content changes and similar structures, marking a significant advancement in the field of flexible and precise image editing.
ieee_fullname
|
http://arxiv.org/abs/2307.10829v2 | 20230710121818 | Exact Diffusion Inversion via Bi-directional Integration Approximation | [
"Guoqiang Zhang",
"J. P. Lewis",
"W. Bastiaan Kleijn"
] | cs.CV | [
"cs.CV"
] |
[
Exact Diffusion Inversion via Bi-directional Integration Approximation
Guoqiang Zhang, J. P. Lewis, W. Bastiaan Kleijn
August 12, 2023
===================
Recently, different methods have been proposed to address the inconsistency issue of DDIM inversion to enable image editing, such as EDICT <cit.> and Null-text inversion <cit.>. However, the above methods introduce considerable computational overhead. In this paper, we propose a new
technique, named bi-directional integration approximation (BDIA), to perform exact diffusion inversion with negligible computational overhead. Suppose we would like to estimate the next diffusion state z_i-1 at timestep t_i with the historical information (i,z_i) and (i+1,z_i+1). We first obtain the estimated Gaussian noise ϵ̂(z_i,i), and then apply the DDIM update procedure twice for approximating the ODE integration over the next time-slot [t_i, t_i-1] in the forward manner and the previous time-slot [t_i, t_i+1] in the backward manner. The DDIM step for the previous time-slot is used to refine the integration approximation made earlier when computing z_i. One nice property of BDIA-DDIM is that the update expression for z_i-1 is a linear combination of (z_i+1, z_i, ϵ̂(z_i,i)). This allows for exact backward computation of z_i+1 given (z_i, z_i-1), thus leading to exact diffusion inversion.
Interestingly, the update expression for z_i-1 is in fact time-symmetric in that switching the timestep t_i-1 and t_i+1 produces the inverse update expression for z_i+1 in terms of (z_i,z_i-1).
Experiments on both image reconstruction and image editing were conducted, confirming our statement.
BDIA can also be applied to improve the performance of other ODE solvers in addition to DDIM. In our work, it is found that applying BDIA to the EDM sampling procedure produces a slightly better FID score on CIFAR10.
§ INTRODUCTION
As one type of generative models, diffusion probabilistic models (DPMs) have made significant progress in recent years. The pioneering work <cit.> applied non-equilibrium statistical physics to estimating probabilistic data distributions. In doing so, a Markov forward diffusion process is constructed by systematically inserting additive noise into a data sample until the data distribution is almost destroyed. The data distribution is then gradually restored by a reverse diffusion process starting from a simple parametric distribution. The main advantage of DPM over classic tractable models (e.g., HMMs, GMMs, see <cit.>) is that DPM can accurately model both the high and low likelihood regions of the data distribution by estimating a sequence of progressively less noise-perturbed data distributions. In comparison to generative adversarial networks (GANs) <cit.>, DPMs exhibit more stable training dynamics by avoiding adversarial learning, as well as showing better sample diversity.
Following the work of <cit.>, various learning and/or sampling strategies have been proposed to improve the performance of DPMs, which include, for example, denoising diffusion probabilistic models (DDPMs) <cit.>, denoising diffusion implicit models (DDIMs) <cit.>, improved DDIMs <cit.>, latent diffusion models (LDMs)<cit.>, score matching with Langevin dynamics (SMLD) <cit.>, analytic-DPMs <cit.>, optimized denoising schedules <cit.>, guided diffusion strategies <cit.>, and classifier-free guided diffusion <cit.>. It is worth noting that DDIM can be interpreted as a first-order ODE solver. As an extension of DDIM, various high-order ODE solvers have been proposed, such as EDM <cit.>, DEIS <cit.>, PNDM <cit.>, DPM-Solvers <cit.>, and IIA-EDM and IIA-DDIM <cit.>.
In recent years, image editing via diffusion models has attracted increasing attention in both academia and industry. One important operation for editing a real image is to first perform a forward process on the image to obtain the final noise representation and then perform a backward process with the desired edits embedded to generate the desired image <cit.>. DDIM inversion has been widely used to perform the above forward and backward processes <cit.>. A major issue with DDIM inversion is that the intermediate diffusion states in the forward and backward processes may be inconsistent due to the inherent approximations (see Subsection <ref>). This issue becomes significant when utilizing the classifier-free guidance technique in text-to-image editing <cit.>. The newly generated images are often perceptually far away from the original ones, which is undesirable for image editing.
Recently, two methods have been proposed to address the inconsistency issue of DDIM inversion. Specifically,
the work of <cit.> proposed a technique named null-text inversion to push the diffusion states of the backward process to be optimally close to those of the forward process via iterative optimization. The null-text inputs to the score neural network are treated as free variables in the optimization procedure.
In <cit.>, the authors proposed the EDICT technique to enforce exact DDIM inversion. Their basic idea is to introduce an auxiliary diffusion state and then perform alternating updates on the primal and auxiliary diffusion states, which is inspired by the flow generative framework <cit.>. One drawback of EDICT is that the number of neural functional evaluations (NFEs) has to be doubled in comparison to DDIM inversion (See Subsection <ref>). Another related line of research work is DDPM inversion (see <cit.>).
In this paper, we propose a new technique to enforce exact DDIM inversion with negligible computational overhead, reducing the number of NFEs required in EDICT by half. Suppose we are in a position to estimate the next diffusion state z_i-1 at timestep t_i by utilizing the two most recent states z_i and z_i+1. With the estimated Gaussian noise ϵ̂(z_i,i), we perform the DDIM update procedure twice for approximating the ODE integration over the next time-slot [t_i, t_i-1] in the forward manner and the previous time-slot [t_i,t_i+1] in the backward manner. The DDIM step for the previous time-slot is employed to refine the integration approximation made earlier when computing z_i. As a result, the expression for z_i-1 becomes a linear combination of (z_i+1, z_i,ϵ̂(z_i,i)), and naturally facilitates exact diffusion inversion. We refer to the above technique as bi-directional integration approximation (BDIA). We emphasize that the obtained update expression for z_i-1 under BDIA-DDIM is time-symmetric in that switching the timesteps t_i-1 and t_i+1 inverts the diffusion directions (see Section <ref> for a discussion on relevant literature). Experiments demonstrate that BDIA-DDIM produces satisfactory results on both image reconstruction and image editing. We have also applied BDIA to EDM, and found that the image quality is also improved slightly.
§ PRELIMINARY
Forward and reverse diffusion processes:
Suppose the data sample x∈ℝ^d follows a data distribution p_data(x) with a bounded variance. A forward diffusion process progressively adds Gaussian noise to the data samples x to obtain z_t as t increases from 0 until T. The conditional distribution of z_t given x can be represented as
q_t|0(z_t|x) = 𝒩(z_t|α_t x, σ_t^2 I), i.e., z_t = α_t x + σ_t ϵ with ϵ∼𝒩(0, I),
where α_t and σ_t are assumed to be differentiable functions of t with bounded derivatives. We use q(z_t; α_t,σ_t) to denote the marginal distribution of z_t. The samples of the distribution q(z_T;α_T,σ_T) should be practically indistinguishable from pure Gaussian noise if σ_T ≫α_T.
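In code, drawing a noisy state at level (α_t, σ_t) is a one-line operation; the tensor framework is an arbitrary choice here.

import torch

def perturb(x, alpha_t, sigma_t):
    """Sample z_t = alpha_t * x + sigma_t * eps with eps ~ N(0, I)."""
    return alpha_t * x + sigma_t * torch.randn_like(x)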
The reverse process of a diffusion model firstly draws a sample z_T from 𝒩(0, σ_T^2I), and then progressively denoises it to obtain a sequence of diffusion states {z_t_i∼ p(z;α_t_i,σ_t_i)}_i=0^N,
where we use the notation p(·) to indicate that the reverse sample distribution might not be identical to the forward distribution q(·) because of practical approximations. It is expected that the final sample z_t_0 is roughly distributed according to p_data(x), i.e., p_data(x)≈ p(z_t_0;α_t_0,σ_t_0), where t_0=0.
ODE formulation: In <cit.>, Song et al. present a so-called probability flow ODE which shares the same marginal distributions as z_t in (<ref>). Specifically, with the formulation (<ref>) for a forward diffusion process, its reverse ODE form can be represented as
dz = [f(t)z_t - (1/2)g^2(t)∇_zlog q(z_t; α_t,σ_t)]dt = d(z_t, t)dt,
where d(z_t,t) denotes the gradient vector at time t, and the two functions f(t) and g(t) are represented in terms of (α_t, σ_t) as
f(t) = dlogα_t/dt, g^2(t)=dσ_t^2/dt-2dlogα_t/dtσ_t^2.
∇_zlog q(z;α_t,σ_t) in (<ref>) is the score function <cit.> pointing towards higher density of data samples at the given noise level (α_t,σ_t). One nice property of the score function is that it does not depend on the generally intractable normalization constant of the underlying density function q(z;α_t,σ_t).
As t decreases, the probability flow ODE (<ref>) continuously reduces the noise level of the data samples in the reverse process. In the ideal scenario where no approximations are introduced in (<ref>), the sample distribution p(z;α_t,σ_t) approaches p_data(x) as t goes from T to 0. As a result, the sampling process of a diffusion model boils down to solving the ODE form (<ref>), where randomness is only introduced in the initial sample at time T. This has opened up the research opportunity of exploiting different ODE solvers in diffusion-based sampling processes.
Denoising score matching: To be able to utilize (<ref>) for sampling, one needs to specify a particular form of the score function ∇_zlog q(z;α_t,σ_t). One common approach is to train a noise estimator ϵ̂_θ by minimizing the expected L_2 error for samples drawn from q_data (see <cit.>):
𝔼_x∼ p_data𝔼_ϵ∼𝒩(0, σ_t^2I) ||ϵ̂_θ(α_t x+σ_tϵ,t)-ϵ||_2^2,
where (α_t, σ_t) are from the forward process (<ref>). The common practice in diffusion models is to utilize a neural network of U-Net architecture <cit.> to represent the noise estimator ϵ̂_θ. With (<ref>), the score function can then be represented in terms of
ϵ̂_θ(z_t; t) as (see also (229) of <cit.>)
∇_zlog q(z_t;α_t,σ_t) =-(z_t-α_t x)/σ_t^2 = -ϵ̂_θ(z_t; t)/σ_t.
Alternatively, the score function can be represented in terms of an estimator for x (see <cit.>). The functional form for the noise level (α_t,σ_t) also plays an important role in the sampling quality in practice. For example, the setup (α_t,σ_t)=(1,√(t)) was studied in <cit.>, which corresponds to constant-speed heat diffusion. The recent work <cit.> found that a simple form of (α_t,σ_t)=(1,t) works well in practice.
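A hedged sketch of one training step for an objective of this kind: draw a batch, a timestep index, and Gaussian noise, perturb the data as in the forward process, and regress the noise. The unit-variance noise parameterization, the uniform timestep sampling, and the discrete (α, σ) schedule are assumptions and may differ from the exact convention above.

import torch

def dsm_loss(eps_model, x, alphas, sigmas):
    """Noise-regression loss for a batch x of shape (B, C, H, W).

    alphas and sigmas are assumed to be 1-D tensors indexed by discrete timestep.
    """
    B = x.shape[0]
    t = torch.randint(0, len(alphas), (B,), device=x.device)
    a = alphas[t].view(B, 1, 1, 1)
    s = sigmas[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x)
    pred = eps_model(a * x + s * eps, t)
    return ((pred - eps) ** 2).mean()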
§ BI-DIRECTIONAL INTEGRATION APPROXIMATION (BDIA) FOR DDIM
In this section, we first review DDIM inversion and EDICT as an extension of DDIM inversion. We then
present our BDIA technique to enable exact diffusion inversion.
§.§ Review of DDIM inversion
We first consider the update expression of DDIM for sampling, which is in fact a first-order solver for the ODE formulation (<ref>)-(<ref>) (see <cit.>), given by
z_i-1= α_i-1(z_i -σ_iϵ̂_θ(z_i, i) /α_i)+σ_i-1ϵ̂_θ(z_i, i)
= a_i z_i +b_iϵ̂_θ(z_i, i)
≈ z_i+∫_t_i^t_i-1d(z_τ,τ)dτ,
where a_i=α_i-1/α_i and b_i=σ_i-1-σ_iα_i-1/α_i. It is clear from (<ref>)-(<ref>) that the integration ∫_t_i^t_i-1d(z_τ,τ)dτ is approximated by the forward DDIM update. That is, only the diffusion state z_i at the starting timestep t_i is used in the integration approximation.
To perform DDIM inversion, z_i can be approximated in terms of z_i-1 as
z_i =α_i(z_i-1-σ_i-1ϵ̂_θ(z_i,i)/α_i-1)+σ_iϵ̂_θ(z_i,i)
≈α_i(z_i-1-σ_i-1ϵ̂_θ(z_i-1,i)/α_i-1)+σ_iϵ̂_θ(z_i-1,i),
where z_i in the RHS of (<ref>) is replaced with z_i-1 to facilitate explicit computation. This naturally introduces approximation errors, leading to inconsistency of the diffusion states between the forward and backward processes.
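To make this asymmetry explicit, the DDIM step and its conventional (approximate) inversion can be sketched as below; alphas and sigmas are assumed to be arrays indexed by the discrete timestep, and eps_model is the trained noise estimator.

def ddim_step(z_i, i, eps_model, alphas, sigmas):
    """Forward DDIM update: z_{i-1} = a_i z_i + b_i eps(z_i, i)."""
    eps = eps_model(z_i, i)
    a_i = alphas[i - 1] / alphas[i]
    b_i = sigmas[i - 1] - sigmas[i] * a_i
    return a_i * z_i + b_i * eps

def ddim_invert_step(z_im1, i, eps_model, alphas, sigmas):
    """Conventional DDIM inversion: the noise is evaluated at z_{i-1} instead of z_i,
    which is the source of the forward/backward inconsistency."""
    eps = eps_model(z_im1, i)
    a_i = alphas[i - 1] / alphas[i]
    b_i = sigmas[i - 1] - sigmas[i] * a_i
    return (z_im1 - b_i * eps) / a_i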
§.§ Review of EDICT for exact diffusion inversion
Inspired by the flow generative framework <cit.>, the recent work <cit.> proposed EDICT to enforce exact diffusion inversion. The basic idea is to introduce an auxiliary diffusion state y_i to be coupled with z_i at every timestep i. The next pair of diffusion states (z_i-1, y_i-1) is then computed in an alternating fashion as
z_i^inter = a_iz_i + b_iϵ_θ(y_i,i)
y_i^inter = a_iy_i + b_iϵ_θ(z_i^inter,i)
z_i-1 = pz_i^inter+(1-p)y_i^inter
y_i-1 = py_i^inter+(1-p)z_i-1,
where p∈ [0,1] is the weighting factor in the mixing operations and the pair (z_i^inter, y_i^inter) represents the intermediate diffusion states. According to <cit.>, the two mixing operations (<ref>)-(<ref>) are introduced to make the update procedure stable.
Due to the alternating update formalism in (<ref>)-(<ref>),
the computation can be inverted to obtain (z_i, y_i) in terms of (z_i-1, y_i-1) as
y_i^inter = (y_i-1-(1-p)z_i-1)/p
z_i^inter = (z_i-1-(1-p)y_i^inter)/p
y_i = (y_i^inter - b_iϵ_θ(z_i^inter,i))/a_i
z_i = (z_i^inter - b_iϵ_θ(y_i,i))/a_i
Unlike (<ref>)-(<ref>), the inversion of (<ref>)-(<ref>) does not involve any approximation, thus enabling exact diffusion inversion.
Finally, it is clear from the above equations that the NFE that EDICT has to perform is two times the NFE required for DDIM. This makes the method computationally expensive in practice. It is highly desirable to reduce the NFE in EDICT while retaining exact diffusion inversion. We provide such a method in the next subsection.
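For reference, one EDICT denoising step and its exact inverse can be sketched as follows, mirroring the alternating updates above; a and b hold the per-step DDIM coefficients, p is the mixing weight, and each step indeed requires two evaluations of the noise network.

def edict_step(z, y, i, eps_model, a, b, p):
    """One EDICT denoising step on the coupled states (z_i, y_i)."""
    z_inter = a[i] * z + b[i] * eps_model(y, i)
    y_inter = a[i] * y + b[i] * eps_model(z_inter, i)
    z_next = p * z_inter + (1 - p) * y_inter
    y_next = p * y_inter + (1 - p) * z_next
    return z_next, y_next

def edict_invert_step(z_next, y_next, i, eps_model, a, b, p):
    """Exact inverse of edict_step: recover (z_i, y_i) from (z_{i-1}, y_{i-1})."""
    y_inter = (y_next - (1 - p) * z_next) / p
    z_inter = (z_next - (1 - p) * y_inter) / p
    y = (y_inter - b[i] * eps_model(z_inter, i)) / a[i]
    z = (z_inter - b[i] * eps_model(y, i)) / a[i]
    return z, y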
§.§ BDIA-DDIM for exact diffusion inversion
Reformulation of DDIM update expression: In this section, we present our new technique BDIA to assist DDIM in achieving exact diffusion inversion. To do so, we first reformulate the update expression for z_i-1 in (<ref>) in terms of all the historical diffusion states {z_j}_j=N^i as
z_i-1 =z_N+∑_j=N^iΔ(t_j→ t_j-1|z_j)
≈z_N+∑_j=N^i∫_t_j^t_j-1d(z_τ, τ)dτ ,
where we use Δ(t_j→ t_j-1|z_j) to denote approximation of the integration ∫_t_j^t_j-1d(z_τ,τ)dτ via the forward DDIM step, given by
Δ(t_j→ t_j-1|z_j) =z_j-1 - z_j
=a_jz_j + b_jϵ̂_θ(z_j,j)-z_j.
Replacing forward DDIM by backward DDIM: We argue that, in principle, the integration ∫_t_j^t_j-1d(z_τ,τ)dτ in (<ref>) can be alternatively approximated by the backward DDIM update, expressed as
∫_t_j^t_j-1d(z_τ,τ)dτ≈ - Δ(t_j-1→ t_j|z_j-1),
where the notation Δ(t_j-1→ t_j|z_j-1) denotes the backward DDIM step from t_j-1 to t_j. The minus sign in front of Δ(t_j-1→ t_j|z_j-1) is due to integration over reverse time. The update expression for the backward DDIM step can be represented as
Δ(t_j-1→ t_j|z_j-1) =z_j - z_j-1
=α_j(z_j-1-σ_j-1ϵ̂_θ(z_j-1, j-1) /α_j-1)+σ_jϵ̂_θ(z_j-1, j-1) -z_j-1
=z_j-1/a_j - b_j/a_jϵ̂_θ(z_j-1,j-1) - z_j-1.
It is noted that in practice, we first need to perform a forward DDIM step over [t_j,t_j-1] to obtain z_j-1, and then we are able to perform the backward DDIM step computing Δ(t_j-1→ t_j|z_j-1).
Bi-directional integration approximation (BDIA):
We now present our new BDIA technique. Our primary goal is to develop an update expression for each z_i-1 as a linear combination of (z_i+1, z_i,ϵ̂_θ(z_i,i)). As will be explained in the following, the summation of the integrations ∑_j=N^i∫_t_j^t_j-1d(z_τ,τ)dτ for z_i-1 will involve both forward DDIM updates and backward DDIM updates.
Suppose we are at the initial time step t_N with state z_N. Then the next state z_N-1 is computed by applying the forward DDIM (see (<ref>)):
z_N-1 = a_Nz_N +b_Nϵ̂_θ(z_N, N)
=z_N + Δ(t_N→ t_N-1|z_N).
Upon obtaining z_N-1, we are able to compute Δ(t_N-1→ t_N|z_N-1) over the previous time-slot [t_N-1, t_N] and Δ(t_N-1→ t_N-2|z_N-1) over the next time-slot [t_N-1, t_N-2]. Consequently, the integration ∫_t_N^t_N-1d(z_τ,τ)dτ can be approximated by -Δ(t_N-1→ t_N|z_N-1). We define the update for z_i-1 for i≤ N-1 as below:
When i≤ N-1, let the diffusion state z_i-1 be computed in terms of (z_i, z_i+1) as
z_i-1 = z_i+1 + [a_iz_i+ b_iϵ̂_θ(z_i, i)]-(z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i))
=z_i+1-Δ(t_i→ t_i+1|z_i) + Δ(t_i→ t_i-1|z_i).
We can conclude from (<ref>) that in the computation of each z_i-1, the integration for the most recent time-slot [t_i, t_i-1] is approximated by a forward DDIM update, and the integration for the second most recent time-slot [t_i+1, t_i] is approximated by a backward DDIM update. Fig. <ref> demonstrates how the entire integration ∫_t_N^t_i-1d(z_τ,τ)dτ for different z_i-1 is approximated. It can be seen from the figure that the directions of the integration approximation for neighbouring time-slots are always opposite. In other words, the forward and backward DDIM updates are interlaced over the set of time-slots {(t_j, t_j-1)}_j=N^i for each z_i-1. We summarize the results in a proposition below:
Let z_N-1 and {z_i| i≤ N-2} be computed by following (<ref>) and (<ref>) sequentially. Then for each timestep i≤ N-2, z_i can be represented in the form of
z_i = z_N + Δ(t_N→ t_N-1|z_N)·mod(N-i, 2)
+ ∑_j=i+1^N-1(-Δ(t_j→ t_j+1|z_j)+Δ(t_j→ t_j-1|z_j))·mod(j-i,2).
BDIA-DDIM inversion: Whereas the conventional DDIM inversion (<ref>) requires the approximation z_i-1≈z_i, which is only true in the limit of infinite steps,
the formulation (<ref>) allows exact inversion (up to floating point error). Note that (<ref>) is symmetric in time: switching the timestep t_i+1 and t_i-1 in (<ref>) inverts the diffusion direction. That is, it follows from (<ref>) that the diffusion state z_i+1 can be computed in terms of (z_i, z_i-1) as
z_i+1 = z_i-1 + Δ(t_i→ t_i+1|z_i) - Δ(t_i→ t_i-1|z_i)
=
z_i-1 - [a_iz_i+ b_iϵ̂_θ(z_i, i)]+ (z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)).
We summarize the above property of time-symmetry in a lemma below:
Switching the timestep t_i-1 and t_i+1 in (<ref>) produces the reverse update (<ref>), and vice versa.
Finally, similarly to the computation (<ref>), EDICT also does not involve any approximation and results in exact diffusion inversion.
However, in contrast to EDICT, (<ref>) does not require a doubling of the NFE.
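In code, the whole scheme needs only one network evaluation per step; as before, a[i] = α_i-1/α_i and b[i] = σ_i-1 - σ_i·a[i], and the array-based indexing is an implementation assumption.

def bdia_step(z_ip1, z_i, i, eps_model, a, b):
    """BDIA-DDIM update: compute z_{i-1} from (z_{i+1}, z_i) with one noise evaluation."""
    eps = eps_model(z_i, i)
    ddim_next = a[i] * z_i + b[i] * eps                       # forward DDIM over [t_i, t_{i-1}]
    ddim_prev = z_i / a[i + 1] - (b[i + 1] / a[i + 1]) * eps  # backward DDIM over [t_i, t_{i+1}]
    return z_ip1 + ddim_next - ddim_prev

def bdia_invert_step(z_im1, z_i, i, eps_model, a, b):
    """Exact inversion: recover z_{i+1} from (z_i, z_{i-1}) by switching the two time-slots."""
    eps = eps_model(z_i, i)
    ddim_next = a[i] * z_i + b[i] * eps
    ddim_prev = z_i / a[i + 1] - (b[i + 1] / a[i + 1]) * eps
    return z_im1 - ddim_next + ddim_prev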
§ RELATED WORKS
In the literature, there is a branch of research on the development of time-reversible ODE solvers. For instance, Verlet integration is a time-reversible method for solving 2nd-order ODEs <cit.>. Leapfrog integration is another time-reversible method, also developed for solving 2nd-order ODEs <cit.>.
§ EXPERIMENTS
We conducted two types of experiments: (1) evaluation of image sampling for both BDIA-DDIM and BDIA-EDM; (2) image-editing via BDIA-DDIM. It was found that our new technique BDIA produces promising results for both tasks.
§.§ Evaluation of image sampling
In the first experiment, we consider the task of image sampling. The tested pre-trained models can be found in Appendix <ref>. Given a pre-trained model, 50K artificial images were generated for a particular NFE, and the corresponding FID score was computed.
Tables <ref> and <ref> summarize the computed FID scores. It is clear that incorporating BDIA into both DDIM and EDM improves the FID scores. This can be explained by the fact that BDIA introduces an additional backward integration approximation per time step in the sampling process, which makes the resulting final integration approximation more accurate.
§.§ Evaluation of image-editing
In this second experiment, we evaluated BDIA-DDIM for image editing by utilizing the open-source repository of EDICT[<https://github.com/salesforce/EDICT>]. Fig. <ref> visualizes the obtained results. We point out that BDIA-DDIM produces very similar results to EDICT while requiring approximately half the NFE of EDICT.
§ CONCLUSIONS
In this paper, we have proposed a new technique, BDIA, to assist DDIM in achieving exact diffusion inversion. The key step of BDIA-DDIM is to perform the DDIM update procedure twice at each time step t_i: once over the previous time-slot [t_i, t_i+1] and once over the next time-slot [t_i, t_i-1] when computing z_i-1. By doing so, the expression for z_i-1 becomes a linear combination of (z_i, ϵ̂_θ(z_i,i), z_i+1) that is symmetric in time. As a result, z_i+1 can be computed exactly as a linear function of (z_i, ϵ̂_θ(z_i,i), z_i-1), enabling exact diffusion inversion. Note that although the DDIM update is evaluated twice at each step, this is inexpensive since the costly neural functional evaluation is performed only once.
10
Arjovsky17WGAN
M. Arjovsky, S. Chintala, and L. Bottou.
Wasserstein GAN.
arXiv:1701.07875 [stat.ML], 2017.
Bao22DPM_cov
F. Bao, C. Li, J. Sun, J. Zhu, and B. Zhang.
Estimating the Optimal Covariance with Imperfect Mean in Diffusion
Probabilistic Models.
In ICML, 2022.
Bao22DPM
F. Bao, C. Li, J. Zhu, and B. Zhang.
Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance
in Diffusion Probabilistic Models.
In ICLR, 2022.
Bishop06
C. M. Bishop.
Pattern Recognition and Machine Learning.
Springer, 2006.
Chen20WaveGrad
N. Chen, Y. Zhang, H. Zen, R. J. Weiss, M. Norouzi, and W. Chan.
WaveGrad: Estimating Gradients for Waveform Generation.
arXiv:2009.00713, September 2020.
Dhariwal21DPM
P. Dhariwal and A. Nichol.
Diffusion models beat gans on image synthesis.
arXiv:2105.05233 [cs.LG], 2021.
Dinh14Nice
L. Dinh, D. Krueger, and Y. Bengio.
Nice: Non-linear independent components estimation.
arXiv preprint arXiv:1410.8516, 2014.
Dinh16DensityEsti
L. Dinh, J. Sohl-Dickstein, and S. Bengio.
Density estimation using real nvp.
arXiv preprint arXiv:1605.08803, 2016.
Goodfellow14GAN
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, and Y. Bengio.
Generative Adversarial Nets.
In Proceedings of the International Conference on Neural
Information Processing Systems, pages 2672–2680, 2014.
Gulrajani17WGANGP
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville.
Improved training of wasserstein gans.
In Advances in neural information processing systems, pages
5767–5777, 2017.
Ho20DDPM
J. Ho, A. Jain, and P. Abbeel.
Denoising diffusion probabilistic models.
In NeurIPS, 2020.
Ho22ClassiferFreeGuide
J. Ho and T. Salimans.
Classifier-free diffusion guidance.
arXiv preprint arXiv:2207.12598, 2022.
Huberman23DDPMInversion
I. Huberman-Spiegelglas, V. Kulikov, and T. Michaeli.
An Edit Friendly DDPM Noise Space: Inversion and Manipulations.
arXiv:2304.06140v2 [cs.CV], 2023.
Hyvarinen05ScoreMatching
A. Hyvarinen.
Estimation of non-normalized statistical models by score matching.
Journal of Machine Learning Research, 24:695–709, 2005.
Karras22EDM
T. Karras, M. Aittala, T. Alia, and S. Laine.
Elucidating the Design Space of Diffusion-Based Generative Models.
In 36th Conference on Nueral Information Processing Systems
(NeurIPS), 2022.
Kim22GuidedDiffusion
D. Kim, Y. Kim, S. J. Kwon, W. Kang, and I.-C. Moon.
Refining Generative Process with Discriminator Guidance in
Score-based Diffusion Models.
arXiv preprint arXiv:2211.17091 [cs.CV], 2022.
Kingma18Glow
D. P. Kingma and P. Dhariwal.
Glow: Generative flow with invertible 1x1 convolutions.
In Advances in neural information processing systems, 2018.
Kingma21DDPM
D. P. Kingma, T. Salimans, B. Poole, and J. Ho.
Variational diffusion models.
arXiv: preprint arXiv:2107.00630, 2021.
Lam22BDDM
M. W. Y. Lam, J. Wang, D. Su, and D. Yu.
BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality
Speech Synthesis.
In ICLR, 2022.
Liu22PNDM
L. Liu, Y. Ren, Z. Lin, and Z. Zhao.
Pseudo Numerical Methods for Diffusion Models on Manifolds.
In ICLR, 2022.
Lu22DPM_Solver
C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu.
DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Sampling
in Around 10 Steps.
In NeurIPS, 2022.
Mokady23NullTestInv
R. Mokady, A. Hertz, K. Aberman, Y. Pritch, and D. Cohen-Or.
Null-text Inversion for Editing Real Images using Guided Diffusion
Models.
In CVPR, 2023.
Nichol21DDPM
A. Nichol and P. Dhariwal.
Improved denoising diffusion probabilistic models.
arXiv preprint arXiv:2102.09672, 2021.
Nichol22GLIDE
A. Nichol, P. Dharwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew,
I. Sutskever, and M. Chen.
GLIDE: Towards Photorealistic image generation and editing with
text-guided diffusion models.
In ICML, 2022.
Rombach22LDM
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer.
High-resolution image synthesis with latent diffusion models.
In CVPR, 2022.
Rombach22StableDiffusion
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer.
On High-resolution image synthesis with latent diffusion models.
In CVPR, page 10684–10695, 2022.
Ronneberger15Unet
O. Ronneberger, P. Fischer, and T. Brox.
U-Net: Convolutional Networks for Biomedical Image Segmentation.
arXiv:1505.04597 [cs.CV], 2015.
Saharia22Imagen
C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. Denton, S.-K.-S.
Ghasemipour, B.-K. Ayan, S. S. Mahdavi, R.-G. Lopes, T. Salimans, J. Ho,
D. J. Fleet, and M. Norouzi.
Photorealistic text-to-image diffusion models with deep language
understanding.
arXiv preprint arXiv:2205.11487, 2022.
Sauer22StyleGAN
A. Sauer, K. Schwarz, and A. Geiger.
StyleGAN-XL: Scaling StyleGAN to large diverse datasets.
In SIGGRAPH, 2022.
Shi23DragDiffusion
Y. Shi, C. Xue, J. Pan, and W. Zhang.
DragDiffusion: Harnessing Diffusion Models for Interactive
Point-based Image Editing.
arXiv:2306.14435v2, 2023.
Dickstein15DPM
J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli.
Deep unsupervised learning using nonequilibrium thermodynamics.
ICML, 2015.
Song21DDIM
J. Song, C. Meng, and S. Ermon.
Denoising Diffusion Implicit Models.
In ICLR, 2021.
Song21DPM
Y. Song, C. Durkan, I. Murray, and S. Ermon.
Maximum likelihood training of score-based diffusion models.
In Advances in neural information processing systems (NeurIPS),
2021.
Song19
Y. Song and S. Ermon.
Generative modeling by estimating gradients of the data
distribution.
In Advances in neural information processing systems (NeurIPS),
page 11895–11907, 2019.
Song21SDE_gen
Y. Song, J. S.-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole.
Score-Based Generative Modeling Through Stochastic Differential
Equations.
In ICLR, 2021.
Wallace23EDICT
B. Wallace, A. Gokul, and N. Naik.
EDICT: Exact Diffusion Inversion via Coupled Transformations.
In CVPR, 2023.
Verlet67VerletInt
L. Verlet.
Computer Experiments on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules.
Physical Review, 159:98–103, 1967.
Skeel93leapfrog
R. D. Skeel.
Variable Step Size Destabilizes the Stamer/Leapfrog/Verlet Method.
BIT Numerical Mathematics, 33:172–175, 1993.
GuoqiangIIA23
G. Zhang, K. Niwa, and W. B. Kleijn.
On Accelerating Diffusion-Based Sampling Processes by Improved
Integration Approximation.
arXiv:2304.11328 [cs.LG], 2023.
Zhang22DEIS
Q. Zhang and Y. Chenu.
Fast Sampling of Diffusion Models with Exponential Integrator.
arXiv:2204.13902 [cs.LG], 2022.
§ EXTENSION OF THE UPDATE PROCEDURE OF (<REF>)
As an extension of (<ref>), we can also compute z_i-1 by the update below:
When i≤ N-1, let the diffusion state z_i-1 be computed in terms of (z_i, z_i+1) as
z_i-1 = γ(z_i+1-z_i) + [a_iz_i+ b_iϵ̂_θ(z_i, i)]-γ(z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)-z_i)
=z_i+γ(z_i+1-z_i)-γΔ(t_i→ t_i+1|z_i) + Δ(t_i→ t_i-1|z_i),
where γ∈ [0,1].
§ TESTED PRE-TRAINED MODELS FOR BDIA-DDIM AND BDIA-EDM
|
http://arxiv.org/abs/2307.04133v1 | 20230709091532 | Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach | [
"Yuanheng Zhang",
"Nan Jiang",
"Zhaoheng Xie",
"Junying Cao",
"Yueyang Teng"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
IEEE Transactions on Computational Imaging
Zhang, Jiang et al.: Ultrasonic Image's Body Marker Annotation Removal: A Noise2Noise Approach
Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach
Yuanheng Zhang,
Nan Jiang,
Zhaoheng Xie,
Junying Cao*,
Yueyang Teng*
Y. Zhang is with the College of Medicine and Biological Information Engineering, Northeastern University, China.
N. Jiang is with the Department of Ultrasound, General Hospital of Northern Theater Command, China.
Z. Xie is with the Institute of Medical Technology, Peking University, China.
J. Cao is with the Department of Ultrasound, General Hospital of Northern Theater Command, China.
Y. Teng is with the College of Medicine and Biological Information Engineering, Northeastern University, China.
J. Cao and Y. Teng contributed equally to this work.
This work is supported by the Natural Science Foundation of Liaoning Province (2022-MS-114).
This work is supported by the Key R&D Plan Projects of Liaoning Province in 2020 (Project No. 2020JH2/10300122).
August 12, 2023
Accurately annotated ultrasonic images are vital components of a high-quality medical report.
Hospitals often have strict guidelines on the types of annotations that should appear on imaging results.
However, manually inspecting these images can be a cumbersome task.
While a neural network could potentially automate the process, training such a model typically requires a dataset of paired input and target images, which in turn involves significant human labour.
This study introduces an automated approach for detecting annotations in images.
This is achieved by treating the annotations as noise, creating a self-supervised pretext task and using a model trained under the Noise2Noise scheme to restore the image to a clean state.
We tested a variety of model structures on the denoising task against different types of annotation, including body marker annotation, radial line annotation, etc.
Our results demonstrate that most models trained under the Noise2Noise scheme outperformed their counterparts trained with noisy-clean data pairs.
The custom U-Net yielded the best outcome on the body marker annotation dataset, with high scores on segmentation precision and reconstruction similarity.
We released our code at <https://github.com/GrandArth/UltrasonicImage-N2N-Approach>.
Image Restoration,
Noise2Noise,
Segmentation,
U-Net,
Ultrasonic.
§ INTRODUCTION
Annotations, typically comprised of various labels and marks, are commonly utilized to record critical information from an ultrasonic exam,
including the precise location of potential lesions or suspicious findings, on archived results.
Such annotations prove beneficial in aiding physicians in interpreting the exam results,
particularly when surrounding structures do not provide any indication of the anatomic location of the image.
Additionally, hospitals often mandate the inclusion of annotations, especially in cases involving inter-hospital patient transfers <cit.>.
If the report does not have comprehensive annotations, patients are usually required to undergo an equivalent radiography exam at the facility of transfer.
Commonly employed types of annotations include body marker annotation <cit.>, radial line annotation, and vascular flow annotation.
The presence of these annotations serves as evidence for the standardization of the diagnostic process. Annotations not only document the reasoning behind the diagnostic assessment but also facilitate comparison between pre- and post-treatment imaging findings to gain further insight into the patient's condition.
However, the utilization of annotations during ultrasound exams may vary depending on the proficiency of the sonographer performing the procedure.
Because ultrasound is a live examination, it is hard to implement additional reviews, so determining whether annotations are present relies solely on the expertise of the operator.
Furthermore, the need for repetitive manual verification increases the likelihood of forgetting the task, particularly during busy schedules at hospitals.
As such, annotations may occasionally be missing.
Given the strict regulations and clear benefits surrounding the need for annotations in medical imaging,
sonographers must manually validate that the stored data satisfies these requirements to ensure that diagnoses continuously meet the standard.
However, this is a cognitively demanding undertaking as it entails the fulfillment of diverse annotation obligations tailored to specific image outcomes.
In addition, dealing with archived files manually is a cumbersome task as most medical data management systems do not consider this necessary and have no relevant feature implemented.
The utilization of neural networks for the automatic assessment of whether the stored data meets particular criteria is a logical approach.
To address the current issue, there are several approaches that can be taken using different types of deep learning models.
The first approach would involve treating the task as a semantic segmentation problem, where the goal is to classify each pixel in the image into one of several predefined categories.
Alternatively, the task could be framed as an instance segmentation problem, where the aim is to identify and label individual objects within the scene.
In order to accomplish these goals, attention-based models such as the Pyramid Attention Network <cit.> or the Reverse Attention Network <cit.> could be employed. Alternatively, generative models like variants of Generative Adversarial Networks (GANs) <cit.> are also viable.
Once the segmentation has been completed, the resulting labels could then be used to determine whether the image meets regulatory requirements or not.
This task could also be viewed as an object recognition challenge, and for this purpose,
models such as Single Shot MultiBox Detector (SSD) <cit.> or You Only Look Once (YOLO) <cit.> could be utilized to obtain the four coordinates of the bonding box of a detected object, which will serve as demonstrative evidence of the necessary annotations.
In order to train a model using deep learning, it is important to have a suitable training dataset that includes paired input and output data, regardless of the specific task being performed.
However, building an appropriate training dataset is a challenging task due to the absence of high-quality data such as segmentation masks, object coordinates and clean targets.
Acquiring such data requires a considerable amount of manual effort.
In this study, we introduced a self-supervised Noise2Noise approach to recognise annotations without needing a pairwise dataset by manually superposing common annotations onto a small set of unannotated images randomly and repeatedly.
We trained multiple network structures such as FCN, U-Net++, MultiResUNet, etc., for Noise2Noise to select an ideal one.
We noted that the majority of Noise2Noise-based methods surpassed the corresponding
Noise2Clean (supervised learning) methods, with the former achieving, in some cases, a Sørensen-Dice coefficient (Dice) increase of up to 300%, an Intersection over Union (IoU) increase of up to 384%, and a Peak Signal to Noise Ratio Human Visual System Modified (PSNR_HVS_M) increase of up to 38%.
Among them, our custom U-Net achieved the best results, both quantitatively and qualitatively.
The remainder of the paper is organized as follows:
Section <ref> discusses related works.
Section <ref> outlines our methodology, data sources, dataset building pipeline and model structures used in this work.
In Section <ref>,
quantitative metric scores and qualitative image results are provided to support our claim regarding the optimal model structure, loss function and observations on Noise2Noise's effect.
Finally, Section <ref> concludes the paper.
§ RELATED WORKS
§.§ Self-supervised Learning
Self-supervised learning is a way of training deep-learning models without human guidance or explicit instructions.
Unlike supervised learning which uses labeled examples, self-supervised models learn from unlabeled data by identifying patterns and relationships on their own.
It uses the structure of images (e.g., edges, shapes) to teach the deep-learning model how to identify important parts of an image automatically, rather than having to be explicitly told what to look for.
This is particularly helpful considering the abundance of unlabeled data that exists today and the amount of work required to create a properly constructed dataset.
To create a robust, large model, self-supervised learning is an essential tool.
The general process of self-supervised learning involves first creating a pretext task for the model to solve. By completing this task, the model can gain an understanding of the structural information embedded within the data. This understanding can then be transferred to downstream tasks using different forms of transfer learning.
Examples of pretext tasks include rotating an image for the model to predict the degree of rotation, reconstructing images from an altered view, or reconstructing images from a corrupted version of the original data.
In this work, we developed a pretext task where we asked the model to generate another noisy image from the noisy input while keeping the same original clean image beneath it.
Specifically, we manually extracted several common annotations from stored data and randomly superimposed them on a small set of unannotated images to create a large dataset.
The idea behind this approach was to train the model to recognize the crucial features of the original so that it could distinguish between noise and clean images.
§.§ Noise2Noise Training Scheme
Noise2Noise was originally proposed in <cit.> as a novel statistical reasoning approach for the task of image denoising.
It is shown that, under certain key constraints,
it is possible to train a denoising model using only corrupted images.
The constraints are: the distribution of the added noise must have a mean of zero and no correlation with the desired clean image,
and the correlation between the noise in the input image and the target image should be close to zero <cit.>.
By utilizing deep learning, a denoising task can be transformed into a regression problem,
where a neural network is used to learn the mapping between corrupted samples x̂_i and clean samples y_i by minimizing the empirical risk <cit.>
In <cit.>, inspecting the form of a typical training process shows that training a neural network is a generalization of a point estimation problem.
We can see that it essentially solves the point estimation problem for each separate input.
This means that, by finding the optimal parameters, the trained neural network will output the expectation or median of all possible mappings for input x.
This property often leads to unwanted fuzziness in many deep-learning applications.
However, in a denoising scenario, when the noise satisfies the above constraints and exists in both the model input and training target, the task of empirical risk minimization, given infinite data,
argmin_θ∑_i L(f_θ(x̂_i), ŷ_i)
is equivalent to the original regression problem
argmin_θ∑_i L(f_θ(x̂_i), y_i)
where f_θ(x) is the model parameterized by θ, L is loss function, x̂_i,ŷ_i are samples drawn from a noisy distribution and y_i representing clean samples.
The idea of using self-supervised learning in conjunction with the Noise2Noise training scheme aligns well with our goal of obtaining a clean image.
With a clean image, we can easily produce a segmentation map for the various kinds of annotations, making it straightforward to recognise and categorize them accurately.
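To make the training objective concrete, the following minimal PyTorch sketch illustrates one way to run the Noise2Noise optimization described above; the model and loader objects are placeholders for whichever network structure and data pipeline are chosen, and the snippet is an illustration of the scheme rather than a copy of our released code (the optimizer settings anticipate those listed in the Training subsection).

import torch
import torch.nn as nn

def train_noise2noise(model, loader, epochs=1, lr=1e-5):
    # Minimize L(f_theta(noisy_input), noisy_target); both images share the same
    # underlying clean image but carry independently placed annotations.
    optimizer = torch.optim.RMSprop(model.parameters(), lr=lr,
                                    momentum=0.9, weight_decay=1e-8)
    criterion = nn.L1Loss()  # L_1 loss; MSE, Huber or Smooth L_1 could be used instead
    for _ in range(epochs):
        for noisy_input, noisy_target in loader:  # no clean image is required
            optimizer.zero_grad()
            loss = criterion(model(noisy_input), noisy_target)
            loss.backward()
            optimizer.step()
    return model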
§ METHODOLOGY
Initially, our data includes collections of information that may or may not have specific annotations.
We manually examined and filtered the data to create a clean dataset for each annotation.
Next, we studied the individual components of different annotations and identified a general pattern for each one.
Using this pattern, we generated large datasets containing noisy data and trained a denoising model using the Noise2Noise approach.
Finally, we trained various model structures using both the Noise2Noise and conventional Noise2Clean techniques to obtain denoising models for the purpose of performance comparison.
§.§ Dataset
To manually synthesize a self-supervised Noise2Noise dataset, which our training requires,
it is essential to know the scheme of the different annotations and to construct a dataset according to it.
Our original data consists mainly of ultrasonic images provided by the General Hospital of Northern Theater Command.
These images were captured using external video capture cards and are in 8-bit sRGB format.
According to the type of noise, we divided these data into six categories:
* Images with body marker annotation
* Images without body marker annotation
* Images with radial line annotation
* Images without radial line annotation
* Images with vascular flow annotation
* Images without vascular flow annotation
Images with certain annotations are considered noisy images in the context of the noise removal task, and corresponding images without these annotations are considered clean.
Some typical images with various annotation are provided in Fig. <ref>.
To safeguard the confidentiality of the patient, any personal data displayed in the margin of the image is blurred using pixelization. This same technique is also used to obscure any similar information present in other images.
In essence, a body marker annotation is a marker selected from a fixed set of icons that indicates different regions of the human body and its current orientation.
It is typically located at the edge of the ultrasonic image area and is labeled by the sonographer.
On some ultrasound machines, the body marker annotation has a fixed position.
However, from a statistical and training perspective, each real instance can be viewed as an image sample from a conditional distribution where the condition is the body marker annotation's location.
By randomly placing body marker annotation at any position within the image, we draw samples from a distribution without the aforementioned condition.
By learning to denoise samples from the unconditioned distribution, the model can effectively denoise samples from conditional distribution as well.
Other commonly used annotations that we introduced later comply with the same reasoning.
The radial line annotation consists of pairs of connected cross markers.
They are usually placed at the edge of the lesion area, with its placement determined by the size of the lesion.
One to three pairs of cross markers may be present in an image, corresponding to the three axes of 3D space, but typically there are only two pairs.
The vascular flow annotation is not an additional labeling feature meant to simplify identification.
Rather, it serves as a bounding box that identifies the specific area of the image being examined by the ultrasound flowmeter.
However, to keep things simple, we will continue to call it a form of annotation.
The presence of this annotation indicates that the relevant examination has been conducted.
To synthesize a Noise2Noise training dataset for above annotations,
we first manually extracted the necessary annotation icons from existing annotated data,
then we randomly overlay different annotations on the clean images we have.
The randomness of the noise overlay allows for the creation of a relatively large dataset.
By constructing training datasets in the above-mentioned process, each noisy image has three corresponding images for different tasks.
* A clean image which the noisy image originated from.
* A different noisy image created from the same clean image, using a different (in terms of position, form, etc.) noise sampled from the same distribution.
* A binary image recorded the position and form of the noise appended to the clean image.
An instance of the training dataset is presented in Fig. <ref>.
Using these images, the same dataset can be used for Noise2Noise training, conventional Noise2Clean training, and normal segmentation training.
Our approach to create this training dataset can minimize the amount of human labor required. Even with a limited amount of clean data, we are able to generate a large noisy dataset for training. The flow chart of the above process is also shown in Fig. <ref>.
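For illustration, the sketch below (NumPy; the alpha-blended pasting and the uniform placement rule are simplifying assumptions introduced for this example, not the exact procedure of the released code) generates one training instance: two noisy views of the same clean image plus the binary mask of the first overlay.

import numpy as np

def overlay_annotation(clean, icon, rng):
    # Paste icon (h x w x 4, last channel used as an alpha mask) at a random
    # position of clean (H x W x 3); return the noisy image and the binary mask.
    H, W = clean.shape[:2]
    h, w = icon.shape[:2]
    y, x = rng.integers(0, H - h), rng.integers(0, W - w)
    alpha = icon[..., 3:] / 255.0
    noisy = clean.astype(np.float32).copy()
    noisy[y:y + h, x:x + w] = (1 - alpha) * noisy[y:y + h, x:x + w] + alpha * icon[..., :3]
    mask = np.zeros((H, W), dtype=np.uint8)
    mask[y:y + h, x:x + w] = (alpha[..., 0] > 0).astype(np.uint8)
    return noisy, mask

def make_training_instance(clean, icon, rng=np.random.default_rng()):
    noisy_a, mask = overlay_annotation(clean, icon, rng)  # model input
    noisy_b, _ = overlay_annotation(clean, icon, rng)     # Noise2Noise target
    return noisy_a, noisy_b, mask                         # clean itself serves Noise2Clean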
§.§ Network Structures
In this research, we trained several structures to find the optimal solution and compare the two different training schemes: Noise2Noise and traditional Noise2Clean.
We adopted most of the structures from the traditional image segmentation model.
The models we adopted include FCN, DeepLabv3, LinkNet, MANet, U-Net++, MultiResUNet and a custom U-Net.
FCN is one of the models utilizing convolutional networks in semantic segmentation.
<cit.> uses fully-convolutional layers instead of fully-connected layers, so the model is compatible with inputs and outputs of arbitrary size.
DeepLabv3 is a subsequent model of the DeepLab model family, developed by <cit.>.
The main feature of this model is the use of dilated convolution, also known as “atrous” convolution.
This method is advocated to combat the issue of feature resolution reduction in deep convolutional networks (due to pooling operations and strides in convolution operations) and the difficulties in multi-scale segmentation.
LinkNet is proposed by <cit.> to address the problem of the long processing time of most segmentation models.
By using a skip connection to pass spatial information directly to the corresponding decoder, LinkNet manages to preserve low-level information without additional parameters and re-learning operations.
MANet, or Multi-scale Attention Net, is developed to improve accuracy in semantic segmentation of remote sensing images.
By using a novel attention mechanism, treating attention as a kernel function, <cit.> reduces the complexity of the dot-product attention mechanism to O(N).
U-Net is a well-known encoder-decoder segmentation model.
It is originally proposed by <cit.> for segmenting biological microscopy images.
U-Net++ is a variant of U-Net proposed by <cit.>.
In their work, they proposed a novel skip connection block in which a dense convolution block is used to process the input from the encoder feature map so that the semantic level of the input is closer to the corresponding decoder feature map.
MultiResUNet is another modern variant of U-Net proposed by <cit.> as a potential successor.
They used an Inception-like layer to replace the consecutive convolution layers after each pooling and transpose-convolution layers, to percept objects at different scales.
They adopted a chain of convolution layers with residual connections instead of plain skip connection to process the feature map inputs before concatenating them to decoder feature maps.
In our work, since the vanilla U-Net does not match the spatial resolution of our dataset, we used a custom U-Net similar to <cit.> in all of our tests.
Convolution layers with different strides and paddings are used in this structure to ensure that the input and output dimensions are identical.
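As a sketch of how stride and padding can be matched so that the spatial size is preserved end to end (the channel count, kernel sizes and input resolution below are illustrative placeholders, not the actual architecture), a stride-2 convolution can be paired with a transposed convolution that restores the original dimensions exactly:

import torch
import torch.nn as nn

class DownUpBlock(nn.Module):
    # The stride-2 convolution halves height and width; the matching transposed
    # convolution (stride 2, padding 1, output_padding 1) restores them exactly.
    def __init__(self, channels=16):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=3, stride=2,
                                     padding=1, output_padding=1)

    def forward(self, x):
        return self.up(torch.relu(self.down(x)))

x = torch.randn(1, 16, 600, 800)              # a feature map with even spatial size
assert DownUpBlock()(x).shape == x.shape      # output dimensions match the input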
§ EXPERIMENTAL RESULTS
In this section,
we provide quantitative and qualitative results to support our claim in Section <ref>.
§.§ Evaluation
We evaluate the models' performance based on segmentation precision and reconstruction similarity.
§.§.§ Segmentation Precision
In terms of noise reduction precision, for a typical segmentation model, we can use the output to compare it with a binary image known as the truth mask to compute a score based on the number of pixels that get classified into the right categories.
For a restoration model like ours, we subtract the model output from the model input to compute the binary segmentation result.
We compare the results with the segmentation truth mask to compute the Dice, IoU, and Pixel Accuracy (PA).
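Concretely, the evaluation of segmentation precision can be sketched as follows; the binarization threshold is an assumption introduced for this illustration and may differ from the value used in our code.

import numpy as np

def annotation_mask(noisy_input, restored, threshold=0.05):
    # Binary segmentation obtained by differencing the model input and output.
    diff = np.abs(noisy_input - restored).mean(axis=-1)   # average over channels
    return (diff > threshold).astype(np.uint8)

def dice_iou_pa(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    pa = (pred == truth).mean()                           # pixel accuracy
    return dice, iou, pa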
§.§.§ Reconstruction Similarity
For assessing reconstruction similarity, we use two metrics: Structural Similarity Index Measure (SSIM) and PSNR_HVS_M.
SSIM is a commonly used measure of image similarity.
The PSNR metric known as PSNR_HVS_M <cit.> is considered to be a more accurate representation of image quality,
which takes into consideration the Contrast Sensitivity Function (CSF) and the between-coefficient contrast masking of Discrete Cosine Transform (DCT) basis functions.
§.§ Training
The neural networks discussed in the previous section were trained using PyTorch 1.10.1.
RMSprop <cit.>, a variant of stochastic gradient descent that divides gradients by an average of their recent magnitude, was used as the optimizer with a learning rate of 0.00001, momentum of 0.9, weight decay of 1e-8, and default values <cit.> for other parameters.
Three datasets were created in aforementioned process to train various denoising models.
For body marker annotation, a dataset of 83,900 pairs of noisy images generated from 4,975 clean images was used.
For radial line annotation, 80,000 pairs of noisy images were generated from 3,936 clean images.
For vascular flow annotation, 80,000 pairs of noisy images were generated from 250 clean images.
§.§ Optimal Model Structure
To find the most effective combination of network structure and training scheme for the given task, we trained different network structures under the Noise2Noise and Noise2Clean schemes using the body mark annotation dataset.
Though utilizing only one type of annotation, this experiment's results should indicate the structure most likely to be suitable for the other annotations as well.
The L_1 loss was used to train these models.
The results were compared using segmentation precision and reconstruction similarity, and are presented in Tables <ref> and <ref>.
We observed that Noise2Noise training scheme improves segmentation precision and reconstruction similarity in most cases.
The results presented in Tables <ref> and <ref> indicate that the models trained using the Noise2Noise scheme generally achieved higher Dice scores, IoU scores, PA scores, and PSNR_HVS_M scores.
Specifically, for the custom U-Net, we observed an increase in the Dice and IoU of 0.151 and 0.155, respectively, and an increase of 11.625 for the PSNR_HVS_M when using linearly normalized input.
According to our hypothesis, the Noise2Noise training process improves the model's ability to understand the features of annotations through solving an “impossible” task of relocating the annotation.
This task is essentially a self-supervised pretext training task that helps the model gain a better understanding of the annotations and the spatial structure of the ultrasonic images, thus gaining higher performance.
We also noted that the custom U-Net structure performed the best out of all the structures tested.
It achieved the highest Dice, IoU, SSIM, and PSNR_HVS_M scores under both training schemes.
The custom U-Net trained using the Noise2Noise scheme achieved the highest segmentation precision and reconstruction similarity of all models, with a Dice of 0.712, an IoU of 0.596, an SSIM of 0.967, and a PSNR_HVS_M of 41.628.
Given the above results, we chose the custom U-Net as the optimal model for later experiments.
§.§ Optimal Loss Function
To find the optimal loss function, we evaluate the convergence speed of different loss functions.
The loss functions we tested include L_1 loss, Huber loss, Smooth L_1 loss, MSE loss and several combinations of aforementioned loss functions.
The result is shown in Fig. <ref>.
In order to better visualize the differences in convergence speed between the losses, we present them in separated subplots.
As shown in Fig. <ref>, the L_1 loss and its variants (Huber loss and Smooth L_1 loss) are displayed on one subplot,
while the MSE loss-related losses are presented on another subplot in Fig. <ref>.
We observed that implementing MSE loss results in faster convergence, allowing the model to reach convergence in under 100 steps, as shown in Fig. <ref>.
Meanwhile, as depicted in Fig. <ref>, the loss functions based on L_1 loss achieve a much slower convergence after approximately 500 to 600 steps.
Although Huber loss and Smooth L_1 loss seem to have a quicker rate of convergence, closer examination in Fig. <ref> reveals that they both take around 500 steps to converge, which is similar to the standard L_1 loss.
We also noted from Fig. <ref> that using a combination of MSE loss and different L_1 based losses doesn't significantly affect the rate of convergence, likely because the difference in scale between the MSE loss and L1 loss and its variants causes MSE loss to remain the primary determinant of convergence speed.
Our study also conducted an evaluation of the custom U-Net trained using various loss functions.
Our findings in Tables <ref> and <ref> revealed that there was minimal difference between the performances of these models, with the largest discrepancies in Dice, IoU, PA, SSIM and PSNR_HVS_M amounting to 0.023, 0.019, 0.003, 0.011 and 4.031 respectively.
These outcomes suggest that the selection of alternative loss functions has little influence on the overall performance of the model.
As such, we decided not to employ the MSE loss function in subsequent experiments and instead continued to utilize the L_1 loss.
§.§ Noise2Noise with Other Annotations
The improvement observed in the custom U-Net trained using the Noise2Noise scheme is also apparent in the other annotation datasets, as shown in <Ref>.
In the provided tables, the custom U-Net has been trained on the other two annotation datasets with the two different training schemes. The outcomes show a substantial enhancement in comparison to the Noise2Clean models: a gain of approximately 0.5 in both the Dice and IoU metrics, an increase of around 0.01 in SSIM, and a rise of about 5 dB in PSNR_HVS_M for both types of annotations.
§.§ Qualitative Results
In this section, we present denoised images from models trained under different schemes to further support our claim.
As can be seen in Figs. <ref>, <ref> and <ref>,
the output from the Noise2Clean model contains obvious artifacts, whereas models trained using the Noise2Noise scheme do not suffer from this problem.
It is also worth noting that in the output images from Noise2Clean models, information in the edge area is compromised.
In contrast, the Noise2Noise models preserve this information well.
The evidence implies that models trained with the Noise2Noise scheme possess superior capabilities in identifying and distinguishing noise.
§ DISCUSSION
This study proposed a self-supervised data generation and training approach to build large and diverse datasets starting from a small dataset with only a few clean images.
We found that the custom U-Net trained with the Noise2Noise scheme outperformed other models in terms of segmentation precision and reconstruction similarity in the annotation removal task.
The benefits of Noise2Noise training were observed across most model structures tested, and the models trained using this scheme produced fewer artifacts.
Our study has some limitations:
Firstly, we used separate parameter sets for the segmentation task of different annotations.
However, with the recent advancement of deep learning theories, it is now possible to use a single parameter set for the segmentation of all annotations presented in the image.
Additionally, there is potential for further research in the area of language-guided segmentation models, which would provide a more precise and flexible interface for medical professionals.
We find building a model that incorporates these innovations intriguing.
We also noted that our model was trained in a self-supervised manner, meaning it has potentially gained a strong understanding of the structural features of ultrasonic images.
This understanding is beneficial for downstream models such as object detection model.
Different ways of fine-tuning, like Low-Rank Adaptation (LoRA), adapter layers, etc. should be explored to find the optimal method to effectively transfer this understanding.
We plan to address these issues in future studies.
|
http://arxiv.org/abs/2307.06864v1 | 20230710195854 | Higher-order composition of short- and long-period effects for improving analytical ephemeris computation | [
"Martin Lara",
"Elena Fantino",
"Hadi Susanto",
"Roberto Flores"
] | physics.class-ph | [
"physics.class-ph",
"math-ph",
"math.MP"
] |
[t1]A preliminary version of this research was presented as paper IAC-21-C1.7.2 at the 72nd International Astronautical Congress (Dubai, United Arab Emirates, 25-29 October 2021)
Martin Lara [1,2]
[email protected]
Elena Fantino [1] (corresponding author)
[email protected]
Hadi Susanto [3]
[email protected]
Roberto Flores [1,4]
[email protected]
[1] Aerospace Engineering Department, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi, United Arab Emirates
[2] Scientific Computing and Technological Innovation Center, University of La Rioja, Edificio CCT, C/ Madre de Dios, 53, ES-26006 Logroño, Spain
[3] Mathematics Department, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi, United Arab Emirates
[4] Centre Internacional de Mètodes Numèrics en Enginyeria (CIMNE), Gran Capità s/n, 08034, Barcelona, Spain
The construction of an analytic orbit theory that takes into account the main effects of the Geopotential is notably simplified when splitting the removal of periodic effects in several stages. Conversely, this splitting of the analytical solution into several transformations reduces the evaluation efficiency for dense ephemeris output. However, the advantage is twofold when the different parts of the mean–to–osculating transformation are composed into a single transformation. To show that, Brouwer's solution is extended to the second order of the zonal harmonic of the second degree by the sequential elimination of short- and long-period terms. Then, the generating functions of the different transformations are composed into a single one, from which a single mean–to–osculating transformation is derived. The new, unique transformation notably speeds up the evaluation process, commonly improving evaluation efficiency by at least one third with respect to the customary decomposition of the analytical solution into three different parts.
Orbit propagation; Artificial satellite theory; Brouwer's solution; Hamiltonian simplification; Lie transforms;
§ INTRODUCTION
Simple analytic orbit prediction programs still find different astrodynamics applications <cit.>. They commonly rely on perturbation solutions that are made of secular and periodic terms. The former provide the average evolution of the orbit while the latter are needed to convert the secular elements —also called mean elements— into ephemeris. In the implementation of an analytical ephemeris generator one customarily takes the point of view of using programming techniques that minimize both memory requirements and execution time <cit.>. However, these two aims are mutually exclusive regarding accuracy (understood as the time span for which the errors remain below a given tolerance).
Minimizing memory requirements is obviously achieved when reducing the truncation order, and, therefore, the accuracy of the perturbation solution. On the other hand, reducing memory needs for a given truncation order can be achieved by splitting the periodic terms of the analytical solution into a sequence of simpler corrections. Separation of the periodic corrections into short- and long-period terms is a natural choice with whole dynamical sense <cit.>. Moreover, it is well known that the preliminary elimination of the parallax transformation <cit.> makes notably easier the implementation of the short-period elimination <cit.>. On the contrary, the long-period elimination is traditionally achieved with a single set of corrections, which is obtained either with Brouwer's traditional method <cit.>, in the reverse normalization style <cit.>, or like Alfriend and Coffey's halfway option <cit.>. Splitting the elimination of long-period terms into two simpler transformations is also possible, and helps in pruning away non-essential terms of the rotating-perigee regime, in this way effectively isolating the resonant, long-period terms of the dynamics about the critical inclination. However, the interest of this additional simplification is mostly theoretical since it yields negligible savings in memory storage and usually increases the execution time <cit.>.
While decomposing the transformation from mean to osculating elements into different parts clearly simplifies the periodic terms of the solution by notably reducing their size, the splitting procedure has the undesired side effect of slowing the evaluation of dense output ephemeris. This paradox stems from the fact that the eccentricity and inclination remain constant in the secular variables. Because of that, when the different transformations used in the construction of the analytical perturbation theory are combined into a single transformation, the coefficients of the trigonometric polynomials comprising the periodic corrections only need to be evaluated once, which is done jointly with the initialization of the secular terms of the analytical solution <cit.>. In this way, the repeated evaluation of the perturbation solution in the computation of ephemeris is notably accelerated. On the contrary, if a sequence of different transformations is used, then these coefficients only remain constant in the first transformation of the sequence, although some of them may remain constant also in the second one <cit.> —the most favorable case being provided by the reverse normalization scheme <cit.> in which the only terms that need reevaluation are eccentricity polynomials. This need of repeatedly evaluating coefficients made of inclination and eccentricity polynomials may counterbalance the advantages provided by the simpler form of the sequential periodic corrections, in this way clearly penalizing the efficiency of the analytical theory for dense output.
In order to demonstrate these facts, two alternative higher-order extensions of Brouwer's classical solution <cit.> have been implemented. More precisely, since the Earth's zonal harmonic coefficient of the fifth degree is at least one order of magnitude smaller than the zonal harmonic coefficients of lower degrees, we neglected the contribution of this zonal harmonic from Brouwer's Geopotential model. In addition, our approach takes the proper calibration of the mean semi-major axis into account <cit.>. In this way, the dominant long-term secular drift of the position errors in the along-track direction —which is typical of perturbation solutions relying on the physical time as the independent variable <cit.>— is reduced by at least one order of magnitude with respect to traditional implementations of the Geopotential disturbing effect.
§ ANALYTIC PERTURBATION SOLUTIONS. GENERAL FEATURES
Analytical solutions to orbital perturbation problems are commonly approached in a set of three oscillating and three rotating variables. The latter are naturally angles whereas the former may have different nature. A common choice is the traditional set of Keplerian elements given by the semi-major axis a, eccentricity e, inclination I, right ascension of the ascending node Ω, argument of the periapsis ω, and mean anomaly M. The perturbation solution is obtained through an analytical transformation to mean elements (a',e',I',Ω',ω',M') such that the first three remain constant, namely,
da'/dt=de'/dt=dI'/dt=0,
whereas the last three evolve at constant rates
dΩ'/dt=n_Ω, dω'/dt=n_ω, dM'/dt=n_M.
Because the exact transformation from mean to osculating variables does not exist in general, it is approximated with
𝒯:(a,e,I,Ω,ω,M;ϵ)↦(a',e',I',Ω',ω',M')
given by a truncated Taylor series in ϵ. The small parameter ϵ may be a physical quantity, the most desirable case, or a formal parameter —a token that indicates the strength of the disturbing forces relative to the integrable non-perturbed model. With formal parameters, the analytical solution is constrained to a particular dynamical regime, while physical quantities allow for greater generality.
The perturbation approach is not constrained to the use of Keplerian elements. It can be applied to different sets of singular or non-singular variables. In particular, canonical variables assign uniform dimension to the oscillating-type quantities. In that case, (a,e,I) are customarily replaced by (L,G,H). L=√(μa) is the Delaunay action, with μ denoting the gravitational parameter. G=Lη is the specific angular momentum, where η=(1-e^2)^1/2. Finally, H=GcosI denotes the third component of the angular momentum vector. The set (L,G,H,ℓ,g,h), with ℓ=M, g=ω, and h=Ω, is known as the Delaunay canonical variables. They are the action-angle variables in which a complete reduction of the Kepler Hamiltonian is achieved <cit.>.
It is worth remarking that, in the analytical solution by canonical methods, the transformation (<ref>) can be derived from a scalar generating function W=W(L,G,H,ℓ,g,h), simplifying the computational process <cit.>.
Hereafter, we shall limit the discussion to perturbed Keplerian motion and Hamiltonian perturbations in Delaunay variables. Then, the secular frequencies can be written in the general form <cit.>
n_M =ñ∑_i≥0ϵ^i/i!Φ_i(a'_0,e'_0,I'_0)
n_ω = ñ∑_i≥1ϵ^i/i!Γ_i(a'_0,e'_0,I'_0)
n_Ω = ñ∑_i≥1ϵ^i/i!Ψ_i(a'_0,e'_0,I'_0)
where Φ_i, Γ_i, and Ψ_i, are functions of the initial conditions in prime (mean) variables. Namely, cosI'_0=H'_0/G'_0, e'_0=(1-G'^2_0/L'^2_0)^1/2, a'_0=L'_0^2/μ, and ñ=(μ/a'_0^3)^1/2=μ^2/L'_0^3. The periodic corrections take the form of truncated multivariate Fourier series in the angle variables, with coefficients that are truncated series in the action variables. Commonly, these are expressed as eccentricity polynomials, with the coefficients given by inclination polynomials <cit.>.
The computation of the constants of the perturbation theory ñ, a'_0, e'_0, I'_0, Ω'_0, ω'_0, and M'_0, in mean elements, can be derived from a fit to observations <cit.>, which are obtained either from real data or synthetically generated by a preliminary numerical integration of one or two orbits <cit.>. Alternatively, the initialization constants can be obtained from an initial state vector in osculating elements by inverting the mean–to–osculating transformation of the perturbation theory <cit.>, an operation that is sometimes replaced by root–finding procedures <cit.>. Moreover, modern perturbation methods allow for the computation of both the direct and inverse transformations in explicit form <cit.>.
An intrinsic characteristic of perturbation solutions is that, due to the truncation of the series comprised in the solution, they always introduce an error in the secular frequencies. This happens even when the truncation is made to machine precision <cit.>. In consequence, the errors always undergo a secular drift which, eventually, prevails over the inaccuracies due to the truncation of the periodic corrections. Therefore, it is common practice to compute the periodic terms to one order less than their secular counterparts <cit.>. However, the proper propagation of the secular frequencies up to a given order requires the initialization of the constants of the perturbation theory to the same truncation order as the secular terms.
In the case of perturbed Keplerian motion, this accuracy can be relaxed in the initialization of n_ω and n_Ω because they are proportional to ϵ, as shown by the lower limit 1 of the summation index in Eqs. (<ref>) and (<ref>). Nevertheless, the higher accuracy is mandatory in the initialization of the secular mean motion n_M, for which the summation index in Eq. (<ref>) starts from zero. Neglecting this consideration causes errors in the in-track direction that are inconsistent with the truncation order of the secular terms of the analytical solution <cit.>. The remedy is to extend the computation of the periodic corrections of the semi-major axis to the same order as the secular terms <cit.>. When the perturbation solution is computed stepwise, in Brouwer's seminal style of removing the short-period terms before the long-period ones, these additional computations are limited to the short-period corrections of the semi-major axis.
For Hamiltonian perturbations, rather than computing additional terms of the periodic corrections to the semi-major axis, the semi-major axis can be calibrated to higher-order effects using the energy equation in the clever way proposed by Breakwell and Vagners <cit.>. The procedure relies on the fact that the energy value for given initial conditions does not change by a transformation of variables. When using orbital elements, the energy equation ℋ≡T+V=ℰ, where T and V denote the kinetic and potential energy, respectively, can be written in osculating variables like
ℋ≡-μ/2a+ϵ𝒫(a,e,I,Ω,ω,M)=ℰ.
On the other hand, after the complete Hamiltonian reduction, the energy equation takes the form
ℰ=-μ/2a'+∑_m=1^kϵ^m/m!ℋ_m(a',e',I')+𝒪(ϵ^k+1),
where ℋ_m are the computed Hamiltonian terms. Then, for a given initial state (a_0,e_0,I_0,Ω_0,ω_0,M_0), we compute the energy value ℋ(a_0,e_0,I_0,Ω_0,ω_0,M_0)=ℰ_0 exactly from Eq. (<ref>) and replace ℰ=ℰ_0 in Eq. (<ref>), from which
ℰ_0+μ/2a'_0-∑_m=1^kϵ^m/m!ℋ_m(a'_0,e'_0,I'_0)=Δ,
where a'_0, e'_0, and I'_0 are obtained from the transformation from mean (prime) to osculating variables. If this transformation is known to 𝒪(ϵ^k), the energy equation (<ref>) is certainly accurate to Δ=𝒪(ϵ^k+1). However, if the transformation is only known to 𝒪(ϵ^k-1), as it is commonly the case, then the error in the energy equation will be only Δ=𝒪(ϵ^k) due to the propagation of errors in the Keplerian term. The issue is easily fixed by replacing the value a'_0 obtained from the 𝒪(ϵ^k-1) mean to osculating transformation by the calibrated value
â_0=1/2μ[-ℰ_0+∑_m=1^kϵ^m/m!ℋ_m(a'_0,e'_0,I'_0)]^-1,
which is obtained by solving Eq. (<ref>) with Δ=0 for the Keplerian term. Then, a'_0 is replaced by â_0 in the computation of ñ. That is, we replace ñ=(μ/â_0^3)^1/2 in Eqs. (<ref>)–(<ref>).
This calibration procedure avoids the need of carrying out the heavy computations required for extending the truncation order of the mean to osculating transformation, and generally guarantees that the predictions of the perturbation theory are close to the expected accuracy for a given truncation order <cit.>.
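In code, the calibration reduces to evaluating the energy once in osculating elements and solving for the Keplerian term. The Python sketch below is schematic: disturbing_potential and secular_hamiltonian_terms are placeholder callables standing for the evaluation of the disturbing function and of the secular Hamiltonian terms ℋ_m, respectively, and are not part of any specific library.

import math

def calibrated_mean_motion(mu, J2, osculating, mean,
                           disturbing_potential, secular_hamiltonian_terms):
    # Breakwell-Vagners calibration of the mean semi-major axis.
    a_osc = osculating[0]
    E0 = -mu / (2.0 * a_osc) + disturbing_potential(osculating)  # exact energy value
    correction = sum((J2 ** m / math.factorial(m)) * Hm
                     for m, Hm in enumerate(secular_hamiltonian_terms(mean), start=1))
    a_hat = 0.5 * mu / (correction - E0)   # solve the energy equation for the Keplerian term
    n_tilde = math.sqrt(mu / a_hat ** 3)   # calibrated value used in the secular rates
    return a_hat, n_tilde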
Regarding the periodic corrections, when they are given by a single set of corrections from mean to osculating elements, we only need to update the mean angles of the analytical perturbation solution for ephemeris evaluation. Indeed, because the action variables remain constant in mean elements, they only need to be computed once, during the initialization of the solution <cit.>. However, perturbation solutions are normally constructed stepwise, by splitting the transformation (<ref>) into two or more canonical steps (see summary in Table <ref>). In that case, the different transformations can be combined into a single one <cit.> to take advantage of the previously mentioned fact. This composition into a single transformation is immediate when the periodic corrections are constrained to the first order of the perturbation, as done by Brouwer <cit.>. Conversely, when higher-order terms of the periodic corrections are included, they are usually arranged in separate blocks. This simplifies the implementation of an analytic ephemeris generator and reduces memory requirements <cit.>. The downside is that both action and angle mean variables must be updated at each step. This degrades the efficiency of dense ephemeris evaluation.
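The practical consequence for dense output can be outlined schematically as follows (a Python outline rather than our Fortran implementation; periodic_coefficients and evaluate_corrections are placeholder names): with a single composite transformation, all inclination- and eccentricity-dependent coefficients are frozen at initialization because the mean a', e', I' are constant, and the propagation loop only updates the three angles.

def ephemeris(times, mean0, rates, periodic_coefficients, evaluate_corrections):
    # Dense ephemeris output with a single mean-to-osculating transformation.
    a, e, inc, raan0, argp0, M0 = mean0
    n_raan, n_argp, n_M = rates
    coeffs = periodic_coefficients(a, e, inc)   # evaluated once, at initialization
    states = []
    for t in times:
        angles = (raan0 + n_raan * t, argp0 + n_argp * t, M0 + n_M * t)
        # only the angle-dependent trigonometric terms are re-evaluated here
        states.append(evaluate_corrections(coeffs, angles))
    return states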
§ GEOPOTENTIAL MODEL AND PERTURBATION SOLUTION.
For reference, we deal with the popular Geopotential solution derived by Brouwer <cit.>. More precisely, in view of the small value of the Earth's zonal harmonic coefficient of degree 5, we limit the dynamical model to the contribution of the 2nd, 3rd, and 4th zonal harmonics, which is the same model used in <cit.>. The disturbing function of the corresponding Hamiltonian is
𝒫=μ/r∑_i≥2R_^i/r^iJ_iP_i(sinφ),
in which r is distance from the Earth's center of mass, R_ is the Earth's equatorial radius, J_i stands for the zonal harmonic coefficient of degree i, P_i denotes the Legendre polynomial of degree i, and φ is latitude.
We adhere to Kaula's style <cit.> yet in the slightly different arrangement of <cit.>. Thus, we write the disturbing potential (<ref>) in the form
𝒫=μ/a(a^2/r^2η)∑_i≥2J_iV_i,
in which
V_i = R_^i/p^iη∑_j=0^iℱ_i,j(s)∑_k=0^i-1i-1ke^kcos^kf
×cos[(i-2 j)(f+ω)-π(i2)],
where p=aη^2 is the orbit parameter, f is the true anomaly, s stands for the sine of the inclination, and ℱ_i,j are particularizations of Kaula inclination functions for the zonal problem. Namely, for i≥2l,j≥l,
ℱ_i,j=∑_l=0^min(j,i_0)(-1)^j-l-i_0/2^2i-2l2i-2liili-2lj-ls^i-2l,
where i_0=⌊1/2i⌋ denotes the largest integer less than or equal to 1/2i.
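For reference, the inclination function above translates directly into code; the following Python transcription is offered as a sketch, with a spot check against the well-known secular coefficient of the degree-2 Legendre expansion, (3/4)s^2 - 1/2.

from math import comb

def F_inclination(i, j, s):
    # Zonal inclination function F_{i,j}(s) transcribed from the expression above.
    i0 = i // 2
    total = 0.0
    for l in range(0, min(j, i0) + 1):
        total += ((-1) ** (j - l - i0) / 2 ** (2 * i - 2 * l)
                  * comb(2 * i - 2 * l, i) * comb(i, l) * comb(i - 2 * l, j - l)
                  * s ** (i - 2 * l))
    return total

assert abs(F_inclination(2, 1, 1.0) - 0.25) < 1e-12   # (3/4)s^2 - 1/2 at s = 1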
§.§ Short-period elimination
The removal of short-period terms is standard <cit.>. The solution of the integrals involved in the perturbation approach becomes easier applying the elimination of the parallax simplification <cit.> before addressing the short-period terms. To the first order of J_2, the elimination of the parallax transformation
(x,X;ϵ)𝒯_1⟶(x',X'),
where x denotes the coordinates of the canonical set and X their conjugate momenta, is derived from the generating function
W^P=W_1^P+J_2W_2^P.
The terms on the right-hand side of Eq. <ref> are given by
W_1^P= G/8R_^2/p^2{ 2e(3s^2-2) sinf
-s^2[3 e sin (f+2ω)
+3 sin (2 f+2ω)+e sin (3 f+2ω)] },
W_2^P= -GR_^3/p^3J̃_3∑_i=0^1∑_j=2i-1^2i+3∑_k=0^1e^2ks^2i+1e^|j-2i-1|
× Q_i,j,kcos[jf+(2i+1)ω]
+GR_^4/p^4
∑ _i=0^2 ∑ _j=2 i-3^2 i+3∑ _k=0^1
e^2 k s^2 i e^| j-2 i| P_i,j,ksin(jf+2iω),
where J̃_n≡J_n/J_2^2, and the inclination polynomials Q_i,j,k and P_i,j,k can be found in Tables <ref> and <ref> of the Appendix.
The transformation (<ref>) is obtained by simple evaluation of Poisson brackets in a convenient set of variables <cit.>. In particular, we compute
y_1={x,W_1^P},   y_2={x,W_2^P}+{y_1,W_1^P},
Y_1={X,W_1^P},   Y_2={X,W_2^P}+{Y_1,W_1^P},
where the braces denote the Poisson bracket operator. Replacing (x,X) with (x',X') in y_1, Y_1, and y_2, Y_2, the mean to osculating transformation takes the form
x = ∑_j≥0(ε^j/j!)y_j(x',X'),
X = ∑_j≥0(ε^j/j!)Y_j(x',X').
The new Hamiltonian, with the parallax eliminated, depends on the Delaunay prime variables. Next, the complete removal of short-period terms is achieved by the Delaunay normalization <cit.>. The transformation to double-prime variables
(x',X';ϵ)𝒯_2⟶(x”,X”),
is derived from the new generating function
W^D=W_1^D+J_2W_2^D,
with
W_1^D= GR_^2/p^21/4(3s^2-2) ϕ,
W_2^D= -GR_^4/p^4(3s^2-2)^2/32 (η +1)(4esinf+e^2sin2f)
-G
×R_^3/p^3J̃_33/4es(4-5 s^2)ϕsinω
-GR_^4/p^43/64ϕ{35
×
s^4(1-5J̃_4)-40s^2(2-5J̃_4)+40(1-J̃_4)
+η^2[5s^4(21J̃_4+1) +8s^2(1-15 J̃_4) +8(3J̃_4
-1)] +2[5s^2(7J̃_4+3)-2(15J̃_4+7)]
× e^2s^2cos2ω},
where ϕ=f-ℓ denotes the equation of the center. The transformation (<ref>) is obtained analogously to the previous case, simply replacing W^P with W^D and adding one prime to the variables in Eqs. (<ref>) and Eq. (<ref>).
Once the short-period terms have been removed, up to the third order of J_2 we obtain the Hamiltonian,
ℋ=ℋ_0+J_2ℋ_1+J_2^2ℋ_2+J_2^3ℋ_3,
where
ℋ_0= -μ/2a,
ℋ_1= -μ/2aR_^2/p^2η1/2(2-3s^2),
ℋ_2= μ/2aR_^3/p^3J̃_33/2(5s^2-4)ηessinω
+μ/2aR_^4/p^4∑_i=0^1∑_j=0^2-2it_2,i,jη^j+1e^2icos2iω,
ℋ_3= -μ/2aR_^6/p^6[J̃_3/R_/p∑_i=1^2∑_j=0^6-3ie^2i-1sin(2i-1)ω/(1+η)^i2
×η^j+1u_3,i,j
+∑_i=0^2∑_j=0^4-it_3,i,je^2icos2iω/(1+η)^i2η^j+1].
The non-vanishing inclination polynomials t_l,k,j, u_3,k,j, are listed in Tables <ref> and <ref> of the Appendix. We recall that the variables in these expressions must be written in terms of the double-primed Delaunay variables. Note that ℋ=ℋ(a,e,I,-,ω,-) is free of the mean anomaly up to the truncation order. Therefore, the mean semi-major axis a=L'^2/μ becomes a formal integral of the long-period Hamiltonian (<ref>).
§.§ Long-period elimination
Removal of long-period terms from the Hamiltonian (<ref>) is achieved by a transformation to triple-prime variables
(x”,X”;ϵ)𝒯_3⟶(x”',X”').
The generating function of the long-period elimination is
W^L=W_1^L+J_2W_2^L,
where
W_1^L= GR_^2/p^25s^2(7J̃_4+3)-2(15J̃_4+7)/32(5s^2-4)e^2s^2sin2ω
+GR_/p1/2J̃_3escosω,
W_2^L=
GR_^4/p^41-η/(5s^2-4)^3{∑_j=0^3u_0,jη^jsin2ω
+(1+η)
× u_0,4e^2sin4ω}
-GR_^3/p^3J̃_3/(5s^2-4)^21/η +1
×[∑_j=0^3u_1,jη^jecosω +(1+η)u_1,4e^3cos3ω]
-GR_^2/p^2J̃_3^215s^2-13/8(5s^2-4)s^2e^2sin2ω.
The inclination polynomials u_i,j are given in Table <ref> of the Appendix. The transformation (<ref>) achieving the complete Hamiltonian reduction is obtained from expressions analogous to Eqs. (<ref>)–(<ref>), using W^L as the generating function.
The Hamiltonian with the periodic terms removed takes the form
𝒦=∑_i=0^3(J_2^i/i!)𝒦_i,
in triple prime variables. Up to 𝒪(J_2^3), the Hamiltonian terms 𝒦_0=ℋ_0 and 𝒦_1=ℋ_1 remain the same in the new variables, whereas
𝒦_2= -μ/2aR_^4/p^43/32η{η^2[5(21 J̃_4+1)s^4 -8(15J̃_4-1)
× s^2+8(3J̃_4-1)] +4η(3s^2-2)^2 +35s^4(1
-5J̃_4)-40(2-5J̃_4)s^2+40(1-J̃_4) },
𝒦_3= μ/2aR_^6/p^6η{9J̃_3^2/8R_^2/p^2[η^2(20s^4-22s^2+4)-25s^4
+26 s^2-4]
-∑_j=0^1∑_k=0^2-je^2kη^j(3s^2-2)^j/(5s^2-4)^2-2jl_j,k},
with the inclination polynomials l_j,k given in Table <ref>. Finally, the Hamilton equations of Eq. (<ref>) yield the secular variations
n_Ω=∂𝒦/∂H”',
n_ω=∂𝒦/∂G”',
n_M=∂𝒦/∂L”',
which are commonly reformulated in non-singular variables to avoid issues with circular and equatorial orbits. Nonetheless, the critical inclination singularity occurring when s^2=4/5, as follows from denominators in Eqs. (<ref>), (<ref>), and (<ref>), cannot be avoided due to its essential character <cit.>. Because the Hamiltonian (<ref>) is not applicable to librating-perigee orbits, accidental overflows can happen in a general propagation with the analytical solution. Several ways to circumvent this problem exist <cit.>.
§.§ Composition of transformations
The generating function of the short-period elimination is obtained by composing the elimination of the parallax and the Delaunay normalization into a single canonical transformation 𝒯^S=𝒯_1∘𝒯_2 using the Lie transforms technique (see <cit.>, or 2.1.4 of <cit.>). The composite transformation is readily derived from a generating function obtained as the direct sum of the respective generating functions, both written in the same set of variables. Thus, the first step is to reformulate W^D in the osculating (non-primed) variables. Instead of replacing the transformation equations and rearranging terms of the same order of the small parameter, the standard Lie transforms method can be applied to reformulate the generating function <cit.>.
The first-order term of the generating function for the composite transform 𝒯^S is
W_1^S=W_1^P+W_1^D,
where the last summand is obtained by substituting prime with non-primed variables in Eq. (<ref>). The second-order term W_2^S=W_2^P+W_2^D results in
W_2^S= GR_^3/p^3J̃_33/4(5s^2-4)esϕsinω
+GR_^4/p^43/64ϕ{[4
×(15J̃_4+7) -10(7J̃_4+3)s^2] e^2s^2cos2ω
+e^2 [5 s^4(21 J̃_4+1) -8 s^2(15 J̃_4-1)+8(3 J̃_4
-1)]
+2 [5 s^4(7 J̃_4-4)-4 s^2(10 J̃_4-9)+8
× (J̃_4-2)]
+4(5s^2-4) s^2[3ecos(f+2ω)
+ecos(3f+2ω) +3cos(2f+2ω)]
}
+GR_^4/p^4
×1/256∑_i=0^2∑_j=2i-3^2i+3sin(jf+2iω)∑_k=0^3η^ke^|j-2i|/1+η
× s^2iQ_i,j,k^*
-GR_^3/p^3J̃_3∑_i=0^1∑_j=2i-1^2i+3∑_k=0^1e^|j-2i-1|
e^2ks^2i+1(5s^2-4)^1-iq_i,j,k^*cos[jf+(2i+1)ω],
also expressed in osculating variables. The inclination polynomials q_i,j,k^* and Q_i,j,k^* are given in Tables <ref> and <ref>.
Analogously, the composition of the short- and long-period elimination into a single transformation 𝒯=𝒯^S∘𝒯_3 requires the reformulation of W^L in osculating variables. The first-order term is obtained from Eqs. (<ref>) and (<ref>) as
W_1=W_1^S+W_1^L,
where W_1^L results from swapping double-prime with non-primed variables. The second-order term W_2=W_2^S+W_2^L is given by
W_2= GR_^3/p^3J̃_33/8ϕ(5s^2-4)e s sinω
-GR_^2/p^2J̃_3^2e^2 s^2
×15s^2-13/8(5s^2-4)sin2ω
+GR_^4/p^43/64ϕ{[2(15J̃_4+7)
-5s^2(7J̃_4+3)]e^2s^2cos2ω +e^2[5 s^4(21 J̃_4+1)
-8 s^2(15 J̃_4-1)+8(3 J̃_4-1)]
+2[5 s^4(7 J̃_4
-4) -4 s^2(10 J̃_4-9)+8(J̃_4-2)]
+4(5s^2
-4)s^2[3ecos(f+2ω) +ecos(3f+2ω) +3
×cos(2f+2ω)] }
-GR_^3/p^3J̃_3s/1+η∑_i=0^1∑_j=i-1^2i+3∑_k=0^3
η^ke^|j-2i-1|q_i,j,k/(5s^2-4)^2cos[jf+(2i+1)ω]
+GR_^4/p^4
∑_i=0^2∑_j=-1^2i+3∑_k=0^3η^k/1+ηe^|j-2i|Q_i,j,k/(5s^2-4)^3sin(jf+2iω),
with the inclination polynomials q_i,j,k, Q_i,j,k listed in Tables <ref> and <ref>.
One can arrive at the fully-reduced Hamiltonian (<ref>) through different transformations. For example, the alternative sequence given by the elimination of the parallax, followed by elimination of the perigee and, lastly, Delaunay normalization <cit.>. This sequence will yield different second-order terms for the second and third transformations. However, the composition of their generating functions will still yield the same W_1 and W_2 as in Eqs. (<ref>) and (<ref>).
Once the generating functions have been merged into a single one, the mean–to–osculating transformation is computed from expressions analogous to Eqs. (<ref>) and (<ref>).
§ EFFICIENCY TESTS
Splitting the complete reduction of the zonal problem simplifies construction of the analytical solution, and helps understand essential aspects of the dynamics. This kind of decomposition can be applied also to the long-period elimination, but the modest improvement in memory storage does not warrant implementation in an analytic orbit propagator <cit.>. On the other hand, the composition into a single transformation has the drawback of increasing substantially the total size of the corrections. However, we will show that the transformation from secular terms to osculating variables in different steps can also have important shortcomings in practice.
Increasing the size of the formal series representing the solution does not necessarily lead to increased computational burden. The factorization of the inclination polynomials in the single-transform solution reveals multiple occurrences of common factors. This makes the composite transform amenable to additional optimizations compared to a sequence of different transformations. This was the case for the simpler J_2-problem solution, where an optimizing compiler produced code competitive with the stepwise evaluation of the analytical solution <cit.>. Additionally, when implementing an ephemeris generator, the splitting approach faces the obvious handicap of evaluating both action and angle variables at each step, whereas the single transformation only updates the angles. Thus, if memory requirements are not critical, the single transformation is preferable in practice.
To compare the relative merits of each approach, we implemented two analytical orbit generators based on the extended Brouwer's solution, retaining secular terms up to the third order of J_2 and periodic corrections up to the second order. Both codes were written in Fortran 77. The composite code uses the results from Section <ref>. The algorithm split in multiple transformations follows the classical approach of eliminating the parallax first, followed by removal of the perigee, and final Delaunay normalization. This sequence, denoted hereafter as PPD, is usually considered the most efficient approach <cit.>. The inverse transform (osculating to mean elements) is only evaluated once, during initialization of the analytical solution. Given that the impact for dense ephemeris output is negligible, we always used the simpler code (PPD) for the inverse transformation.
The only manual optimization in the implementation of the algorithms is the factorization of the inclination polynomials. Both codes were generated with the optimization option on Absoft Pro Fortran 16.0.2 compiler. The size of the single-transformation executable is 30% larger than the PPD implementation, a hint of its higher memory usage.
We tested the execution time for different orbital regimes. For a dense ephemeris evaluation of 3000 points we found that the single-transformation code was at least 30% faster than the classical PPD implementation in all cases.
Our implementations use transformations based on canonical polar variables (compatible with circular orbits) widely recognized as faster to evaluate <cit.>. Comparative performance may change slightly for other sets of variables, but the overall trend is expected to remain the same.
Regarding accuracy, both implementations behave as expected from a perturbation theory. There are differences in the errors for each test case, but they are of the same order as the neglected terms of the perturbed solution. Figures <ref>-<ref> compare the evolution over 30 days of the errors in the along-track, radial, and cross-track directions for three representative orbits borrowed from <cit.>. Namely, a TOPEX-type orbit, close to the critical inclination but still within the realm of validity of the analytical solution (a=7707.270 km, e=0.0001, I=66.04^∘);
a PRISMA-type orbit, strongly affected by the zonal perturbation due to its low altitude (a=6878.14 km, e=0.001, I=97.42^∘);
and a highly elliptic geostationary transfer orbit (GTO, a=24460 km, e=0.73, I=30^∘),
with large variations in the strength of the perturbations.
As shown in the figures, both approaches yield very similar error trends. The largest difference lies in the cross-track error of the Topex orbit (Fig. <ref>, bottom), for which there is no immediate explanation. Even in this case, the error magnitude remains within the expected bounds given the truncation order of the theory. It is worth recalling that the constants of the solution have been initialized in Breakwell and Vagners' style <cit.> for both codes. This balances the errors in the three directions, commonly improving by one or two orders of magnitude the secular growth of the errors.
§ CONCLUSIONS
We present a higher-order extension (second order for periodic corrections and third for secular terms) of Brouwer's gravitational solution to the artificial satellite problem. Routinely, analytical theories are constructed removing the periodic terms in multiple stages. A standard approach is preliminary simplification (parallax elimination) followed by removal of long- and short-period terms. This step-by-step strategy simplifies the construction of higher-order solutions and yields more compact formulas for the periodic corrections. Alternatively, the different stages can be composed into a single transformation between mean and osculating variables. The composite transform gives rise to more complex expressions, a disadvantage for understanding fundamental aspects of the dynamics, as well as for code readability. However, while the formal series representing the solution increases in size, it contains multiple repetitions of common factors in the inclination polynomials. These recurring terms open the door for additional optimization of the calculations. Furthermore, generating ephemeris with the standard —multi-step— approach requires evaluating action and angle variables at each step. The composite transform, on the other hand, only updates the angles further improving performance.
We compared the efficiency and accuracy of a popular multi-step implementation —PPD, short for parallax elimination, removal of perigee and Delaunay normalization— against the monolithic transformation for three representative orbits (TOPEX, PRISMA and GTO typologies). The composite approach lowered run times by more than 30% in all cases, while maintaining the accuracy expected from the truncation order of the theory. On the negative side, the code size, 30% larger than the PPD version, reflects the higher complexity of the associated formulas.
Our results show that, for the cases tested, the single-transformation algorithm delivers a substantial improvement in computational performance. This gain in speed must be balanced out against code simplicity and size, areas where the PPD implementation excels. In situations where the trade-off is acceptable, the monolithic approach should be considered seriously for building analytical propagators.
§.§ Acknowledgments
The authors acknowledge Khalifa University of Science and Technology's internal grant CIRA-2021-65/8474000413. ML also acknowledges partial support from the European Research Council (Horizon 2020 grant agreement No 679086 COMPASS) and the Spanish State Research Agency and the European Regional Development Fund (Projects PID2020-112576GB-C22 and PID2021-123219OB-I00, AEI/ERDF, EU). EF has been partially supported by the Spanish Ministry of Science and Innovation under projects PID2020-112576GB-C21 and PID2021-123968NB-100.
§ TABLES OF INCLINATION POLYNOMIALS
|
http://arxiv.org/abs/2307.05755v1 | 20230711192137 | QCD on Rotating Lattice with Staggered Fermions | [
"Ji-Chong Yang",
"Xu-Guang Huang"
] | hep-lat | [
"hep-lat",
"hep-ph",
"hep-th",
"nucl-th"
] |
[email protected]
Department of Physics, Liaoning Normal University, Dalian 116029, China,
Center for Theoretical and Experimental High Energy Physics, Liaoning Normal University, Dalian 116029, China.
[email protected]
Physics Department and Center for Particle Physics and Field Theory, Fudan University, Shanghai 200438, China,
Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai 200433, China,
Shanghai Research Center for Theoretical Nuclear Physics, Natural Science Foundation of China and Fudan University, Shanghai 200438, China
We investigate the finite-temperature quantum chromodynamics (QCD) on a rotating lattice with N_f=2+1 staggered fermions and the projective plane boundary condition. We observe a negative rotational rigidity (defined in the main text) and a negative quark spin susceptibility associated with the chiral vortical effect. In contrast to most of the effective model predictions, we find that the chiral condensate decreases and the Polyakov loop increases with imaginary rotation, implying a rotational catalysis of chiral symmetry breaking and confinement by real rotation. We determine the phase boundaries for both chiral and confinement-deconfinement phase transitions on the Ω_I-T plane, where Ω_I is the imaginary angular velocity.
QCD on Rotating Lattice with Staggered Fermions
Xu-Guang Huang
August 12, 2023
===============================================
Introduction. — Fast rotation plays an important role in many quantum chromodynamics (QCD) systems. For instance, some neutron stars may rotate at angular velocities close to their Keplerian frequencies, which can affect their evolution, structure, magnetic fields, and stability <cit.>. In relativistic heavy-ion collisions at the RHIC and LHC, very strong fluid vorticity (i.e., local angular velocity of the fluid cell) can be generated by the large angular momentum of the colliding nuclei <cit.>. The rotation or fluid vorticity can significantly influence the quark-gluon plasma and induce novel spin-related quantum phenomena, such as the chiral vortical effect (CVE) <cit.>, which is the generation of vector or axial currents along fluid vortex, and the spin polarization of hyperons and spin alignment of vector mesons which have been observed recently <cit.>.
Recently, the effect of rotation on the QCD phase structure has been extensively investigated <cit.>. Effective models have shown that rotation, at finite temperature, density, and magnetic field, acts like an effective chemical potential <cit.>. (It has also been suggested that uniform rotation is thermodynamically invisible at zero temperature, density, and magnetic field <cit.>.) Consequently, rotation tends to reduce the chiral condensate and the chiral critical temperature T_χ <cit.>. Novel rotation-induced pion condensates may also emerge <cit.>. The effect of rotation on the deconfinement phase transition remains controversial. Holographic QCD models <cit.>, the hadron resonance gas model <cit.>, and perturbative calculations <cit.> suggest a decrease of the deconfinement temperature T_d by rotation. Other numerical simulations for pure gluons indicate mixed confinement-deconfinement phases <cit.>. However, lattice simulations of SU(3) gluondynamics appear to support an enhancement of confinement by rotation <cit.>.
This paper aims to provide a comprehensive study of hot QCD under rotation using lattice simulations with N_f=2+1 staggered fermions. The lattice QCD approach to the rotating system in the quenched approximation was first introduced in Ref. <cit.> and subsequent studies of the SU(3) pure Yang-Mills case were conducted in Refs. <cit.>. The rotation is implemented by simulating a rest state in a rotating frame, which is equivalent to simulating a rotating state in an inertial frame (See Appendix). In this work, we construct the lattice action following Ref. <cit.> but with dynamic staggered quarks and a projective plane boundary condition.
Formulation. — The QCD Lagrangian in a frame rotating about the z-axis at a constant real angular velocity Ω is ℒ_ QCD=ℒ_ G + ℒ_ F, where
ℒ_ G=-1/2g^2_sg^μνg^ρσ tr(F_μρF_νσ),
ℒ_ F=q̅[iγ _μ(∂ _μ+iA_μ+Γ _μ)-m]q ,
and A_μ and q are gluon and quark fields, and m= diag(m_l, m_l, m_s) is the mass matrix for N_f=2+1 flavors. The Lorentzian-signature metric and the spin connection Γ_μ are given by
(g_μν)=([ 1-r^2Ω^2 yΩ -xΩ 0; yΩ -1 0 0; -xΩ 0 -1 0; 0 0 0 -1 ]),
Γ _μ=-i/4σ ^abw_μ ab, w_μ ab=g_αβe_a^α(∂ _μe_b^β+Γ ^β_μνe_b^ν),
where r=√(x^2 + y^2) is the transverse distance, Γ ^μ_αβ is the Christoffel symbol, σ^ab≡ i [γ^a, γ^b]/2, and e_a^μ is the vierbein with e_0=(1,yΩ,-xΩ,0), e_1=(0,1,0,0), e_2=(0,0,1,0), and e_3=(0,0,0,1). The Euclidean action is obtained by the Wick rotation which, however, generates a complex Euclidean-signature metric and causes a “sign problem” in Monte Carlo simulations. To avoid this sign problem, we follow Ref. <cit.> and replace Ω by an imaginary rotation Ω_I, Ω→ i Ω _I, in our simulation.
We use the same discretization of the gauge action S_ G as in Ref. <cit.> (see also Appendix). We discretize the quark action S_ F using staggered quarks:
S_ F=∑ _n {∑ _μ∑ _δ = ±μψ̅ (n)V(n,n+δ̂)ψ (n+δ̂)
+2a m ψ̅ (n)ψ (n)+Ω_I/4(b_x,y-b_y,x)
+aΩ_I/8∑ _s_x,y,τ=± 1s_τη _s_ττ(n)η _xy(n)
×ψ̅ (n)U(n,n+∑ _i=x,y,τs_i î)ψ(n+∑ _i=x,y,τs_i î)},
where ψ is the staggered quark field <cit.>, a is the lattice spacing, β≡ 2N_c / g_s^2 is the lattice coupling (N_c=3), μ̂ is the dimensionless unit vector along μ-axis, and η _μ(n) is defined on links by η _μ(n)=(-1)^∑ _ν<μn_ν and η _-μ(n)=-η _μ(n-μ̂). In the above, U(n_1,n_2) is the average of the shortest Wilson lines connecting n_1 and n_2, V(n_1,n_2)≡ U(n_1,n_2)η(n_1,n_2) with η (n_1,n_2) the product of η _μ along the shortest path connecting n_1 and n_2, and
b_i,j≡∑ _s_τ,i=± 1 s_τs_i n_j ψ̅(n-s_i î)V(n-s_i î,n+s_ττ̂+s_i î)
×ψ (n+s_ττ̂ + s_i î),
η _xy(n)=η _x(n)η _y(n).
A uniformly rotating system must be finite to preserve causality. Therefore, the boundary condition along the directions perpendicular to the rotation axis is crucial. We use a special periodic boundary condition that makes the x y-plane of the lattice a projective plane, as shown in Fig. <ref>. This boundary condition has two advantages. First, it ensures a smooth gauge action at the boundaries and reduces the rotation-independent effects from the boundaries <cit.>. Second, unlike the Dirichlet boundary condition, it allows rotational spinor eigenstates to exist (see Appendix).
We implement the Monte Carlo simulation on N_x^3× N_τ=12^3× 4 and 12^3× 6 lattices with N_f=2+1 dynamic staggered quarks and various β. The bare masses are m_l≈ 20 MeV for u, d quarks and m_s=5m_l for the strange quark. We set the chemical potential to zero. Other lattice parameters are listed in the Appendix.
Rotational rigidity and spin susceptibility. —
We first consider the response of the QCD matter to rotation, namely, the generation of angular momentum and fermionic current (i.e., the CVE) by rotation. The angular momentum in QCD draws a lot of attention because of the proton spin puzzle <cit.> and the observation of global spin polarization/alignment of hadrons in heavy-ion collisions <cit.>. Since the chemical potential is zero, only the axial CVE is present, which is closely related to the quark spin contribution to the angular momentum. We concentrate on the case of N_τ=6.
The angular momentum (density) J of QCD can be decomposed into different components in different ways. We use Ji's decomposition <cit.>, J= J_G+∑_f ( s_f+ L_f), in which
J_G = ∑ _a x×( E^a × B^a),
s_f = q_f ^†Σ/2 q_f,
L_f = 1/i q_f^† x× (∂-i A) q_f,
where Σ= diag(σ, σ) with σ the Pauli matrices. In this decomposition, J_G is the gluon angular momentum, s_f is the quark spin of flavor f, and L_f is the quark orbital angular momentum; they are all gauge invariant. When r is small, it is known that the radial distributions of J^z_G(r) and L^z_f(r) are approximately quadratic, while s^z_f(r) is insensitive to r <cit.>. Under imaginary rotation, the angular momentum is also imaginary. We thus compute the ratios ξ_f≡ s^z_f(r)/Ω, ρ_J_G≡ J^z_G(r)/(Ω r^2), ρ_L_f≡ L^z_f(r)/(Ω r^2) on lattice which are real before and after Wick rotation. They characterize the strengths of different components of J in response to uniform rotation. We call ρ's the rotational rigidities and ξ_f the spin susceptibility of flavor f [We use ξ_f instead of the more natural definition χ_Ω^f≡∂ s_f^z/∂Ω for spin susceptibility because ξ_f can be more accurately measured on lattice. They almost coincide as s_f^z is roughly linear in Ω within the computational error (see Fig. <ref>).]. Note that, for a non-relativistic system, such defined ρ reduces to the mass density.
We average ⟨ρ _J_G⟩, ⟨ρ _L_f⟩ and ⟨ξ _l,s⟩ over the whole lattice and show the results in Figs. <ref> and <ref>. Within the statistical error, ⟨ρ̅_L_s⟩ (space-averaged quantities are denoted with an overbar) is the same as ⟨ρ̅_L_l⟩, and therefore only ⟨ρ̅_L_l⟩ is shown. We find that, over a large temperature regime T≈100-300 MeV, ⟨ρ̅_J_G⟩ is negative with magnitude slightly decreasing with the growth of Ω_I. On the other hand, ⟨ρ̅_L_l,s⟩ are also negative but insensitive to Ω_I. The negativity may indicate a thermodynamic instability of QCD against uniform rotation. A recent lattice simulation for SU(3) pure gluons obtains a negative moment of inertia, which is equivalent to the negativity of ⟨ρ̅_J_G⟩ <cit.>. Note that negative angular momenta were also obtained in Ref. <cit.> using Wilson-Dirac fermions in the quenched approximation.
In the explored temperature region, T≈100-300 MeV, ξ̅_l,s are found almost unchanged for different Ω_I and T: ξ̅_l=-0.0063(1) a^-2 and ξ̅_s=-0.0068(1) a^-2. Using iγ _4^E Σ_3^E/2=γ _3^Eγ _5^E/2, this translates to the CVE for axial currents J^z_5f=⟨q̅_fγ^3γ_5 q_f⟩ under real rotation Ω, J̅^a_5f=2ξ̅_fΩ, with 2ξ̅_f≈ -0.472(5) T^2 (insensitive to flavor) re-interpreted as the (unrenormalized) CVE conductivities.
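As a rough consistency check of the quoted value (our own back-of-the-envelope arithmetic, assuming the isotropic relation T=1/(N_τ a) with the N_τ=6 lattice used in this section), the conversion from lattice units to units of T^2 can be reproduced in a few lines of Python:

# Convert the measured spin susceptibilities from lattice units (a^-2) to T^2,
# assuming T = 1/(N_tau * a) with N_tau = 6, so a^-2 = (N_tau * T)^2 = 36 T^2.
N_tau = 6
xi_l, xi_s = -0.0063, -0.0068          # measured values in units of a^-2
conv = N_tau**2
sigma_l = 2 * xi_l * conv              # 2*xi_l in units of T^2
sigma_s = 2 * xi_s * conv              # 2*xi_s in units of T^2
print(sigma_l, sigma_s, 0.5 * (sigma_l + sigma_s))
# -> about -0.45, -0.49, and -0.47, consistent with the quoted -0.472(5) T^2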
Polyakov loop and chiral condensate. — To investigate the rotational effects on QCD phase transitions, we measure the Polyakov loop and chiral condensate. We use the renormalized Polyakov loop <cit.>
L_ ren=exp (-N_τc(β)a/2)L_ bare,
where L_ bare is the bare Polyakov loop defined as L_ bare= | tr[∑ _ n∏ _τU_τ( n, τ)]|/3N_x^3, where U_τ is the gauge link along τ direction and c(β) is a subtraction constant to match the static quark potential V(r)=12π / r -σ r (σ is the string tension) at r=1.5r_0, where r_0=0.5 fm is the Sommer scale <cit.>. We use the renormalized chiral condensates
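For illustration, a minimal (unoptimized) sketch of how L_ bare and L_ ren could be evaluated from the temporal links of a single configuration is given below; the array layout and the argument c_beta_a (standing for c(β)a) are assumptions of the sketch rather than the actual measurement code used here.

import numpy as np

def bare_polyakov_loop(U_t):
    # U_t[nx, ny, nz, tau] holds the 3x3 SU(3) link in the time direction.
    Nx = U_t.shape[0]
    N_tau = U_t.shape[3]
    total = 0.0 + 0.0j
    for n in np.ndindex(Nx, Nx, Nx):
        loop = np.eye(3, dtype=complex)
        for tau in range(N_tau):
            loop = loop @ U_t[n + (tau,)]   # ordered product of temporal links
        total += np.trace(loop)
    return abs(total) / (3 * Nx**3)

def renormalized_polyakov_loop(L_bare, N_tau, c_beta_a):
    # c_beta_a stands for c(beta)*a, fixed by matching the static quark
    # potential at r = 1.5 r0 as described in the text.
    return np.exp(-N_tau * c_beta_a / 2.0) * L_bare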
Δ _l,s(T,Ω_I)=⟨ψ̅_lψ_l⟩ _T,Ω_I -m_l/m_s⟨ψ̅_sψ_s ⟩ _T,0/⟨ψ̅_lψ_l⟩ _0,0-m_l/m_s⟨ψ̅_sψ_s⟩ _0,0,
where ⟨ψ̅ _fψ _f⟩ _T,Ω_I is measured at temperature T and imaginary angular velocity Ω_I.
The subtracted term in the numerator is to eliminate the quadratic divergence proportional to m_f, and the denominator is a normalization to eliminate multiplicative renormalization factors, so that Eq. (<ref>) is finite and well defined in the continuous limit <cit.>.
In Fig. <ref>, we show ⟨ L_ ren⟩ and Δ _l,s as functions of Ω_I.
In the explored temperature region, we observe that ⟨ L_ ren⟩ increases with Ω_I while Δ _l,s decreases with Ω_I. This is more clearly seen at low temperatures, while at higher temperatures, both ⟨ L_ ren⟩ and Δ _l,s become less sensitive to Ω_I. Such behavior indicates that the imaginary rotation tends to melt the chiral condensate and to break the confinement. Similar behavior was observed in simulations for pure gluons <cit.> and with Wilson fermions <cit.>.
To locate the phase transition lines, we examine the disconnected susceptibilities of chiral condensate and Polyakov loop. They are defined as χ _f, disc=N_f^2[⟨ tr(D_f^-1)^2⟩-⟨ tr(D_f^-1)⟩^2]/16N_x^3N_τ and χ _L=N_x^3(⟨ L_ bare⟩ ^2 -⟨ L_ bare^2⟩) <cit.>. The critical imaginary angular velocities, Ω_Ic's, for chiral and confinement-deconfinement phase transitions are determined according to the peaks of χ _l, disc and χ_L, respectively, which are shown in Fig. <ref> with N_τ=4 being used. It can be found that, Ω _Ic's for chiral and confinement-deconfinement phase transitions almost coincide with each other and they both decrease with decreasing temperature, exhibiting the (imaginary) rotational suppression of the critical temperatures.
Real rotation. — Since an imaginary angular velocity Ω_I is used in the simulation, an analytical continuation is needed to obtain the corresponding results for real rotation Ω. For an observable O(Ω) analytical in a domain |Ω|<1/R (R is the transverse radius of the system) on the complex Ω plane, this is achieved by the replacement Ω_I→ -iΩ. For a moment, let us suppose that the chiral condensate and the Polyakov loop are such observable. Given that both the chiral condensate and the Polyakov loop are even functions of Ω_I, our simulations suggest that a real uniform rotation enhances the chiral condensate and suppresses the Polyakov loop; that is, we find a rotational catalysis of chiral symmetry breaking and confinement of QCD by real uniform rotation at finite temperatures. The result about the chiral condensate sharply contradicts the studies based on effective models which predict a suppression of the chiral condensate by real rotation <cit.>. One possible reason is that these effective models do not include properly the contribution from gluon dynamics. A recent study based on Nambu-Jona-Lasinio model showed that, once a gluon-dressed four-fermion coupling is introduced, the enhancement of chiral condensate by real rotation can occur <cit.>. Other non-perturbative tools, e.g., the functional renormalization group and Dyson-Schwinger equation method, may provide a test for this.
Our result about the Polyakov loop extends and supports the previous results for pure SU(3) gluondynamics <cit.>, but contradicts previous model studies <cit.> which predict a catalysis of deconfinement by real rotation. Such a contradiction has led some of these works to question the validity of the analytical continuation around Ω∼ 0 <cit.> [In these studies, a finite imaginary rotation is introduced by imposing a twisted boundary condition in the imaginary-time direction (e.g., for gluons, A^a_μ(τ, r, ϕ, z)=A^a_μ(τ+1/T, r, ϕ-Ω_I/T, z)) rather than by going into the rotating frame as we used.]. To test the validity of the analytical continuation, we consider the Polyakov loop in the quenched limit. For a small real rotation, we have the Taylor expansion ⟨ L_ bare⟩(Ω)=c_0+c_2(aΩ)^2 + ⋯ (the odd powers vanish because ⟨ L_ bare⟩(Ω) is time-reversal even). Choosing lattice coupling β=5.7 and N_x^3× N_τ=12^3× 4 as an example, we obtain c_0=0.10498(3) and c_2=(-3.0± 1.1)× 10^2, which indeed shows that a real uniform rotation suppresses the Polyakov loop. A detailed comparison between the Taylor expansion and the analytical continuation Ω_I→ -iΩ is given in the Appendix.
Summary. — In this paper, we studied rotating hot QCD matter using lattice approach. The rotation was implemented as simulating a rest state in a rotating frame with an imaginary angular velocity and with a special periodic boundary condition such that the xy plane is a projective plane. We computed different components of QCD angular momentum and obtained the rotational rigidity (see definition in the main text) and spin susceptibility. We observed a negative QCD rotational rigidity which may indicate a possible thermodynamic instability against uniform rotation. We also computed the Polyakov loop and chiral condensate as well as the corresponding susceptibilities. We found that the imaginary rotation suppresses both confinement-deconfinement and chiral critical temperatures. This translates to a surprising conclusion after analytic continuation to real rotation: the real uniform rotation tends to catalyze the chiral symmetry breaking and color confinement. This conflicts sharply with previous model studies. Further theoretical and numerical works are necessary to fully understand the underlying mechanism.
Acknowledgement.— We thank H.-L. Chen, K. Fukushima, K. Mameda, A. Yamamoto for useful discussions. X.-G.H. is supported by the Natural Science Foundation of China (Grant No. 12147101, No. 12225502 and No. 12075061), the National Key Research and Development Program of China (Grant No. 2022YFA1604900), and the Natural Science Foundation of Shanghai (Grant No. 20ZR1404100). J.-C. Y. is supported in part by the Natural Science Foundation of China (No. 12147214), the Natural Science Foundation of the Liaoning Scientific Committee (No. LJKZ0978).
§ APPENDIX
Lattice discretization. — The rotation is along z-axis. We discretize the gauge action S_ G as
S_ G=β/N_c∑ _n{∑ _μ >ν Re tr[1-U̅_μν(n)]-Ω_I(x Re tr[V_τ xy(n)
+V_τ zy(n)]-y Re tr[V_τ yx(n)+V_τ zx(n)])
+Ω_I^2(x^2 Re tr[1-U̅_yz(n)]+y^2 Re tr[1-U̅_xz(n)].
. +(x^2+y^2) Re tr[1-U̅_xy(n)]-xy Re tr[V_xzy(n)])},
where U̅_μν and V_μνσ are defined in Ref. <cit.> and depicted in Fig. <ref>.
The fermion action is given in Eq. (<ref>). The angular momentum density (only z component is nonzero) is discretized in the same way as the action. Under imaginary rotation, the angular momentum is also imaginary. Thus we measure the imaginary part of the angular momentum on lattice which, after analytical continuation to real rotation, gives the angular momentum under real rotation. The results for the different components of the angular momentum are
J^z_ G(n) =-a^-4β/N_c{x Re tr[V_τ x y(n)+V_τ z y(n)]
-y Re tr[V_τ y x(n)+V_τ z x](n)},
L^z_f(n) =a^-4/4(b_x,y-b_y,x),
s^z_f(n) =a^-3/8∑ _s_x,y,τ=± 1s_τη _s_ττ(n)η _xy(n) ψ̅ (n)
× U(n,n+∑ _i=x,y,τs_i î)ψ(n+∑ _i=x,y,τs_i î).
The rotational rigidities and the spin susceptibility are averaged on lattice as
ξ̅_f = 1/N_ tasteN_r_ max∑ _n_x^2+n_y^2<r_ max^2 s^z_f(n) /Ω_I,
ρ̅_J_G= 1/N_r_ max∑ _n_x^2+n_y^2<r_ max^2 J^z_G(n) /Ω_I r^2,
and similarly for ρ̅_L_f. Here, N_r_ max is the number of sites satisfying n_x^2+n_y^2<r_ max^2, N_ taste is the taste degeneracy (N_ taste=4 for L_f). We choose r_ max=6, 5, and 7 for J_G, L_f, and s_f in the simulation, respectively.
Equivalence with rotating ensemble in inertial frame. — In the main text, we have addressed that the rotation is implemented by simulating a rest state in a rotating frame. Here, we demonstrate the equivalence of this approach with simulating a rotating state in an inertial frame using the transfer matrix method <cit.>. For the sake of clarity and simplicity, we consider a pure U(1) gauge theory as an example. In this case, Eq. (<ref>) can be written as,
S_G=a_s^3 a_τ∑ _n{∑ _μ >νθ _μν^2/2.
.+Ω _I[x(θ _τ x(n)θ _xy(n)+θ _τ z(n)θ _zy(n))..
..-y(θ_τ y(n)θ _yx(n)+θ_τ z(n)θ _zx(n))].
.+Ω _I^2/2[(xθ _zy(n)-yθ _zx(n))^2+r^2θ ^2_xy(n)]},
where θ _μν(n)≡(Δ _μθ _ν(n)/a_μ)-(Δ _νθ _μ(n)/a_ν) (To avoid confusion, we use θ _μν instead of F_μν to denote U(1) gauge-field strength tensor), and Δ _μθ _ν(n)≡θ _ν(n)-θ _ν(n-μ), r^2=x^2+y^2, a_s and a_τ are lattice spacings of spatial and time directions, respectively.
In temporal gauge, θ _τ=0, θ _τ i=Δ _τθ _i(n)/a_τ, and we have
S_G=∑ _n{1/2∑ _i >jθ _ij^2(n)+1/2(Δ _τ/a_τθ _x(n)+xΩ _Iθ _xy(n))^2.
.+1/2(Δ _τ/a_τθ _y(n)+yΩ _Iθ _xy(n))^2.
.+1/2(Δ _τ/a_τθ _z(n)+xΩ _Iθ _zy(n)-yΩ _Iθ _zx(n))^2}.
Equation (<ref>) is a sum of actions which depend on only two neighboring time slices, and therefore the partition function is Z=∑_{θ}exp (-S_G)=∑_{θ}∏ _τ T(τ+1,τ) with
-log T(τ+1,τ)=a_τa_s^3∑ _ n{1/4∑ _i >jθ _ij^2+ 1/4∑ _i >jθ' _ij^2.
.+1/2a_τ^2[
(θ'_x-θ _x+xΩ _I a_τθ _xy)^2 +(θ'_y-θ _y+yΩ _I a_τθ _xy)^2] .
.+1/2a_τ^2(θ'_z-θ _z+Ω _I a_τ(xθ _zy-yθ _zx))^2 },
where θ ' ( n)= θ( n, τ+1), θ( n) = θ( n,τ).
Introducing generalized coordinate operator θ̂( n) and corresponding generalized momentum operator L̂( n) satisfying [L̂ _i( n),θ̂ _j( n')]=-iδ _ijδ _ n, n', and using
⟨θ' | exp (-1/2a_τ/a_s^3L̂^2 )exp(ia_τL̂f(θ̂))|θ⟩
= const.×exp(-a_s^3/2a_τ[θ'-θ+a_τf(θ)]^2)+𝒪(a_τ^2)
where f(θ̂) is an arbitrary function of coordinate operator only, T(τ+1,τ) can be written as an operator sandwiched with {θ'} and {θ} configurations,
T(τ,τ+1)=⟨θ' |exp{-a_s^3a_τ1/4∑ _i >jθ̂ _ij^2}
×exp{-a_τ/2a_s^3∑ _iL̂_i^2 + ixa_τΩ _IL̂_xθ̂ _xy+ iya_τΩ _IL̂_yθ̂ _xy.
. +ia_τΩ _IL̂_z(xθ̂ _zy-yθ̂ _zx) }
×exp{-a_s^3a_τ1/4∑ _i >jθ̂ _ij^2}|θ⟩ + 𝒪(a_τ^2).
As a result, T(τ+1,τ)=⟨θ '|T̂|θ⟩ + 𝒪(a_τ^2) with,
T̂=exp{a_τ(-1/2a_s^3∑ _iL̂_i^2 -a_s^31/2∑ _i >jθ̂ _ij^2..
..+ ixΩ _IL̂_xθ̂ _xy+ iyΩ _IL̂_yθ̂ _xy +iΩ _IL̂_z(xθ̂ _zy-yθ̂ _zx) )}.
Therefore, the partition function can be written as
Z=∑ _{θ ^1,θ ^2,…}⟨θ ^N_τ |T̂|θ ^N_τ-1⟩…⟨θ ^3 | T̂|θ ^2 ⟩⟨θ ^2 | T̂|θ ^1 ⟩ ,
where {θ^i} is the configuration at τ_i time slice.
By requiring periodic boundary condition in the time direction θ ^1=θ ^N_τ, Z= tr[T̂^N_τ]. Compared with Z= tr[exp(- Ĥ/T)], the Hamiltonian operator can be read out as
Ĥ=∑ _ n(1/2a_s^3∑ _iL̂_i^2 +a_s^31/2∑ _i >jθ̂ _ij^2.
.- iΩ _I( (x L̂_x + y L̂_y)θ̂ _xy+ L̂_z(xθ̂ _zy-yθ̂ _zx) )),
with T = a_τ^-1/N_τ. Equation (<ref>) is nothing but the Hamiltonian for a rotating U(1) system in an inertial frame, Ĥ=Ĥ_0+iΩ _I Ĵ^z_G, with Ĥ_0 the Hamiltonian without rotation and
Ĵ^z_G = - ( (x L̂_x + y L̂_y)θ̂ _xy+ L̂_z(xθ̂ _zy-yθ̂ _zx) ).
being just the angular momentum operator. This shows that our lattice action [Eq. (<ref>)], introduced by considering a rest state in rotating frame, represents a rotating state in an inertial frame.
Spinor eigenstates for projective plane boundary condition. — In cylindrical coordinate, the general spinor eigenstates can be written as <cit.>
u=𝒩_k e^inθ+ik_zz[ J_n(k_tr); se^iθJ_n+1(k_tr); k_z-isk_t/E_k+mJ_n(k_tr); ik_t-sk_z/E_k+me^iθJ_n+1(k_tr); ],
v= 𝒩_k e^inθ-ik_zz[ k_z-isk_t/E_k+mJ_n(k_tr); ik_t-sk_z/E_k+me^iθJ_n+1(k_tr); J_n(k_tr); -se^iθJ_n+1(k_tr); ],
for particle and anti-particle modes, respectively, where k_z,t are momenta along and transverse to z-axis, n∈ℤ, s=± is the transverse helicity, J_n(x) is the Bessel function of the first kind, and 𝒩_k is a normalization factor.
The Dirichlet boundary condition, u(R)=v(R)=0, is not feasible because the zeros of J_n(x) and J_n+1(x) are different. However, functions of the form f(θ, r)=e^inθg(r) satisfy the projective plane boundary condition when n is even.
Therefore, for an even (odd) n, one can choose k_t such that J_n+1(k_tR)=0 [J_n(k_tR)=0]. In this way, we find that the spinor eigenstates u and v are compatible with the projective plane boundary condition.
Lattice parameters. — The lattice spacing is matched by measuring the static quark potential V(r) <cit.> and Sommer scale r_0 <cit.> at low temperature and zero angular velocity with projective plane boundary condition using the methods described in Refs. <cit.>. Adopting r_0=0.5 fm <cit.>, the lattice coupling β and the corresponding lattice spacings are listed in Table <ref>. Denoting the molecular-dynamics time unit as TU, when doing the matching, TU_ th trajectories are discarded for thermalization, and TU_m configurations are measured. Throughout this work, the statistical error is estimated as σ =√(2TU _ int)σ _jk where σ _jk is calculated by jackknife method, 2TU _ int is the span of TU when the two configurations can be regarded as being independent, and TU _ int is calculated by using the autocorrelation method with S=1.5 <cit.> on the bare chiral condensate of light quarks defined as ⟨ψ̅ _lψ _l⟩ = - tr[D_l^-1] / 4 N_x^3N_τ.
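A minimal sketch of this error estimate, combining a delete-one jackknife with an integrated autocorrelation time TU_ int that we assume has been computed separately, reads:

import numpy as np

def jackknife_error(samples):
    # Delete-one jackknife error of the mean of a measured time series.
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    total = samples.sum()
    means = (total - samples) / (n - 1)          # leave-one-out means
    return np.sqrt((n - 1) / n * np.sum((means - means.mean())**2))

def corrected_error(samples, tau_int):
    # sigma = sqrt(2 * TU_int) * sigma_jk, with TU_int obtained separately from
    # the autocorrelation analysis of the bare chiral condensate.
    return np.sqrt(2.0 * tau_int) * jackknife_error(samples)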
When the rotation is turned on, for each β, 1.5TU_ th+(TU_ th+TU_m)× (K+1) trajectories with K=8 are simulated sequentially with increasing Ω_I=ΔΩ_I × k, 0≤ k≤ K. Here, in the case of N_τ=6, TU_ th=100, and TU_m=1900 and 3400 for β=4.98 and 5.1, respectively, while TU_m=1400 for the other β's in Table <ref> within β=4.86∼ 5.1; in the case of N_τ=4, TU_ th=100 and TU_m=900 for β=4.72∼ 4.98. We take ΔΩ_I = Ω _I max / K, with Ω _I max approximately the maximally allowed angular velocity on the lattice when Wick transformed into real rotation. Therefore, with N_x=12, aΩ _I max = 0.128 is used such that the maximal linear velocity is aΩ _I max× 11√(2)/2 ≈ 0.996<1.
Small real angular velocity and Taylor expansion. — Consider a real rotation with angular velocity Ω along the z-axis. The action is expanded as S=S_0+aΩ S_Ω+(aΩ)^2 S_Ω^2 + ⋯, where S_Ω and S_Ω^2 denote the first- and second-order coefficients of the expansion of the action in aΩ. For an operator O that is time-reversal even and does not depend on Ω explicitly (the operators for chiral condensate and Polyakov loop are both such operators), we have the following Taylor expansion for ⟨ O⟩,
⟨ O⟩(Ω)= c_0+c_2(aΩ)^2 + 𝒪(aΩ)^4.
The coefficients c_0=⟨ O⟩_0 and
c_2 =1/2d^2⟨ O⟩/d(aΩ)^2|_Ω=0
= 1/2⟨ O⟩ _0 ⟨ 2S_Ω^2-(S_Ω)^2⟩ _0 - 1/2⟨ O (2S_Ω^2-(S_Ω)^2) ⟩ _0 ,
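Given per-configuration measurements of O, of S_Ω, and of the second-order action term S_Ω^2 on the Ω=0 ensemble, this estimator can be coded directly; a minimal sketch:

import numpy as np

def taylor_c2(O, S1, S2):
    # Per-configuration measurements at Omega = 0:
    #   O  : the observable, S1 : S_Omega,
    #   S2 : the second-order action term (not the square of S1).
    # c2 = 1/2 <O>_0 <2 S2 - S1^2>_0 - 1/2 <O (2 S2 - S1^2)>_0
    O, S1, S2 = map(np.asarray, (O, S1, S2))
    X = 2.0 * S2 - S1**2
    return 0.5 * O.mean() * X.mean() - 0.5 * (O * X).mean()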
where ⟨…⟩ _0 denotes average at Ω = 0. For O=L_ ren, using quenched approximation at lattice coupling β = 5.7 on a N_x^3× N_τ=12^3× 4 lattice (temperature T=296.5(5) MeV) with the projective plane boundary condition, 3× 10^6 configurations are generated, and we find
⟨ L_ bare⟩ =0.10498(3)+ (-3.0± 1.1) × 10^2(aΩ)^2+ 𝒪(aΩ)^4,
which supports the conclusion that the real rotation drives the rotating hot QCD towards the confinement phase. Supposing that an analytical continuation to imaginary rotation by replacing Ω with iΩ_I is allowed, we then make a comparison with the simulation directly performed for imaginary rotation, see Fig. <ref>. It can be seen that, ∂ ^2 L_ ren / ∂ (aΩ) ^2 ≈ -10^3, which implies that the Taylor expansion is poorly converged, and the result of Taylor expansion is reliable only when aΩ≪ 1/√(10^3), which explains the deviation of analytical continuation from the result of direct simulation with imaginary rotation in Fig. <ref>. Nevertheless, the trends and orders of magnitude are consistent between the two methods.
|
http://arxiv.org/abs/2307.04323v1 | 20230710033248 | Optimal $(2,δ)$ Locally Repairable Codes via Punctured Simplex Codes | [
"Dong Wang",
"Weijun Fang",
"Sihuang Hu"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Optimal (2,δ) Locally Repairable Codes via Punctured Simplex Codes
Dong Wang12,
Weijun Fang124,
and Sihuang Hu124
1
Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education,
Shandong University, Qingdao, 266237, China,
[email protected]
2
School of Cyber Science and Technology, Shandong University, Qingdao, 266237, China, {fwj, husihuang}@sdu.edu.cn
4
Quancheng Laboratory, Jinan 250103, China
==================================================================================================================================================================================================================================================================================================================================================================================================================
This research is supported in part by National Key Research and Development Program of China under Grant Nos. 2021YFA1001000 and 2022YFA1004900, the National Natural Science Foundation of China under Grant No. 62201322, the Natural Science Foundation of Shandong under Grant No. ZR2022QA031. (Corresponding Author: Weijun Fang)
Locally repairable codes (LRCs) have attracted a lot of attention due to their applications
in distributed storage systems. In this paper, we provide new constructions of optimal (2, δ)-LRCs.
Firstly, by the techniques of finite geometry, we present a sufficient condition to guarantee a punctured simplex code to be a (2, δ)-LRC.
Secondly, by using character sums over finite fields and Krawtchouk polynomials, we construct several families of LRCs with new parameters.
All of our new LRCs are optimal with respect to the generalized Cadambe-Mazumdar bound.
§ INTRODUCTION
In order to ensure the reliability of nodes in large-scale distributed
storage systems, the concept of locally repairable codes was first proposed in <cit.>.
Let [n]={1,2,...,n}. For a linear code C of length n over the finite field 𝔽_q, a code symbol c_i of C has locality r
if there exists a subset R_i⊆ [n] such that i∈ R_i, |R_i|≤ r+1,
and c_i is a linear combination of {c_j}_j∈ R_i\{i} over 𝔽_q.
If each symbol of a codeword in C has locality r, then C is called
a locally repairable code with locality r or an r-LRC. However, when multiple node failures happen in a distributed storage
system, r-LRCs cannot recover the failed nodes successfully. To address this problem, Prakash et al. <cit.> extended the concept of r-LRCs to
(r,δ)-LRCs which can tolerate any δ-1 erasures. A code symbol c_i of C has locality
(r,δ) if there exists a subset R_i⊆ [n] such that i∈ R_i,|R_i|≤ r+δ-1
and d(C|_R_i)≥δ where C|_R_i is the punctured code on the set [n]\ R_i.
The code C is called an (r,δ)-LRC if all code symbols have locality (r,δ). Obviously when δ=2,(r,δ)-LRCs reduce to r-LRCs.
§.§ Known Results about (r,δ)-LRCs
In <cit.>, analogous to the classical Singleton bound for general codes, the following Singleton-type bound
for an (r,δ)-LRC with parameters [n,k,d] is given as
d≤ n-k+1-(⌈ k/r⌉ -1)(δ -1).
If an (r,δ)-LRC achieves the Singleton-type bound (singletonBound) with equality, then the code
is called a Singleton-optimal (r,δ)-LRC. Due to its interesting
algebraic structures and
practical applications in distributed storage systems, several constructions of Singleton-optimal (r,δ)-LRCs
have been proposed in <cit.>.
Note that the Singleton-type bound is independent of the field size. In <cit.>, Cadambe and Mazumdar derived the first field-dependent bound for q-ary r-LRCs with parameters [n,k,d],
k≤min_s∈ℤ_+{sr+k_opt^(q)(n-s(r+1),d)},
where k_opt^(q)(n,d) is the maximum dimension of a q-ary linear code of length n and minimum distance d. The generalized Cadambe-Mazumdar bound was considered in <cit.>, which stated that for a q-ary (r,δ)-LRC with parameters [n,k,d],
k≤min_s∈ℤ_+{sr+k_opt^(q)(n-s(r+δ-1),d)}.
We call a code achieving the generalized C-M bound (CMbound_rdeltaLRC) with equality a k-optimal (r,δ)-LRC.
In <cit.>, the authors proved that the simplex code is a k-optimal 2-LRC. By deleting some columns from
the generator matrix of the simplex code, several new families of k-optimal LRCs with localities 2 or 3 were proposed in <cit.> and
<cit.>. In <cit.>, Luo et al. presented several binary k-optimal 2-LRCs by deleting or adding some columns from a binary simplex code and used character sums to determine their parameters.
Motivated by works of <cit.>, Luo et al. constructed a family of p-ary
linear codes and demonstrated that they are k-optimal 2-LRCs in some cases. Tan et al.<cit.>
determined the locality of some known linear codes and showed that many of these codes are k-optimal.
§.§ Our Contributions and Techniques
In this paper, we focus on new constructions of (2, δ)-LRCs. We follow the construction of linear codes presented in <cit.>. This construction has been applied to secret sharing schemes or LRCs by many researchers <cit.>.
It is more intuitive to describe the properties of (2,δ)-LRCs in the language of finite geometry,
which converts the analysis of locality into the question of how many lines pass through a point in projective geometry.
From the finite geometry point of view, we give a simple but useful sufficient condition to guarantee a linear code to be a (2,δ)-LRC (see Theorem suffi_condi_lrc). We generalize some results proposed by Luo et al.<cit.> (see Theorems thm_generaliz_luo, thm_generaliz_luo_loc and loc_gen_luo) and Silberstein
et al.<cit.> (see Theorem gen_wt2). In particular, we extend the p-ary linear codes presented in <cit.> to the q-ary linear codes, where
p is a prime and q is the power of p, and determine their locality in some cases. Motivated by Silberstein's work on
r-LRCs, we utilize Krawtchouk polynomials to determine the parameters of some punctured simplex codes. Specifically speaking, if the punctured
columns from the generator matrix of a simplex code have certain weight, then determining the minimum distance of the punctured simplex code
is equivalent to determining the minimum value of Krawtchouk polynomials.
Then we construct two infinite families of k-optimal (2,q)-LRCs.
Our constructions are generalizations of the results of <cit.>.
Moreover, all our new LRCs are k-optimal with respect to the generalized C-M bound.
The rest of this paper is organized as follows. In Section II, we recall a general construction of linear codes given by Ding et al.<cit.>, and some basic notation and results on finite geometry and Krawtchouk polynomials.
In Section III, we consider (2,δ)-LRCs and present three infinite families of k-optimal (2,δ)-LRCs.
Section IV concludes the paper.
§ PRELIMINARIES
§.§ A General Construction of Linear Codes
In this subsection, we describe a general construction of linear code which was given by Ding et al.<cit.>.
Let m be a positive integer, q a power of some prime p,
𝔽 _q the finite field containing q elements and 𝔽_q^m the vector space over 𝔽_q of dimension m. For any vector x=
(x_1,x_2,⋯,x_m)∈𝔽_q^m, the Hamming weight of x is given as wt(x)=|{1≤ i≤ m:x_i≠ 0}|. We let tr_q^m/q(·) be the trace function from 𝔽_q^m to 𝔽_q and tr(·) the absolute trace function from 𝔽_q to 𝔽_p.
Ding et al.<cit.> established a general construction of linear codes, which says that if D={d_1,...,d_n} is a nonempty subset of 𝔽 _q^m,
a q-ary linear code of length n is constructed by
C_D={c_x=(tr_q^m/q(xd_1),⋯,tr_q^m/q(xd_n)):x∈𝔽 _q^m}.
If D={ d_1, d_2, ⋯, d_n} is a nonempty subset of 𝔽^m_q, then the above construction (<ref>) can be modified to
C_D={c_x=(x· d_1,...,x· d_n):x∈^m},
where x· d_i is the Euclidean inner product of x and d_i.
Using character sums over finite fields, we can compute the parameters of those constructed codes.
Assume that ω_p is
the primitive p^th root of unity in the complex number field ℂ, then for a∈𝔽_q, the additive character χ_a of 𝔽_q is defined as
χ_a(c)=ω_p^tr(ac), for all c∈𝔽_q.
If a=0, then ∑_c∈𝔽_qχ_a(c)=q; otherwise ∑_c∈𝔽_qχ_a(c)=0 (<cit.>).
The following two bounds are useful in subsequent sections.
Let C be a q-ary [n,k,d] linear code, then
n≥∑_i=0^k-1⌈d/q^i⌉.
A linear code achieving the Griesmer bound with equality is called a Griesmer code.
Let C be a q-ary code with M codewords, length n and minimum distance d. If qd>(q-1)n, then
M≤ qd/(qd-(q-1)n).
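Both bounds are easy to evaluate numerically; a small Python sketch (integer arithmetic only):

def griesmer_length(k, d, q):
    # Minimum length allowed by the Griesmer bound for a q-ary [n, k, d] code.
    return sum((d + q**i - 1) // q**i for i in range(k))

def plotkin_max_size(n, d, q):
    # Plotkin bound on the number of codewords; valid only when q*d > (q-1)*n.
    assert q * d > (q - 1) * n
    return (q * d) // (q * d - (q - 1) * n)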
§.§ Finite Geometry
The projective space
PG(m-1, q) over 𝔽_q is the geometry whose points, lines, planes, ⋯ , hyperplanes are the
subspaces of 𝔽^m_q of dimension 1, 2, 3, ⋯ , m-1. So, we also use a nonzero vector g ∈𝔽^m_q to denote the point in PG(m-1, q). Two nonzero vectors g_1 and g_2 are the same point in PG(m-1, q) if and only if g_1=λ g_2 for some λ∈𝔽^*_q. Note that when we replace g_i by λ g_i for some λ∈𝔽^*_q, the parameters of the code given by Eq. (<ref>) do not change. So we rewrite the code construction given in Eq. (<ref>) via the language of projective geometry as follows. Suppose D={ d_1, d_2, ⋯, d_n} is a nonempty subset of PG(m-1,q), then a q-ary linear code of length n is constructed by
C_D={c_x=(x· d_1,...,x· d_n):x∈^m}.
In this paper, we will use Eq. (<ref>) to construct optimal LRCs.
Note that when D=PG(m-1,q), C_D is the famous simplex code. Thus in this sense, for general nonempty subset D, the code C_D is the punctured code of the simplex code.
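To make the construction concrete, the following brute-force Python sketch (for q prime and small parameters only, purely for illustration) enumerates the canonical points of PG(m-1,q) and computes the parameters of the resulting code by exhausting all messages; for the full point set it recovers the simplex code, e.g. a [13,3,9] code for q=3, m=3.

import itertools
import numpy as np

def projective_points(m, q):
    # Canonical representatives of PG(m-1, q): first nonzero coordinate equal to 1.
    pts = []
    for v in itertools.product(range(q), repeat=m):
        v = np.array(v)
        nz = np.nonzero(v)[0]
        if len(nz) > 0 and v[nz[0]] == 1:
            pts.append(v)
    return pts

def code_parameters(defining_set, q):
    # [n, k, d] of the code defined by Eq. (3), by brute force (q prime only).
    G = np.array(defining_set).T             # m x n generator matrix
    m, n = G.shape
    weights = []
    for x in itertools.product(range(q), repeat=m):
        if any(x):
            weights.append(np.count_nonzero(np.dot(x, G) % q))
    k = m if min(weights) > 0 else None      # full dimension iff no zero codeword
    return n, k, min(weights)

# Example: D empty gives the q-ary simplex code, here q = 3, m = 3 -> (13, 3, 9).
print(code_parameters(projective_points(3, 3), 3))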
For simplicity, we let the points in PG(m-1,q) be the vectors in 𝔽_q^m whose first nonzero coordinate is 1. If A is a nonempty subset of [m], we let P_[m]=PG(m-1,q) and P_A be the subset of PG(m-1,q) whose points have all coordinates outside of A equal to 0. It is easy to see that
|P_A|=q^|A|-1/q-1,⋃_α∈𝔽_q^*α P_A=L_A^*, where
α P_A={αa:a∈ P_A} and
L_A={(a_1,...,a_m)∈𝔽 _q^m:a_i=0 if i∉ A}.
For any two subsets A_1,A_2 of
[m], the intersection of P_A_1 and P_A_2 is equal to P_A_1∩ A_2, where P_∅ =∅ .
§.§ Krawtchouk Polynomials
In this subsection, we briefly review some basic results of Krawtchouk polynomials.
Given positive integers n,q, and suppose 0 ≤ k ≤ n, the Krawtchouk polynomial of degree k is defined as<cit.>
K_k(x;n,q)=K_k(x)=∑_j=0^k(-1)^jxjn-xk-j(q-1)^k-j.
The following lemma is a slight modification of <cit.>.
Let a and s be positive integers and x be a vector of length m over 𝔽_q with wt(x)=a. Then
we have
∑_y∈𝔽_q^m,wt(y)=sω _p^tr(x·y)=K_s(a;m,q).
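For q prime the trace map is trivial, and the lemma can be checked directly by enumeration; an illustrative sketch for small parameters:

import itertools
import cmath
from math import comb

def krawtchouk(k, x, n, q):
    # K_k(x; n, q) from the defining sum (comb returns 0 when the top index is exceeded).
    return sum((-1)**j * comb(x, j) * comb(n - x, k - j) * (q - 1)**(k - j)
               for j in range(k + 1))

def character_sum(x, s, q):
    # Left-hand side of the lemma for q prime, where tr is the identity map.
    m = len(x)
    omega = cmath.exp(2j * cmath.pi / q)
    total = 0
    for y in itertools.product(range(q), repeat=m):
        if sum(1 for yi in y if yi) == s:
            total += omega ** (sum(xi * yi for xi, yi in zip(x, y)) % q)
    return total

# Check with q = 3, m = 4 and x = (1, 2, 0, 0), which has weight a = 2.
for s in range(5):
    print(s, round(character_sum((1, 2, 0, 0), s, 3).real, 6),
          krawtchouk(s, 2, 4, 3))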
§ (2,Δ)-LRCS FROM PUNCTURED SIMPLEX CODES
In this section, we will provide several constructions of LRCs via punctured simplex codes. Firstly, we give a simple lemma which will be used to determine the locality of linear codes.
Let δ≥ 2 be an integer, g_1,⋯,g_δ+1 be δ+1 distinct collinear points in PG(m-1,q). Let C be the linear code with the generator matrix G=[g_1 ... g_δ+1], then C is a q-ary [δ+1, 2, δ]-MDS code.
Since any two of g_1,⋯,g_δ+1 are linearly independent and any three of g_1,⋯,g_δ+1 are linearly dependent, we have rank(G)=2. Thus (C)=2 and (C^⊥)=δ-1. On the other hand, G is the parity-check matrix of C^⊥, thus d(C^⊥) ≥ 3. By the Singleton bound, d(C^⊥) ≤δ+1-(δ-1)+1=3, hence C^⊥ is a [δ+1,δ-1,3]-MDS code. So C is a [δ+1,2,δ]-MDS code.
In the following, we give a sufficient condition which guarantees that a punctured simplex code is a (2,δ)-LRC.
Suppose 2≤δ≤ q and D is a subset of PG(m-1, q). If |D|≤q^m-1-1/q-1(q+1-δ)-1, then the code C_D^c given in Eq. (<ref>) is a q-ary (2,δ)-LRC, where D^c=PG(m-1,q)∖ D.
For any point g∈ D^c, there are q^m-1-1/q-1 lines in PG(m-1,q) containing g, and each line has q+1 points. Since |D|≤q^m-1-1/q-1(q+1-δ)-1, by the Pigeonhole Principle, there exists at least one line L containing g, such that there are δ+1 points g_1= g, g_2, ⋯, g_δ+1 of L belonging to the subset D^c. By Lemma <ref>, d((C_D^c)_|E)=δ, where E={ g_1, g_2, ⋯, g_δ+1}. Hence the code C_D^c has (2, δ)-locality.
When D=∅, then C_D^c is the q-ary simplex code. From Theorem <ref>, we know that the q-ary simplex codes have locality (2,q).
In particular, to ensure the code C_D^c to be a 2-LRC, it only needs to satisfy that |D| ≤ q^m-1-2.
Thus our method is simpler than that in <cit.>.
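The sufficient condition above can also be checked directly for a concrete puncturing set: for every point of D^c one searches for a line through it that contains at least δ+1 points of D^c. A brute-force sketch for q prime (illustration only):

import itertools
import numpy as np

def canonical(v, q):
    # Scale a nonzero vector so that its first nonzero coordinate is 1 (q prime).
    v = np.asarray(v) % q
    inv = pow(int(v[np.nonzero(v)[0][0]]), -1, q)   # modular inverse of the leading entry
    return tuple(int(t) for t in (v * inv) % q)

def line_through(g, h, q):
    # The q+1 points of the projective line spanned by two distinct points g, h.
    pts = {canonical(g, q)}
    for mu in range(q):
        pts.add(canonical(np.asarray(h) + mu * np.asarray(g), q))
    return pts

def has_2_delta_locality(Dc, q, delta):
    # Every point of D^c must lie on some line carrying >= delta+1 points of D^c.
    Dc = {canonical(p, q) for p in Dc}
    return all(any(len(line_through(g, h, q) & Dc) >= delta + 1
                   for h in Dc if h != g)
               for g in Dc)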
Hyun et al.<cit.> constructed infinite families of binary Griesmer codes punctured by unions of projective spaces, and Luo et al.<cit.> obtained similar results of linear codes over 𝔽_p. In the following, we extend their results to general q-ary codes.
Let m,t>1 be positive integers. Assume that A_1,...,A_t are nonempty subsets of [m] satisfying A_i∩ A_j=∅ for any i≠ j∈[t].
Let D=∪ _i=1^tP_A_i and D^c=P_[m]\ D, then the code C_D^c defined by Eq. (<ref>) is a q-ary linear code with parameters [q^m-1/q-1-∑_i=1^tq^|A_i|-t/q-1,m,q^m-1-∑_i=1^tq^|A_i|-1]. Furthermore, assume that |A_1|=...=|A_i_1|=s_1,|A_i_1+1|=...=|A_i_2|=s_2,
...,|A_i_u-1+1|=...=|A_i_u|=s_u where s_1<s_2<...<s_u. If max{i_1,i_2-i_1,...,i_u-i_u-1}≤ q-1, then C_D^c is a Griesmer code.
Note that P_A_i∩ P_A_j=P_A_i∩ A_j=∅ for any i≠ j∈[t], so we have |D|=∑_i=1^t|P_A_i|=∑_i=1^tq^|A_i|-t/q-1, thus the length of C_D^c is q^m-1/q-1-∑_i=1^tq^|A_i|-t/q-1.
Let x=(x_1,...,x_m) be any nonzero vector of 𝔽_q^m, then
wt(c_x) =|D^c|-|{d∈ D^c|x· d=0}|
=|D^c|-∑_d∈ D^c1/q∑_y∈𝔽_qω_p^tr(yx· d)
=q-1/q|D^c|-1/q∑_d∈ D^c∑_y∈𝔽_q^*ω_p^tr(yx· d)
=q-1/q|D^c|-1/q∑_d∈(𝔽_q^m)^*ω_p^tr(x· d)+1/q∑_d∈ D∑_y∈𝔽_q^*ω_p^tr(yx· d).
Note that
∑_d∈(𝔽_q^m)^*ω_p^tr(x· d) =∑_d_1∈𝔽_q⋯∑_d_m∈𝔽_qω_p^tr(x_1d_1)⋯ω_p^tr(x_md_m)-1
=∏_i=1^m(∑_d_i∈𝔽_qω_p^tr(x_id_i))-1=-1,
where d=(d_1,...,d_m) and
∑_d∈ D∑_y∈𝔽_q^*ω_p^tr(yx· d) =∑_i=1^t∑_d∈ P_A_i∑_y∈𝔽_q^*ω_p^tr(x·(yd))
=∑_i=1^t∑_d∈(𝔽_q^|A_i|)^*ω_p^tr(x_A_i·d).
As
∑_d∈(𝔽_q^|A_i|)^*ω_p^tr(x_A_i·d)= q^|A_i|-1 if x_A_i=0, and -1 if x_A_i≠0,
for 1≤ i≤ t, the minimum weight is min_x∈(𝔽_q^m)^*wt(c_x)=q-1/q|D^c|+1/q-t/q
=q^m-1-∑_i=1^tq^|A_i|-1.
It is easy to prove that q^m-∑_i=1^tq^|A_i|>0 since ∑_i=1^t |A_i|≤ m. Thus wt(c_x)=0 if and only if x= 0, hence the dimension is m.
Suppose ∑_i=1^tq^|A_i|-1=∑_i=g^hb_iq^i, where 0≤ b_i≤ q-1,i=g,...,h. Then
∑_i=0^m-1⌈q^m-1-∑_j=1^tq^|A_j|-1/q^i⌉=∑_i=0^m-1⌈q^m-1-∑_j=g^hb_jq^j/q^i⌉
=∑_i=0^m-1q^m-1-i-∑_i=0^g∑_j=g^hb_jq^j-i-∑_i=g+1^h∑_j=i^hb_iq^j-i
=q^m-1/q-1-∑_i=1^tq^|A_i|-∑_i=g^hb_i/q-1.
As max{i_1,i_2-i_1,...,i_u-i_u-1}≤ q-1,∑_i=g^hb_i=i_1+i_2-i_1+...+i_u-i_u-1=i_u=t. The length of C_D^c is q^m-1/q-1-|D|=q^m-1/q-1-∑_i=1^tq^|A_i|-∑_i=g^hb_i/q-1, hence the code C_D^c is a Griesmer code.
We now investigate the locality of the codes given in Theorem thm_generaliz_luo.
Keep the notation as in Theorem thm_generaliz_luo. If t=2 and |A_i|≤ m-2 for all i∈[t], then the code C_D^c has locality (2,q); if t≥ 3 and m≥ 4, then the code C_D^c has locality (2,q); if m>t=2,q>2 and |A_1|=m-1, then the code C_D^c has locality (2,q-1).
Case 1: t=2, |A_i|≤ m-2,i=1,2.
If m≥ 4, then |D|=q^|A_1|+q^|A_2|-2/q-1≤q^2+q^m-2-2/q-1≤q^m-1-q/q-1; if m=3, then
|D|=q^|A_1|+q^|A_2|-2/q-1=2≤q^2-q/q-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q).
Case 2: m>3 and t>2. Note that
|D|=∑_i=1^tq^|A_i|-t/q-1≤q^m-t+1+(t-1)q-t/q-1=q^m-t+1+t(q-1)-q/q-1≤q^m-t+1+m(q-1)-q/q-1≤q^m-2+q^m-2(q-1)-q/q-1=q^m-1-q/q-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q).
Case 3: m>t=2, |A_1|=m-1,|A_2|=1,q>2. Note that |D|=q^m-1+q-2/q-1≤2q^m-1-2/q-1-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q-1).
Keep the notation as in Theorems thm_generaliz_luo and thm_generaliz_luo_loc, the code C_D^c is k-optimal LRC with respect to the bound (<ref>).
Case 1: t=2, |A_i|≤ m-2,i=1,2.
We let n'=q^m-1/q-1-q^|A_1|+q^|A_2|-2/q-1-q-1,d=q^m-1-q^|A_1|-1-q^|A_2|-1, according to the Plotkin bound, we have
k_opt^(q)(n',d) ≤⌊log_qq(q^m-1-q^|A_1|-1-q^|A_2|-1)/q^2-2⌋
≤⌊log_q(q^m-1-q^|A_1|-1-q^|A_2|-1)⌋
≤ m-2.
Utilizing the bound (CMbound_rdeltaLRC) with s=1, we obtain that k≤ 2+k_opt^(q)(n',d)≤ m. Therefore, the code C_D^c is k-optimal.
Case 2: m>3 and t>2. The Griesmer code C_D^c has parameters [n=q^m-∑_i=1^tq^|A_i|+t-1/q-1,k=m,d=q^m-1-∑_i=1^tq^|A_i|-1].
When q>2, we have ⌈q^m-1-∑_i=1^tq^|A_i|-1/q^m-2⌉≥⌈ q-t-1+q^m-t/q^m-2⌉≥⌈ q-m-1+q^m-3/q^m-2⌉=q and ⌈q^m-1-∑_i=1^tq^|A_i|-1/q^m-1⌉=1. So we have
n-1=∑_i=0^m-2⌈d/q^i⌉,n-q-1≥∑_i=0^m-3⌈d/q^i⌉.
Using the Griesmer bound, we obtain that k_opt^(q)(n-1-q,d)≤ m-2, thus
k≤ 2+k_opt^(q)(n-1-q,d)≤ m and the code C_D^c is k-optimal. When q=2, since A_1,...,A_t are mutually disjoint and max{i_1,i_2-i_1,...,i_u-i_u-1}≤ 1, we obtain that ⌈2^m-1-∑_i=1^t2^|A_i|-1/2^m-2⌉≥ 2. According to the similar arguments, k≤ 2+k_opt^(q)(n-1-q,d)≤ m and the code C_D^c is k-optimal.
Case 3: m>t=2, |A_1|=m-1,|A_2|=1,q>2.
We let n'=q^m-1/q-1-q^m-1+q-2/q-1-q,d=q^m-1-q^m-2-1, according to the Plotkin bound, we have
k_opt^(q)(n',d) ≤⌊log_qq(q^m-1-q^m-2-1)/q^2-q-1⌋
≤⌊log_q(q^m-1-q^m-2-1)⌋
≤ m-2.
Utilizing the bound (CMbound_rdeltaLRC) with s=1, we obtain that k≤ 2+k_opt^(q)(n',d)≤ m. Therefore, the code C_D^c is k-optimal.
Let q=4,m=3 and A_1={1},A_2={2,3}, then C_D^c defined in Theorem thm_generaliz_luo is a 4-ary Griesmer code [15,3,11] with a generator matrix G=(G_1 G_2), where
G_1=[ 1 1 1 1 1 1 1; 1 α α+1 0 1 α α+1; 0 0 0 1 1 1 1 ],G_2=[ 1 1 1 1 1 1 1 1; 0 1 α α+1 0 1 α α+1; α α α α α+1 α+1 α+1 α+1 ] and α is a primitive element in 𝔽_4. Then C_D^c is a (2, 3)-LRC. For instance, one can see that the columns (1,1,0)^T,(1,0,1)^T,(1,α,α+1)^T,(1,α+1,α)^T
of G generate a [4,2,3]-code. Hence the first symbol of C_D^c has locality (2,3).
Note that k^(4)_opt(11,11)=1, thus C_D^c attains the generalized C-M bound (<ref>).
In the following, we consider another family of punctured simplex codes, which is motivated by<cit.>.
Let A⊆ [m],|A|=s≥ 3,D={d∈ P_A: wt(d)=2},D^c=P_[m]\ D, then the code C_D^c defined in Eq. (<ref>) is a q-ary k-optimal (2,q)-LRC with parameters [n,k,d]=[q^m-1/q-1-(q-1)s2,m,q^m-1-⌊(2(s-1)(q-1)+q)^2/8q⌋]
providing that s2(q-1)≤q^m-1-q/q-1 and 0<qd/qd-(q-1)(n-q-1)<q^m-1.
There are (q-1)^2s2 vectors in L_A with Hamming weight 2, so |D|=(q-1)s2 since wt(a)=wt(λa) for all λ∈𝔽_q^*.
∑_d∈ D∑_y∈𝔽_q^*ω_p^tr(yx· d) =∑_d∈𝔽_q^s,wt(d)=2ω_p^tr(x_A·d)
=K_2(a;s,q)
=q^2/2a^2-(2(q-1)qs+q(2-q)/2)a
+s2(q-1)^2.
Thus the minimum weight of C_D^c corresponding to x is
min_x∈^m^*wt(c_x)
=⌈ q^m-1-(q-1)^2/qs2-4s(q-1)+(q-2)^2/8q⌉
=q^m-1-⌊(q-1)^2/qs2 +4s(q-1)+(q-2)^2/8q⌋
=q^m-1-⌊(2(s-1)(q-1)+q)^2/8q⌋
according to the Eq. (calcu_min_weight).
According to Theorem suffi_condi_lrc, the code has locality (2,q). Using the Plotkin bound, we obtain that
k_opt^(q)(n-q-1,d) ≤⌊log_qqd/qd-(q-1)(n-q-1)⌋
≤ m-2.
Thus the code C_D^c is a k-optimal (2,q)-LRC according to the generalized C-M bound (CMbound_rdeltaLRC).
By the techniques of graph theory, the authors in <cit.> have obtained these codes as 2-LRCs. And they only proved that these codes are k-optimal 2-LRCs for s=3,m≥ 3,2≤ q≤ 14. However, in Theorem thm_weight2, we prove that all the codes in <cit.> are actually k-optimal (2,q)-LRCs.
Let A_1,A_2,...,A_t⊆ [m],|A_i|=s_i≥ 3 for i∈[t],D_i={d∈ P_A_i|wt(x)=2},D=⋃_i∈[t] D_i,D^c=P_[m]\ D. If |A_i∩ A_j|≤ 1 for all i≠ j∈[t], and (q-1)∑_i=1^ts_i2≤q^m-1-q/q-1,
then the code C_D^c defined in (gen_linear_constr) is a q-ary k-optimal (2,q)-LRC with parameters
[n,k,d]=[q^m-1/q-1-(q-1)∑_i=1^ts_i2,m,q^m-1-Δ]
providing that 0<qd/qd-(q-1)(n-q-1)<q^m-1 where Δ=⌊∑_i=1^t (2(s_i-1)(q-1)+q)^2/8q⌋.
For all i≠ j∈[t], D_i∩ D_j={d∈ P_A_i∩ A_j|wt(d)=2}=∅ since |A_i∩ A_j|≤ 1. |D|=∑_i=1^t|D_i|=(q-1)∑_i=1^ts_i2.
∑_d∈ D∑_y∈^*ω_p^tr(yd· x) =∑_i=1^t∑_d∈ D_i∑_y∈^*ω_p^tr(yd· x)
=∑_i=1^t∑_d∈^s_i,wt(d)=2ω_p^tr(x_A_i·d)
=K_2(w_1;s_1,q)+...+K_2(w_t;s_t,q)
where w_i=wt(x_A_i). Thus the weight of a codeword corresponding to some nonzero x is
wt(c_x) =(q^m-1/q-1-(q-1)∑_i=1^ts_i2)q-1/q
+1/q+1/q∑_i=1^tK_2(w_i;s_i,q)
=q^m-1-(q-1)^2/q∑_i=1^ts_i2+1/q∑_i=1^tK_2(w_i;s_i,q).
Note that the axis of symmetry of K_2(w_i;s_i,q) is 2(q-1)s_i+2-q/2q=1/q(1-s_i)+s_i-1/2≥s_i/2>1 for all prime power q, thus w_i≥1 when K_2(w_i;s_i,q) get the minimum value.
Meanwhile, |A_i∩ A_j|≤ 1 for all i≠ j∈[t], so K_2(w_i;s_i,q) get the minimum value simultaneously for all i∈[t]. The minimum weight of c_x is
min_x∈^m^*wt(c_x) =⌈ q^m-1-(q-1)^2/q∑_i=1^ts_i2.
. -∑_i=1^t4s_i(q-1)+(q-2)^2/8q⌉
=q^m-1-⌊∑_i=1^t(2(s_i-1)(q-1)+q)^2/8q⌋.
According to Theorem suffi_condi_lrc, the code has locality (2,q). Using the Plotkin bound, we obtain that
k_opt^(q)(n-q-1,d) ≤⌊log_qqd/qd-(q-1)(n-q-1)⌋
≤ m-2.
The code C_D^c is a k-optimal (2,q)-LRC according to (CMbound_rdeltaLRC).
Let q=2,m=5,A_1={1,2,3} and A_2={3,4,5}. By the SageMath software, the binary code C_D^c defined in Theorem gen_wt2 has parameters [25,5,12] and the generator matrix is G=(G_1 G_2) where G_1=[ 1 0 0 1 0 1 0 1 1 0 1 0; 0 1 0 1 0 0 1 1 0 1 1 0; 0 0 1 1 0 0 0 0 1 1 1 0; 0 0 0 0 1 1 1 1 1 1 1 0; 0 0 0 0 0 0 0 0 0 0 0 1 ],G_2=[ 1 0 1 1 0 1 0 0 1 0 1 0 1; 0 1 1 0 1 1 0 1 1 0 0 1 1; 0 0 0 1 1 1 0 0 0 1 1 1 1; 0 0 0 0 0 0 1 1 1 1 1 1 1; 1 1 1 1 1 1 1 1 1 1 1 1 1 ]. By the Plotkin bound, k^(2)_opt(22,12) ≤⌊log_2 12 ⌋=3. Hence C_D^c is a k-optimal 2-LRC achieving the C-M bound.
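The stated parameters can also be reproduced by direct enumeration of the defining set; an illustrative Python sketch (coordinates 0-based):

import itertools

# Brute-force check of the [25, 5, 12] binary code of this example.
A1, A2 = {0, 1, 2}, {2, 3, 4}            # 0-based versions of {1,2,3} and {3,4,5}
points = [v for v in itertools.product(range(2), repeat=5) if any(v)]
def supp(v): return {i for i, vi in enumerate(v) if vi}
D = [v for v in points if len(supp(v)) == 2 and (supp(v) <= A1 or supp(v) <= A2)]
Dc = [v for v in points if v not in D]

weights = []
for x in itertools.product(range(2), repeat=5):
    if any(x):
        weights.append(sum(sum(a * b for a, b in zip(x, d)) % 2 for d in Dc))
print(len(Dc), 5 if min(weights) > 0 else None, min(weights))   # -> 25 5 12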
§ CONCLUDING REMARKS
In this paper, we have investigated new constructions of optimal (2, δ)-LRCs via punctured simplex codes. By using the language of finite geometry, we propose a simple but useful condition to ensure that a linear code has (2,δ)-locality. By means of character sums and Krawtchouk polynomials, we obtain several infinite families of q-ary (2,δ)-LRCs. All these codes are optimal with respect to the generalized C-M bound. We not only generalize some previous results of 2-LRCs to the (2, δ)-LRCs, but also construct some new optimal (2, δ)-LRCs which are not optimal in the sense of 2-LRCs.
It is interesting to find more new optimal (2, δ)-LRCs and generalize these results to the (r,δ)-LRCs with r ≥ 3 in the future.
|
http://arxiv.org/abs/2307.04494v1 | 20230710113346 | Enabling Faster Locomotion of Planetary Rovers with a Mechanically-Hybrid Suspension | [
"David Rodríguez-Martínez",
"Kentaro Uno",
"Kenta Sawa",
"Masahiro Uda",
"Gen Kudo",
"Gustavo Hernan Diaz",
"Ayumi Umemura",
"Shreya Santra",
"Kazuya Yoshida"
] | cs.RO | [
"cs.RO"
] |
RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling
Javad Ebrahimizadeh, Evgenii Vinogradov, Guy A.E. Vandenbosch J. Ebrahimizadeh and G. Vandenbosch are with WaveCoRE of the Department of Electrical Engineering (ESAT), KU Leuven, Leuven, Belgium. E-mail: {Javad.Ebrahimizade,Guy.Vandenbosch}@kuleuven.be
E. Vinogradov is with ESAT, KU Leuven, Leuven, Belgium, also with Autonomous Robotics Research Center, Technology Innovation Institute (TII), Abu Dhabi, UAE. E-mail: [email protected].
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================
empty
empty
The exploration of the lunar poles and the collection of samples from the martian surface are characterized by shorter time windows demanding increased autonomy and speeds. Autonomous mobile robots must intrinsically cope with a wider range of disturbances. Faster off-road navigation has been explored for terrestrial applications but the combined effects of increased speeds and reduced gravity fields are yet to be fully studied. In this paper, we design and demonstrate a novel fully passive suspension design for wheeled planetary robots, which couples a high-range passive rocker with elastic in-wheel coil-over shock absorbers. The design was initially conceived and verified in a reduced-gravity (1.625 m/s^2) simulated environment, where three different passive suspension configurations were evaluated against a set of challenges—climbing steep slopes and surmounting unexpected obstacles like rocks and outcrops–-and later prototyped and validated in a series of field tests. The proposed mechanically-hybrid suspension proves to mitigate more effectively the negative effects (high-frequency/high-amplitude vibrations and impact loads) of faster locomotion (>1 m/s) over unstructured terrains under varied gravity fields. This lowers the demand on navigation and control systems, impacting the efficiency of exploration missions in the years to come.
§ INTRODUCTION
Robots have eased many of the tasks performed in space. They have assisted humans in building habitable labs in low-Earth orbit and have traversed the deserted lands of Mars in the name of science. Upcoming exploration missions and currently road-mapped space activities require, however, robots capable of performing in domains for which new technological innovations are necessary.
The growing interest in exploring the lunar poles serves as a good example. Hydrogen-rich elements and other volatiles have been identified within the surface and subsurface layers of polar regolith <cit.>. The extraction and use of these compounds could prove essential for the long-term sustainable exploration of space. But unlike equatorial regions previously visited, the poles of the Moon harbor extreme terrain elevation changes, day-night temperature fluctuations of more than 300 K, and a large number of regions rarely struck by natural illumination—all while constrained by a sun that at times barely rises above the horizon <cit.>. These features demand faster, more effective, and highly autonomous robotic platforms capable of coping with a wide range of environmental constraints unfaced by previous missions.
§.§ Contributions
In this paper, we present the first prototype of a new fully passive suspension design capable of enabling planetary robots to safely negotiate unstructured terrains at speeds that approach 1 m/s (see Fig. <ref>)—two orders of magnitude larger than conventional rover speeds. We strive to understand what effects the combination of increasing speeds and a reduced gravity field has on the locomotor performance of rovers while addressing the following questions:
* What is the level of perturbations endured by free-balancing suspensions when facing some of the salient, unavoidable features of the lunar surface at 1 m/s?
* What degree of improvement could be obtained from the addition of passive energy-dissipation devices?
* Which passive suspension configuration provides the best results?
§.§ Background
Passive, inelastic, free-balancing suspensions in 4-to-8-wheeled chassis configurations have been employed by most of the rovers commissioned to explore the Moon and Mars. These suspensions are optimized for supporting and evenly distributing the weight of the rover, allowing it to overcome irregular terrains and obstacles, and mitigating the effects of impacts and vibrations while isolating the sensitive optics and electronics from these unwanted effects. Additionally, the suspensions of planetary robotic platforms are heavily constrained in terms of mass, volume, and power, making the rocker-bogie (RB) suspension <cit.> the most widely used type of suspension design.
First developed in the frame of NASA's Mars Pathfinder mission <cit.>, the RB suspension consists of a mechanism of two linkages (see Fig. <ref>). In the most commonly used configuration, a larger forward linkage called the rocker is fixed to the front wheel at one end and attached to the smaller rearward linkage, the bogie, at the other end through a free-rotating pivot point. Intended for 6-wheel configurations, the middle and rear wheels are each linked to both ends of the bogie. The rockers of both sides are connected together and attached to the chassis through a differential that maintains the body of the rover at a pitch angle equal to the average rotation of the two rockers.
The RB suspension effectively accomplishes the main functions previously described. Irregular topography and obstacles of a size comparable to the wheel diameter can be overcome without losing contact with the ground. NASA's Sojourner, which presented a reversed RB configuration (i.e., bogie facing forward), Mars Exploration Rovers (Spirit and Opportunity), Mars Science Laboratory (Curiosity), the newest Mars-2020 rover (Perseverance), and China National Space Administration's (CNSA) Yutu-2 rover were all designed with an RB suspension. At the same time, these missions have been characterized, however, by one significant limitation: speed.
The demand for rovers capable of operating at speeds much higher than the ones previously considered is rapidly growing: from speeds of just a few cm/s to ones on the order of 1 m/s <cit.>. At the same time, missions continuously require increased levels of autonomy, which means systems need to reliably cope with a higher degree of perturbations. This must be accomplished while maintaining the mechanical simplicity and reliability of the locomotion system and without excessively increasing the rover mass or its power requirements. At this speed level, inertial effects start to dominate the interaction with the ground <cit.>, which together with increased vibrations and impact loads may require the use of energy dissipation devices.
The RB suspension was designed for operational speeds below ∼10 cm/s. At higher speeds, the structural integrity of the suspension and the stability of the robot cannot be ensured. Attempts have been made to broaden the range of applications of RB suspensions by independently controlling the speed of the wheels <cit.> or dynamically adapting the suspension configuration <cit.>. In an effort to find alternative solutions that are better suited to a wider range of environmental conditions and speeds, actively articulated and adaptive suspension designs have been widely discussed and proposed <cit.>. While most of these solutions could be perfectly employed to maximize traversability and to minimize the detrimental effects resulting from high-speed locomotion, most rely on the optimal performance of other systems (e.g., hazard detection and terrain segmentation) or require an additional, non-negligible supply of power (e.g., to operate additional electromechanical actuators).
Fast lunar vehicles are, however, not completely new to the space exploration scene. The Russian lunokhods and NASA's two-crew-piloted Lunar Roving Vehicle (LRV) were capable of traveling at speeds that far exceeded those of present lunar and martian rovers. The lunokhods were driven at a maximum speed of 0.5 m/s, whereas the LRV was reported to have reached a top speed of ∼5 m/s while commanded by Eugene Cernan during Apollo 17 <cit.>. Despite these numbers, the capability to drive faster seemed to be closely associated with their ample power reserves and the direct human input—both vehicles were either directly piloted or teleoperated from Earth—rather than with variations purposely introduced in their suspension designs <cit.>.
The LRV inherited a suspension frequently used in conventional road vehicles with slight modifications. It consisted of an independent double-wishbone suspension with elasticity provided through transverse torsion bars in both upper and lower control arms, in addition to compliant wheel rims; and damping provided through a conventional silicone-oil damper <cit.>. The vertical stiffness of the suspension-wheel combination was 2.4 kN/m <cit.>. With regard to the performance of the suspension, Apollo 16 astronauts reported feeling “quite at home” traveling over the ridges south of the landing site toward Stone Mountain <cit.>. Despite the generally positive attitude of the astronauts, their reports also describe the tendency of the suspension to bounce uncontrollably when traveling over surfaces with a large density of small craters (<1 m) and the inability to steer effectively, i.e., without excessive side slip, at speeds above 1.4 m/s. The barren and subdued lunar landscape and the need to drive at times directly toward the Sun did not make negotiating these obstacles any easier.
On the other hand, Lunokhod 1 (Luna 17) and Lunokhod 2 (Luna 21) were remotely operated rovers weighing ∼800 kg each. The lunokhods were designed with an 8-wheel suspension consisting of four carriers fixed to the bottom of the chassis <cit.>. A pair of rigid wheels with their respective swing arms were attached to each carrier. To deal with the high speeds and the heavy weight of the lunokhods, mechanical loads were dissipated through 3-beam torsion bars attached to the swing arms. No damping device was introduced in the design. Vertical stiffness varied from 8.8 kN/m of the front suspension-wheel combination to 3.5 kN/m of the middle combination. Similar issues to those experienced during the Apollo missions were found, although these were often associated with the poor illumination conditions of the lunar surface, the limited lookahead distance and deficient feed quality of navigation cameras, and the inexperience of the operators <cit.>.
§ PRELIMINARY ANALYSIS
During the Apollo and Luna missions, data on the suspension performance were never collected. Subjective evaluations on rideability and operability are insufficient to argue in favor of one or another suspension configuration, particularly for its application to autonomous robots. We, therefore, developed a series of simulation modules to understand the relative improvement and potential limitations that the addition of passive energy dissipation devices may introduce when attempting to travel faster—up to a speed of 1 m/s—on reduced-gravity, unstructured environments compared with the performance of conventional rigid suspensions. These simulation modules were run on Coppelia Robotics' simulator CoppeliaSim <cit.> in combination with CM Labs' high-fidelity physics engine Vortex.
§.§ Multibody dynamic model
As the baseline for our comparative analysis, we defined the dynamic model of a 4-wheel, All-Wheel Drive/2-Wheel Steering (AWD/2WS) rover with three different passive suspension configurations: 1) conventional rocker arms linked by a differential, 2) independent wheel suspensions guided by shock absorbers, and 3) a novel configuration based on the combination of the previous two, i.e., independent in-wheel compliant suspensions connected at each side of the rover by a free-balancing rocker. These configurations are referred to hereafter as 1) dependent-rigid (DR), 2) independent-elastic (IE), and what we named 3) mechanically-hybrid suspension or MHS. General characteristics of the rover model are presented in Table <ref>.
The dynamic model was based on the design of ElDorado-2 <cit.>, a long-standing robotic platform previously used in the Space Robotics Lab at Tohoku University (see Fig. <ref>). Each suspension configuration presented the same mass distribution and suspension kinematics. In the case of the IE configuration, the rocker arms were locked in a horizontal position and a spring-damper system was introduced between the end of the arms and the wheel hub. Spring-damper systems were simulated by means of prismatic joints whose reaction force, F, is controlled by a proportional-derivative controller in which the proportional and derivative gains are replaced by the spring ratio, k, and damping coefficient, c, respectively (Eq. <ref>).
F = k e_i + c (e_i - e_i-1)/Δ t,
where e_i describes the elongation of the joint at a time i, and Δ t the selected time step. Deformations within the linkages of the suspension were neglected. The defining parameters of the shock absorbers were 2 kN/m for the spring constant and 350 Ns/m for the damping coefficient with 35 mm of free-length. These are generic values intended to provide a stable movement and a limited static deflection. The optimization of suspension parameters requires the specific amplitude and frequency response of the unsprung and sprung masses against a particular set of excitations, which subsequently demands as inputs a list of mission-driven and design-specific requirements—a type of analysis for which this work was not intended.
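For concreteness, the following is a minimal sketch of how the force above could be evaluated once per simulation step; the constants are the generic values quoted in this section, while the function and variable names are illustrative and not taken from the CoppeliaSim API.

```python
# Minimal sketch of the prismatic-joint spring-damper force described above.
K_SPRING = 2000.0     # spring constant k [N/m]
C_DAMPER = 350.0      # damping coefficient c [Ns/m]
FREE_LENGTH = 0.035   # shock absorber free length [m]

def suspension_force(joint_length, prev_elongation, dt):
    """Return (force, elongation) for one spring-damper joint at this step."""
    e_i = joint_length - FREE_LENGTH        # current elongation e_i
    e_rate = (e_i - prev_elongation) / dt   # finite-difference elongation rate
    force = K_SPRING * e_i + C_DAMPER * e_rate
    return force, e_i
```

The previous elongation is simply carried over from the preceding step for each wheel, mirroring how the proportional-derivative controller stands in for a physical spring-damper.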
§.§ Simulation modules
We developed a series of obstacle negotiation and gradeability modules. The environmental characteristics of each of these modules were based on features commonly found on the surface of the Moon (see Fig. <ref>).
The obstacle negotiation module consisted of three different submodules that vary based on the type of obstacle faced: step obstacles of increasing height, a dynamically-enabled 10-cm hemispherical rock, and a 1.5-m long outcrop—i.e., a partially exposed section of bedrock—with protrusions as high as 10 cm, all modeled within CoppeliaSim.
The gradeability module presented 1.5-m slopes of increasing inclination up to a maximum of 30°. For the sake of maintaining the comparative analysis within reasonable margins, only situations where the robot faced the slopes at a heading angle of 0° (straight climbing) were simulated. The robot initially drove on flat ground until the appropriate speed was reached.
A gravity field of 1.625 m/s^2 representative of the lunar surface was used for all our simulations. No closed-loop traction or motion control was implemented.
§.§ Wheel-soil contact model
The lunar surface is characterized by a top layer of fine-grained, slightly-cohesive regolith. A specific wheel-soil contact model would be necessary to accurately describe the complex behavior of a rigid wheel interacting with this kind of particulate material <cit.>. To the extent of our knowledge, none of the analytical models available in the literature—most of which are based on the Bekker-Reece-Wong terramechanic equations <cit.>—are capable of faithfully representing the full extent of physical phenomena taking place, much less those governing the dynamic interaction of a fast-moving, lightweight vehicle <cit.>. Numerical models were intentionally avoided due to the increased computational load these models require.
For the sake of simplicity, and in order to have a symbolic representation of the frictional behavior of lunar soil, we opted for a Coulomb friction model with an isotropic friction coefficient of 0.4 for the wheel-soil contact—frequently used as representative of metallic wheels rolling over sandy terrains—and 1.0 for the interaction with obstacles such as rocks and outcrops.
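As a minimal illustration of how these two regimes could be encoded, the sketch below bounds the tangential contact force with the quoted coefficients; the actual contact resolution is handled by the physics engine, and the function name is ours.

```python
# Isotropic Coulomb friction limit for a wheel contact, using the
# coefficients quoted in the text (sandy soil vs. rocks/outcrops).
MU_SOIL = 0.4
MU_OBSTACLE = 1.0

def tangential_force_limit(normal_force, on_obstacle=False):
    """Upper bound on the tangential contact force: |F_t| <= mu * N."""
    mu = MU_OBSTACLE if on_obstacle else MU_SOIL
    return mu * max(normal_force, 0.0)
```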
§.§ Performance evaluation parameters
The comparative evaluation of the performance was based on the success of each configuration, the maximum vertical load and pitch torque measured at both ends of the rocker arms, and the maximum vertical acceleration experienced by the chassis. Additionally, the trajectory of the robot was recorded to understand the level of longitudinal and lateral slippage future traction control systems would have to overcome.
§.§ Results and discussion
§.§.§ Obstacle negotiation performance
The heatmaps shown in Fig. <ref> represent the level of success of the different suspension configurations in overcoming perfect steps of height 1–12 cm at speeds ranging from 0.05–1 m/s. Due to their vertical profile, steps are considered the most challenging obstacle to negotiate. Green represents success, red indicates failure to drive over the step, and yellow defines situations in which the front wheel successfully overcame the step but the rear wheel was trapped or completely missed the step due to excessive lateral slippage.
Table <ref> lists the maximum values of vertical load, pitch torque, and acceleration experienced at top speed (1 m/s) in every case and for each configuration. The results obtained from the obstacle negotiation module illustrate the considerable benefit obtained from the addition of passive energy dissipation devices to the suspension design. On average among all the cases analyzed (steps, rocks, and outcrops), a 71% reduction in maximum impact load, a 37% reduction in maximum pitch torque, and a 33% reduction in maximum vertical acceleration of the chassis were observed when elasticity and damping were incorporated into the design. When compliant in-wheel suspensions were then combined with a high-range free-balancing rocker, the MHS outperformed the other two in every situation, overall reducing by 62% and 43% the detrimental effects of an irregular terrain when compared to the DR and IE configurations, respectively. The compliance of the in-wheel suspension attenuates the high-frequency/high-amplitude vibrations while the dependency of the rocker provides a more efficient weight transfer allowing the rover to overcome large obstacles without inducing excessive traction losses or instabilities (see fig:gforce).
Additional evidence of the improved stability brought by the MHS configuration is illustrated by the vertical trajectory of the robot when traversing the outcrop (see Fig. <ref>). With both the DR and IE suspensions, the robot experienced a full take-off (four wheels in the air) followed by a complete rollover, a situation that was avoided in the case of the MHS. The suspension kept the rover stable and the wheels always in contact with the jagged surface except when first impacting the edge of the outcrop when both front wheels were briefly lifted from the ground due to the sudden rebound of the in-wheel suspension; a behavior that could be mitigated with a further optimization of the suspension parameters.
§.§.§ Gradeability
Less variation in the level of success of the different configurations was observed when the rover faced 1.5-m slopes of 5–30° at speeds ranging from 0.05–1 m/s (see Fig. <ref>). At higher speeds and regardless of the configuration, the top of the steepest slopes (20° and 25°) was often reached with just the rear wheels in contact with the ground—a predominant effect in the IE and MHS configurations due to the excessive rebound of the suspension upon first confronting the slope.
The maximum vertical loads, pitch torques, and vertical accelerations experienced by the rover when facing a slope of 20° at 1 m/s are gathered in Table <ref>. We initially expected greater levels of variation in the gradeability performance of the different configurations given the evidence presented in <cit.>. In that work, the climbing ability of rocker arms evidently outperformed that of independent swing arms under every circumstance evaluated. Due to the slip-dominant nature of the vehicle-ground interaction when climbing a slope, it is possible that the absence of a more accurate representation of the wheel-soil contact behavior in our simulation modules and the lack of an active control scheme resulted in the limited variation in the levels of success of the different suspension configurations. Nonetheless, and in line with the observations previously made, the MHS configuration successfully mitigated the negative effects of impacts and vibrations beyond what was accomplished by either the DR or IE configurations.
§ SYSTEM DESIGN
In light of the evidence provided by the results of the simulations, we conceived a new rover prototype, dubbed Explorer 1 (EX1), based on the principles of the MHS configuration (see Fig. <ref>).
EX1 was designed with a 4-wheel AWD/4WS locomotion configuration capable of achieving a maximum operational speed of 1 m/s. High-travel aluminum rocker arms are linked together and attached to the chassis through a 3-gear differential box housed inside the body frame. These rocker arms have a range of motion of about ±250 mm (2.5 times its wheel radius), only limited by the length of the wire harness of the actuator drive electronics. Attached at both ends of each rocker is a double-coil-over elastic suspension providing a lower travel range of 35 mm. The harmonic drives of the steering motors act as the connecting pieces between the rocker arms and the compliant component of the suspension. This allows the latter part of the suspension to rotate with the wheel during steering, but has the inconvenience of varying the scrub radius—the distance between the steering axis and the vertical centerline of the wheel—based on the level of compression of the suspension; a shortcoming that was assumed in favor of the modularity of the design (i.e., the design is easily adaptable to 6- and 8-wheel configurations) and due to the short free-length of the damper.
The low-travel suspension consists of upper and lower control arms passively commanded by a pair of 104-mm shock absorbers connected to the top of the wheel knuckle (see fig:lts). This arrangement maintains the camber angle nearly constant during wheel travel. Two parallel shock absorbers were used to reduce the stiffness required on the springs while providing a certain level of redundancy in the design. The shock absorbers are formed by a replaceable 2.5 kN/m spring (5 kN/m per wheel) and an adjustable damper and were selected off-the-shelf from a radio-control car manufacturer. Both the bracket and the wheel knuckle were designed with multiple mounting points so that the overall stiffness of the suspension can be slightly modified by tilting the orientation of the dampers. The stability limits of the design were verified in simulations, achieving a static longitudinal/lateral stability under lunar gravity of 30° and a quick and smooth response to dynamic perturbations such as steps and cornering maneuvers (see fig:stability).
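As a rough sanity check on the chosen spring rate, the static deflection of the in-wheel suspension under Earth and lunar gravity can be estimated as in the sketch below; the rover mass used here is an assumed placeholder, since the actual value is listed in the rover characteristics table.

```python
# Rough static-deflection estimate for the 5 kN/m per-wheel suspension.
# ROVER_MASS is an assumed placeholder; the real value appears in Table 1.
ROVER_MASS = 40.0        # [kg], illustrative only
K_PER_WHEEL = 5000.0     # two parallel 2.5 kN/m springs [N/m]
G_EARTH, G_MOON = 9.81, 1.625

def static_deflection(gravity, n_wheels=4):
    load_per_wheel = ROVER_MASS * gravity / n_wheels   # [N]
    return load_per_wheel / K_PER_WHEEL                # [m]

print(f"Earth: {1e3 * static_deflection(G_EARTH):.1f} mm, "
      f"Moon: {1e3 * static_deflection(G_MOON):.1f} mm of the 35 mm travel")
```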
§ FIELD TEST RESULTS
To validate the locomotor performance of EX1, we conducted a field test campaign in a representative sandy field. While these tests allow us to functionally validate a new suspension design for ground testing purposes, it should be noted from the outset that this approach is not adequate for optimizing design parameters with respect to a potential flight-model configuration. The conventional approach to validating the mobility performance of planetary rovers on Earth prior to their missions suffers from a strong limitation in situations when speed plays a role. While gravity scaling is often applied to testing platforms—i.e., adapting the mass of engineering models to represent the overall weight of the flight model at destination—observable behaviors under testing are only representative when the quasi-static approximation can be applied. The moment dynamic effects dominate the behavior of the rover <cit.> and its interaction with the ground <cit.>, as is the case with our experiments, the full-body mass of the rover must be used for a representative characterization of the performance and subsequent optimization of design parameters. This drastically affects the rover's response to environmental and operational stimuli <cit.>. In these cases, gravity offloading must be applied <cit.>, but further work would still be required to properly model the complex interplay of inertial, gravitational, and frictional forces taking place.
§.§ Dynamic stability
The first experiment was aimed at evaluating the contribution of the independent shock absorbers when moving at high speed over a 10-m, nearly-flat, unconsolidated ground in both transient and steady-state conditions. In this case, the rover was commanded to follow a straight trajectory divided into three phases: a) a first phase where the rover accelerates up to 1.0 m/s, b) a second phase where the rover runs at a uniform maximum speed of 1.0 m/s, and c) a final phase where the rover is decelerating to a full stop (see fig:maneuverability_field_test).
We performed these tests with the rover in two different suspension configurations: a representative DR configuration, in which the rocker is free to rotate but the shock absorbers are replaced by rigid elements locking the low-travel suspension in place; and the MHS, with the elastic elements of the suspension free to move. Six runs were conducted for each suspension configuration. Table <ref> lists the result of comparing the two configurations based on the vertical acceleration of the chassis as recorded by an IMU fixed to the top of the attachment element of the left-side rocker (see fig:ex1). To reduce the level of sensor read noise, a 4-point moving average filter was applied before extracting max. and min. values, while the mean of the standard deviation of the vertical accelerations was computed from the original, unfiltered data across all six runs.
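The post-processing described above can be reproduced with a short script such as the following sketch; the array layout of the logged IMU samples is an assumption on our side.

```python
import numpy as np

def summarize_vertical_accel(az, window=4):
    """Summarize vertical acceleration as described in the text: a 4-point
    moving average before extracting extrema, and the standard deviation
    computed on the original, unfiltered samples."""
    az = np.asarray(az, dtype=float)
    smoothed = np.convolve(az, np.ones(window) / window, mode="valid")
    return {"max": smoothed.max(), "min": smoothed.min(), "std_raw": az.std()}
```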
Results confirm an overall reduction of the vertical accelerations experienced by the chassis when the MHS is used. This is particularly significant during the acceleration phase where the rover experienced greater vibrations due to an observed increase in wheel slippage. This is aligned with a well-established understanding of the performance of elastic suspensions in off-road terrestrial vehicles but it was important to evaluate the potential interference that in-wheel shock absorbers could have on the movement of the free-balancing rockers, less common in terrestrial applications.
§.§ Obstacle negotiation capability
In this second experiment, the rover was commanded to drive its left-side wheels over a 10-cm rock (see fig:obstacle_negotiation_field_test). We now wanted to observe the potential differences in performance when dependency was introduced into the design of a conventional independent suspension configuration. We compared the obstacle negotiation capability between an IE configuration (i.e., rocker rotation locked) and the MHS. Tests were performed at three different speeds (0.2, 0.5, and 1.0 m/s) and each test was conducted three times for each suspension configuration and speed. The magnitude of the force applied to the front wheels was recorded by an in-wheel force/torque sensor and vertical accelerations were measured by the same IMU as in the dynamic stability tests.
Table <ref> also gathers all the IMU measurements recorded during the obstacle negotiation tests. Both the IE and MHS configurations successfully overcame the obstacle at 0.2 and 0.5 m/s, but it was only with the MHS that the rover was capable of seamlessly negotiating the rock at 1.0 m/s. At this speed, the observable impact on the IE configuration was such that no successful runs were ultimately conducted with this configuration due to concerns over safety and the structural integrity of the rover. fig:force_norm_comparison_in_obstacle_negotiation displays the average of the norm of the force vector acting on the front wheels when overcoming the rock for both the IE and MHS configurations. Dampening of impact loads on the front-left wheel (both mean and maximum force) was also greater in the case of the MHS and the degree of dampening increased with speed, reducing the loads by 24% on average across wheels and speeds. The main benefit associated with the addition of dependency is the increased pressure exerted on the wheels not overcoming the obstacle, providing the right-side wheels with greater traction and drastically improving the obstacle-surmounting capability of the rover (see fig:force_norm_comparison_in_obstacle_negotiation, fig:right_wheel_force).
§ CONCLUSION
The increased autonomy demanded by upcoming missions to the Moon and Mars implies planetary robots have to be capable of coping with a wide range of disturbances. The addition of compliant elements to the suspension system of these robots appeared to be vital in counteracting the detrimental effects of impact loads and vibrations when driving at high velocities (≥ 1 m/s) under weaker gravity fields. But even when these elements are included, the specific configuration of the suspension design plays an important role in the rover's ultimate performance. A new passive suspension configuration, the so-called mechanically-hybrid suspension (MHS), was proposed and compared with more traditional rocker and independent swing arm suspensions. The MHS combines the functional benefits of both dependent and elastic elements. Simulation results under a lunar-like gravity field confirmed our initial hypothesis. Field test results validated that an MHS configuration could greatly improve stability while successfully isolating the chassis of the rover from unwanted vibrations and impact loads beyond what could be accomplished with either of the other two commonly used passive configurations. An improved suspension design also affects other aspects involved in navigation, lowering the demand on perception and control systems, increasing duty cycles, and enabling higher levels of autonomy. Future work will explore additional improvements and variations in the suspension configuration such as combining the MHS with non-pneumatic, flexible wheels. Their combination could bring about higher levels of stability and terrain compliance while further reducing non-vertical impact loads and vibrations.
§ ACKNOWLEDGMENT
The authors would like to thank Alan Allart, Tristan Lecocq, Kazuki Nakagoshi, Ryusuke Wada, Danishi Ai, and Merlijn Siffels for their invaluable help and support in the development of EX1.
|
http://arxiv.org/abs/2307.04303v1 | 20230710014413 | Learning to Generate Equitable Text in Dialogue from Biased Training Data | [
"Anthony Sicilia",
"Malihe Alikhani"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Learning to Generate Equitable Text in Dialogue from Biased Training Data
Anthony Sicilia and Malihe Alikhani
===========================================================================
The ingrained principles of fairness in a dialogue system's decision-making process and generated responses are crucial for user engagement, satisfaction, and task achievement. Absence of equitable and inclusive principles can hinder the formation of common ground, which in turn negatively impacts the overall performance of the system.
For example, misusing pronouns in a user interaction may cause ambiguity about the intended subject. Yet, there is no comprehensive study of equitable text generation in dialogue. Aptly, in this work, we use theories of computational learning to study this problem. We provide formal definitions of equity in text generation, and further, prove formal connections between learning human-likeness and learning equity: algorithms for improving equity ultimately reduce to algorithms for improving human-likeness (on augmented data). With this insight, we also formulate reasonable conditions under which text generation algorithms can learn to generate equitable text without any modifications to the biased training data on which they learn. To exemplify our theory in practice, we look at a group of algorithms for the GuessWhat?! visual dialogue game and, using this example, test our theory empirically. Our theory accurately predicts relative-performance of multiple algorithms in generating equitable text as measured by both human and automated evaluation.
§ INTRODUCTION
Machine learning models for text generation in dialogue have trouble learning the “long tail” of a data distribution; i.e., the data concepts not frequently observed during training. For example, dataset biases like gender imbalance can induce a long tail in training data whereby important data relationships involving gender are underrepresented, like women in sports <cit.>. When training, generative models often fail to learn these concepts in the long tail, and ultimately, learn inequitable, stereotyping behaviors instead (see Figure <ref>). These non-inclusive behaviors not only decrease user-satisfaction by isolating users <cit.>, but also impede common ground, hindering the task-success of the dialogue system.
Despite the multi-faceted impact of inequitable text generation in dialogue, we do not have a comprehensive and theoretically grounded framework for understanding how machines learn to generate inequitable text and when this outcome can be avoided. To provide a strong technical foundation for equitable generation in dialogue, we build on theories of computational learning <cit.>.
Specifically, our theoretical contributions are as follows:
* We define precise constraints that encapsulate diverse notions of equity in dialogue (Def. <ref>).
* We rigorously compare our proposals to traditional notions of equity in classification ( <ref>).
* We show computational learning theory models equitable learning well: algorithms from learning theory are easily adapted to learn equitable dialogue by augmenting data (Thm. <ref>).
* We prove algorithms based on learning theory can even learn to generate equitable text from some types of biased training data (Thm. <ref>).
Loosely, Thm. <ref> is based on the idea that, when provided sufficient background, human text is not biased because it is typically context-aware (Def. <ref>). For example, when the subject is a female scientist, a human will likely not use male pronouns in subject-referring conversation because humans tend to correctly employ dialogue context to inform their language use.
Instead, in many real-world datasets, bias is an aggregate property, arising from inequality of the proportions of protected attributes such as race or gender; e.g., more conversations about male than female doctors.
The theoretical understanding we contribute is imperative because it informs algorithm design. In particular, using our theory, we can predict:
* the most equitable algorithms for unseen data;
* counter-intuitive properties of algorithms that lead to less equitable results.
For example, consider algorithms which naïvely augment data to remove bias <cit.>. Through theoretical study, we identify cases where this practice can actually hurt an algorithm's chances at learning to be equitable. In fact, our experiments in <ref> confirm this.
The remainder of the paper is organized as follows: <ref> provides background to position our contributions including discussion of related work, a brief tutorial on the employed learning theoretic framework, and a few running examples used throughout the text; <ref> provides our theoretical contributions including formulation of mathematical notions of equity in text generation and theoretical analysis of learning algorithms; <ref> conducts experiments which validate our theory in practice; and finally, <ref> concludes the work. Code, data, and a Python package will be made publicly available to promote further research.[https://github.com/anthonysicilia/equitable-dialogue-ACL2023]
§ BACKGROUND AND RELATED WORK
§.§ Learning Theory for Dialogue
Recent proposals for the use of learning theory in dialogue are due to <cit.>, who propose LEATHER.[LEArning THeory for Text-GenERation] Specifically, LEATHER is a formal framework for studying the diverse objectives present when learning to generate text. Ultimately, their proposal is grounded in a general evaluation metric – the test divergence. Intuitively, test divergence mimics practical evaluation, in which we conduct tests to evaluate the generated dialogue:
𝐓𝐃_𝔾(θ) = 𝐄[| h(D, U) - h(D̂, U) |]
where (C, D) ∼𝔾, D̂∼ℙ_θ(C), U ∼𝕌.
Of course, there are a number of undefined terms here: specifically, the test h, the context C, the goal dialogue D, the learned dialogue D̂, and the unobserved effects U. Below, we explain each, using examples from Figure <ref> to assist our exposition.
Goal Distribution The goal distribution 𝔾 is a joint probability distribution over dialogue contexts c ∈𝒞 and dialogues d ∈𝒟. For <cit.>, the goal is to generate human-like text. So, as in the visual dialogue example in Figure <ref>, the context might be an image/goal-object and the goal dialogue might be sampled from a (human) corpus of QA pairs with this context.
Learned Dialogue Distribution The learned dialogue distribution is the probability kernel ℙ_θ(C) that provides a distribution over dialogues, conditional to the parameters θ learned by the machine (e.g., neural parameters) as well as the random dialogue context C. The precise manner in which dialogue occurs will vary from system to system, but typically involves a machine generating/prompting responses to/from human users as in Figure <ref>. This interaction implicitly defines the random process through which a set of parameters θ and a random context C produce a predicted dialogue D̂. Importantly, the learning machine may not control every aspect of the process – e.g., the human responses. Aptly, we encapsulate this unknown randomness by the distribution ℙ_θ(C). In some cases, we will consider the joint distribution of both (goal) contexts and learned dialogues; i.e., of the random tuple (C, D̂). We write 𝔾̂_θ for this joint distribution.
Test Function with Unknown Effects The final component is the test function (or simply test) h. The test takes as its primary input a dialogue and returns a value in the interval [0,1]. Conceptually, a test can represent any evaluation process in which we are interested. For example, some tests commonly employed in practice include n-gram overlap metrics such as BLEU <cit.>, sentiment scores from a pre-trained classifier, or even a score attained through human evaluation. The unknown effect U ∼𝕌 represents any additional information needed to completely determine the outcome of the test. When the test is BLEU, U simply takes the form of a reference dialogue to which the input dialogue is compared. For human evaluation, U encapsulates all of the unknown variables that contribute to the randomness of a real-world experiment. Often, U may not be needed.
Interpretation With terms defined, it is easy to see the test divergence is a direct comparison of the output of the test from the goal dialogue D to the predicted dialogue D̂, learned by our dialogue system. Larger test divergence indicates the learned dialogue fails to replicate the goal dialogue along the dimensions targeted by the test. For example, if the goal is human-likeness in the visual dialogue example from Figure <ref>, a test might target question strategies <cit.>.
Small test divergence in these cases indicates the learned dialogue uses similar strategies as the (human) goal.
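A Monte Carlo estimate of the test divergence is straightforward to write down; the sketch below assumes the goal corpus, the learned model, and the test are exposed through simple sampling and scoring interfaces, which are placeholders rather than part of any particular implementation.

```python
import random

def estimate_test_divergence(contexts, goal_sampler, model_sampler, test_fn, n=1000):
    """Monte Carlo estimate of TD(theta) = E[|h(D, U) - h(D_hat, U)|].
    contexts: i.i.d. samples of the context C; goal_sampler(c) / model_sampler(c)
    return a goal / generated dialogue for c; test_fn(d, u) returns a value in [0, 1]."""
    total = 0.0
    for _ in range(n):
        c = random.choice(contexts)      # C ~ goal marginal over contexts
        d_goal = goal_sampler(c)         # D ~ G(. | C)
        d_pred = model_sampler(c)        # D_hat ~ P_theta(C)
        u = None                         # unknown effects U, if the test needs them
        total += abs(test_fn(d_goal, u) - test_fn(d_pred, u))
    return total / n
```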
§.§ Related Works on Equity
In natural language, popular, early studies of equity begin with avoiding stereotyping in learned model representations <cit.>. This approach has continued to inspire many de-biasing techniques for learned representations <cit.> and evaluation techniques for the equity of representations <cit.>. De-biasing and evaluation techniques for model representations have also been adapted for text-generation tasks <cit.>.
Still, these model-intrinsic approaches to resolving inequity have proven subpar compared to model-extrinsic approaches, which focus directly on the downstream task <cit.>. For this reason, our approach tackles the problem of equitable dialogue generation from an extrinsic point-of-view. Previously, in text-generation, extrinsic points-of-view have typically used change in scoring functions (e.g., for sentiment, gender-polarity, etc.) to measure equity <cit.>. Our work is in line with these, but provides formal theoretical study, and further, focuses more specifically on dialogue. Formal theoretical study is vital to understanding equity, because imprecision in problem assumptions and objectives has already proven to be a pitfall in existing works on equity <cit.>. For example, in classification, detailed theoretical study reveals a complex relationship of trade-offs between accuracy and (some) notions of equity <cit.>, contributing to algorithmic advances <cit.>. Our work continues this trajectory, offering valuable practical insights, which are sometimes unintuitive, to achieve equity in machine dialogue.
Finally, it is worthwhile to note that <cit.> also contribute a formal, theoretical definition of fairness in dialogue. Our work contributes a more general definition of equity – i.e., which supports arbitrary types of dialogue context and more general types of dataset bias. As noted, we also make connections with learning theory to provide key insights on algorithm and dataset design. Indeed, ours is the first work to study bias in text generation using these insightful techniques from computational learning theory.
§ FORMALIZING EQUITY IN DIALOGUE
§.§ Formal Definitions for Equity
In this part, we introduce some formal, mathematical notions of equity. We start with a general notion of equity in dialogue and show how this can be specialized to compare with ideas of equity in the classification literature. For proofs, see Appendix <ref>.
Protected Attributes To begin, we need to first define the notion of a protected attribute. Conceptually, this is the sensitive variable (e.g., race, gender, religion, etc.) that we intend to “protect” by the equity constraint. Otherwise, presumably, system inequities would disproportionately, negatively impact the sub-population captured by the attribute. Throughout this work, we use a variable a ∈𝒜 = {0,1} to denote the protected attribute and we measure equity of the text with respect to this variable. Precisely, a=1 implies the dialogue context exhibits the attribute (e.g., female gender, Black race, Muslim religion), while a=0 implies the context does not exhibit the protected attribute. For example, in the educational dialogue from Figure <ref>, the context is a discussion topic and the protected attribute is female gender. Since the topic is a female scientist, it exhibits the protected attribute and we would have a=1. If the topic was “Science” more generally, it would not exhibit the protected attribute and it would be appropriate to set a=0. In general, we expect the protected attribute to vary randomly with the dialogue context C. To model this in a general way, we assume the attribute is sampled from a probability distribution which is dependent on the random context: A ∼𝔸(C).
For example, in the visual dialogue from Figure <ref>, the protected attribute A is female gender, which is non-deterministically dependent on the visual features of the image C. In other cases, like the educational example, the protected attribute may be completely determined by context. 𝔸 can model this as well – e.g., as a point mass.
Equity as Score Parity
Commonly, equity in machine learning systems is formally defined through a notion of parity <cit.>. In dialogue, we can express parity as the following requirement:
The system uses language in the same way, regardless of protected attribute.
This intuitive notion of equity is vague in its use of “way” to be general, allowing for specification to different applications. For example, <cit.> both consider the toxicity and sentiment of language as the pertinent “way” in which language is used, when measuring equity. A classifier is used to estimate the toxicity or sentiment of the used language, and equity occurs if this classifier's outputs are invariant of the protected attribute. For example, if the protected attribute is Muslim religion, the dialogue should be no more “toxic” when its context is specific to Muslims, than when its context is not specific to Muslims. Below, we formalize this intuition for equity with a mathematical constraint.
(Score Parity) A contextualized dialogue distribution[Frequently, we use contextualized dialogue distribution to refer to any joint distribution over contexts and dialogues.] 𝔾 with (C,D) ∼𝔾 and A ∼𝔸(C) satisfies score parity if
𝐄[s(D, 0) | A = 0] = 𝐄[s(D, 1) | A = 1]
where s is a scoring function s : 𝒟×𝒜→ [0,1].
To arrive at our motivating example <cit.>, one simply chooses the scoring function s to be a toxicity classifier or a sentiment classifier. The expected output of this classifier should be the same, regardless of the protected attribute's setting. In general, if equality does not hold in the above definition of parity, we follow <cit.> using Δ to denote the gap across attributes:
Δ(𝔾) = |𝐄[s(D,0)| A=0]
- 𝐄[s(D,1)| A=1] |.
This lets us talk about degrees of inequity, and therefore, measure progress towards our ideals.
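Empirically, the parity gap can be estimated from any set of dialogues annotated with the protected attribute; a minimal sketch, with an arbitrary user-supplied scoring function, is given below.

```python
def parity_gap(dialogues, attributes, score_fn):
    """Empirical estimate of Delta: |E[s(D,0)|A=0] - E[s(D,1)|A=1]|.
    dialogues: list of dialogues; attributes: matching 0/1 labels;
    score_fn(d, a): scoring function s with outputs in [0, 1]."""
    s0 = [score_fn(d, 0) for d, a in zip(dialogues, attributes) if a == 0]
    s1 = [score_fn(d, 1) for d, a in zip(dialogues, attributes) if a == 1]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return abs(mean(s0) - mean(s1))
```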
Multi-Category Score Parity
Notice, we use the presence/absence of singular demographic groups (e.g., female v. not female) instead of binary comparisons (e.g., female v. male) in defining the protected attribute. This choice allows our definition of equity (above) and later theory to support study of general multi-category attributes with more than two attributes like race (e.g., Black, White, Asian) or religion (e.g., Muslim, Jewish, Catholic).
Using race as an example, we can measure the parity gap when Black is the protected attribute, White is the protected attribute, Asian is the protected attribute, etc. The dataset is then equitable for all races (according to score parity) if all measured parity gaps are 0. In this way, our definition and subsequent results can generalize to the multi-category case. We use this strategy, for example, in Section <ref>.
Comparison to Demographic Parity
In classification, demographic parity is a commonly studied notion of equity <cit.>, which stipulates that a classifier's outputs should be independent of the protected attribute. For a classifier c, mapping random features X to a {0,1}-valued label, this can be written:
𝐄[c(X) | A = 0] = 𝐄[c(X) | A = 1].
For score parity, when s(·, 0) = s(·, 1), the scoring function s does not depend on the attribute and we see that score parity is a direct reflection of demographic parity. Whereas classification problems use machine learning to select the classifier c in a fair way, dialogue uses machine learning to select the feature distribution X (i.e., D in our definition).
Comparison to Accuracy Parity
Depending on the application, it is known that demographic parity can also be an inappropriate constraint; e.g., if the classifier c is meant to predict the protected attribute itself <cit.>. This precise situation is inherent to dialogue, since some aspects of language are compulsorily predictive of the protected attribute (e.g., gendered pronouns or religious terminology). Fundamentally, there is a trade off between the accuracy of the language used and the desired invariance. In these cases, <cit.> suggest accuracy parity as an alternative, which requires equal error rates, regardless of protected attribute. For Y the true label to X and c as in Eq. (<ref>), this can be written:
𝐏𝐫(c(X)≠ Y | A = 0) = 𝐏𝐫(c(X)≠ Y | A = 1).
By our definition, score parity can be used to reflect this distinct notion from classification as well. Conceptually, we select our scoring function to measure the correctness of the dialogue. Then, just like accuracy parity, score parity enforces equal error rates, regardless of protected attribute. While details may vary based on application, we consider selecting the scoring function in the examples from Figure <ref>. We first define an identifier function v : 𝒟→{0,1} which indicates whether a dialogue d ∈𝒟 verbalizes the protected attribute. For example, we can imagine v scans for female gendered words {she, her, girl, ...}. Then, our system makes an “error” if it fails to verbalize the protected attribute or inappropriately verbalizes the attribute. So, we select the scoring function to reflect this:
s(D, A) = | A - v(D) |.
With the choice of scoring function above, score parity reflects the intuition of accuracy parity by requiring that the correctness of the language use (in referring to a protected attribute) is independent of the protected attribute.
As alluded, this constraint can be especially useful in case spurious correlations (i.e., stereotypes) between protected attributes and context cause different error rates with/without a protected attribute. This is the case in our toy examples (Figure <ref>) as well as some real-world generation tasks <cit.>.
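To make this scoring function concrete, the sketch below implements one possible identifier v as a keyword match for female-gendered words and plugs it into the score s; the tokenization and word list are illustrative choices, not the only reasonable ones.

```python
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "girl", "girls", "gal", "gals"}

def verbalizes(dialogue, words=FEMALE_WORDS):
    """Identifier v(d): 1 if the dialogue (a list of turn strings) uses any listed word."""
    tokens = {t.strip(".,?!").lower() for turn in dialogue for t in turn.split()}
    return int(bool(tokens & words))

def accuracy_parity_score(dialogue, attribute):
    """s(D, A) = |A - v(D)|: an 'error' when the attribute is verbalized
    inappropriately or is not verbalized when it should be."""
    return abs(attribute - verbalizes(dialogue))
```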
Takeaways The formalization of equity we introduce – score parity – is both general and useful. It models existing ideas for empirical evaluation of equity in text-generation <cit.> and can also be used to model disparate notions of equity from existing classification theories <cit.>. Ultimately, the choice of the scoring function s determines the “way” in which the language should be invariant to the protected attribute, and subsequently, dictates the motivating goals of the equity constraint.
§.§ Evaluating Equity with Learning Theory
Next, we show how learning to generate equitable text can be modeled with learning theory.
Test Divergence (Reprise)
To evaluate equity with , the objective in Eq. (<ref>) remains largely unchanged. Primarily, we explicitly incorporate the protected attribute:[Equivalently, one can group A with the unknown effects and keep Eq. (<ref>). The rewrite only makes assumptions explicit.]
𝐓𝐃_𝔾(θ) = 𝐄[| h(D, A, U) - h(D̂, A, U) |] where
(C, D) ∼𝔾, D̂∼ℙ_θ(C), A ∼𝔸(C), U ∼𝕌.
Importantly, we must consider the deviations from <cit.> not present in Eq. (<ref>): (1) the choice of goal distribution 𝔾 and (2) the choice of test h. Originally, focus on evaluation of human-like dialogue, and therefore, propose the goal to be defined by any collected corpus of contextualized human dialogues. Instead, we are interested in the equity of the contextualized dialogue and cannot blindly use human dialogue as an example; i.e., we cannot take for granted that the contextualized human dialogue is equitable. Thus, to appropriately evaluate equity, we generally assume the following constraints on the goal distribution and test.
Equitable Goals and Tests
(Balanced) A contextualized dialogue distribution 𝔾 is balanced if it assigns equal (marginal) likelihood to the protected attribute:
𝐏𝐫(A = 1) = 𝐏𝐫(A = 0); (C,·) ∼𝔾, A ∼𝔸(C).
(Equitable Goal) We say a contextualized dialogue distribution 𝔾 with (C,D) ∼𝔾 is an equitable goal distribution if it is balanced and satisfies score parity (for some fixed score s).
So, intuitively, we propose the goal in equitable dialogue is a contextualized dialogue distribution which is itself equitable, according to our formal definition of this property – i.e., score parity. Furthermore, it should be balanced to prioritize the protected attribute equally during evaluation.
As we'll see later, choosing the test h to be the scoring function s from our previous definition allows us to use 𝐓𝐃 (with an equitable goal) to control the parity gap of our learned dialogue.
Biased Data While the formal definition above (Def. <ref>) is about equity, it should also be noted that we implicitly arrive at a formal definition for bias: the absence of equity. In particular, a contextualized dialogue distribution (dataset) is biased if it is not equitable. Note, this also distinguishes biased data from other common concepts like noisy data because we use an expectation to quantify parity; i.e., which is immune to non-systemic noise.
Small Test Divergence Implies Equity
Consider an equitable goal 𝔾 and let h ≡ s (the scoring function). Then, Δ(𝔾̂_θ) ≤ϵ whenever 𝐓𝐃_𝔾(θ) ≤ϵ / 2.
Simply, the above result indicates minimization of 𝐓𝐃 with an equitable goal and appropriate test leads to an equitable learned dialogue distribution.
Takeaways An important consequence of Thm. <ref>
is the ability to confidently use algorithms designed in the framework (i.e., to reduce test divergence) for equitable dialogue learning. While these algorithms may have originally been designed to learn human-like dialogue, they can easily be modified to learn equitable dialogue. In particular, we need only change the goal from any human dialogue distribution to any equitable dialogue distribution – as in Def. <ref>. Portability of algorithms in the sense described means, ultimately, a unified theory for dialogue generation. For any algorithm we propose, we may conduct a singular theoretical analysis of test divergence that can serve multiple purposes – both human-like and equitable dialogue generation. In other words:
LEATHER-based algorithms for human-likeness can be used to learn equitable text by simply augmenting training data.
Some standard examples of how to create the new equitable goal 𝔾 include augmenting data in the dataset to achieve equitable constraints <cit.>. The takeaway from our theorem above agrees with existing empirical study: we can typically expect these strategies to be effective. Still, as we see next, there are other effective alternatives (under the right assumptions).
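One such strategy, sketched below under the simplifying assumption that attribute labels are available per example, is to downsample the majority attribute so that the resulting corpus is balanced in the sense defined earlier; note that balancing alone does not guarantee score parity, which must still hold (or be checked) within each group.

```python
import random

def downsample_to_balance(examples, attributes, seed=0):
    """Downsample the majority attribute to equalize Pr(A=1) and Pr(A=0).
    examples: list of (context, dialogue) pairs; attributes: matching 0/1 labels."""
    rng = random.Random(seed)
    groups = {0: [], 1: []}
    for ex, a in zip(examples, attributes):
        groups[a].append(ex)
    n = min(len(groups[0]), len(groups[1]))
    balanced = rng.sample(groups[0], n) + rng.sample(groups[1], n)
    rng.shuffle(balanced)
    return balanced
```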
§.§ Learning to be Equitable and Human-like
Next, we study the circumstances under which the goals of human-like dialogue learning and equitable dialogue learning align. That is, we study circumstances under which an algorithm designed to minimize 𝐓𝐃 can learn from (biased) human-like goal data and simultaneously learn to be equitable.
Context and Its Role (Assumptions)
(Context-Awareness) Consider an equitable goal distribution 𝔾. A contextualized dialogue distribution ℍ≠𝔾 is context-aware if [We use the shorthand 𝐏𝐫(C| D) = 𝐏𝐫(C̃|D̃) to mean: 𝐏𝐫(C = c| D = d) = 𝐏𝐫(C̃ = c|D̃ = d) ∀ (c,d) ∈𝒞×𝒟.]
𝐏𝐫(D | C) = 𝐏𝐫(D̃|C̃); (C̃,D̃) ∼ℍ, Ã∼𝔸(C̃).
(Context-Preservation) The distribution ℍ preserves context if
𝐏𝐫(C | A) = 𝐏𝐫(C̃|Ã); (C̃,D̃) ∼ℍ, Ã∼𝔸(C̃).
The definitions are based on the idea of label-shift used to study data-shift at test time <cit.>. In this paper, we think of ℍ as the possibly inequitable distribution of human contextualized dialogues (determined by some corpus). So, these definitions can be viewed as assumptions of how inequity presents itself in human data.
Context-awareness assumes that humans are not biased provided the background context C. Conceptually, this is reasonable, since humans use context to form inferences about attributes of other human subjects (even protected attributes). If background is sufficient, human inferences will often be correct inferences and the dialogue should be equitable with respect to accuracy parity, at least.[Perfectly correct dialogue satisfies accuracy parity because it satisfies s ≡ 0 in Eq. (<ref>), regardless of A.] Instead, bias in the considered corpus must arise from aggregate disproportions of attributes (see <ref>).
Context-preservation assumes that the presentation of the context for attributes does not change. In other words, the features of the protected attribute which present themselves through the context should be invariant across 𝔾 and ℍ. For example, if one attempts to infer race from an image, this assumption simply states the visual features indicative of race should be consistent. The assumption would be violated, for example, if 𝔾 protects Asian males and ℍ protects Asian females.
Test Divergence Learning Bound
In this part, for simplicity, we assume the parameters θ are learned from a finite space Θ. Other proof techniques may allow arbitrary Θ; e.g., <cit.>.
Consider an equitable goal 𝔾 with associated test h. Suppose a sample of i.i.d. human data is collected 𝕊 = (C̃_i,D̃_i)_i=1^m; (C̃_i, D̃_i) ∼ℍ. Suppose ℍ is context aware and preserves context. Then, for all δ > 0, with probability at least 1-δ, for all θ, 2β×𝐓𝐃_𝔾(θ) is bounded above by
1/m∑_i=1^m |h(D̃_i, Ã_i)_human - h(D̂'_i, Ã_i)_predicted| + √((log|Θ| + ln(2/δ)) / (2m))_data efficiency
where β = min_a 𝐏𝐫(Ã = a).[Note, we also pose a technical requirement: pairwise independence must hold (conditional to the context) between the human dialogue, the predicted dialogue, and the protected attribute. This is not an overly strong assumption; see Appendix <ref> for a detailed discussion with examples.]
For interpretation, we break down the upperbound on 2β×𝐓𝐃_𝔾(θ) into two terms: (a) the difference in test output from the human dialogue to the predicted dialogue and (b) a data efficiency term dependent on the number of i.i.d samples m.
Equity from Biased Data
Notice, the predicted dialogue in (a) is dependent on the human dialogue's context C̃_i – not the goal dialogue's context C – so (a) is actually identical in definition to 𝐓𝐃_𝕊, an empirical observation of 𝐓𝐃_ℍ. That is, (a) is test divergence computed on a human corpus as was done by <cit.>. Since (a) uses a human dialogue corpus to define its goal, Eq. (<ref>) implies that learning human-like dialogue (via LEATHER) can also optimize the equity of the dialogue by reducing an upperbound on the equitable goal 𝐓𝐃_𝔾. This is true even if the goal human data is biased. In other words:
LEATHER-based algorithms learn human-likeness and equity, even on biased data.
We only require the human data to be context-aware and preserve context (Defs. <ref> and <ref>).
Data Efficiency The above interpretation of (a) is only valid if the data efficiency term (b) is also small. For interpretation, we consider the size of the parameter space Θ fixed and focus on the number of i.i.d training samples m. As m increases, (b) ultimately goes to 0 and the effect of (a) dominates the bound. In some cases though, if m is too small (b) can also have an impact. For example, this may be the case when using data-augmentation strategies to create a more equitable distribution. In particular, augmentation reduces the number of i.i.d. data points by creating dependencies in the data, which can reduce the data-efficiency of learning algorithms <cit.>. That is, augmentation can increase the size of (b) in learning bounds on test divergence,[For discussion, see the pf. of Thm. <ref> and remarks.] or in other words:
Augmenting training data to improve equity can reduce data-efficiency, and ultimately, model performance.
Impact does depend on the augmentation strategy, so we study common proposals for equity, next.
§ EXPERIMENTS
In Section <ref>, we conclude by outlining algorithmic insights revealed by our theory. Next, we test these theories on the GuessWhat?! game corpus.
§.§ Dataset, Algorithms, and Evaluation
Unless otherwise noted, we use identical experimental settings, hyperparameters, etc. as <cit.>.
Dataset
Our dataset is the corpus for the GuessWhat?! game proposed by <cit.>. Gameplay is described in Figure <ref> and an example is shown as the visual dialogue in Figure <ref>. We also give a detailed description of the game rules in Appendix <ref>. We use the original train/val. splits and provide statistics on this corpus in Appendix <ref>. For training, unless otherwise noted, we use the full train set and report 1 seed. We focus on modelling the question-player and use an automated answer-player trained on human data.
Protected Attribute For these experiments, we use gender (male and female) as the protected attribute. When the protected attribute is female gender (𝐅), we set a=1 as long as all human dialogues use at least one female-gendered word.[{she, woman, her, hers, gal, girl, women, gals, girls}] When the protected attribute is male gender (𝐌), we set a=1 as long as all human dialogues use at least one male-gendered word.[{he, man, him, his, guy, boy, men, guys, boys}] Conceptually, this labeling scheme uses human annotator consensus to determine when it is appropriate or inappropriate to ask gender-specific questions: if a=1, all human annotators perceive the protected gender to be present in the image and relevant to gameplay. Importantly, the labeling scheme also implies that the human dialogue satisfies our assumptions in <ref>: context awareness (Def. <ref>) and context preservation (Def. <ref>); i.e., as shown in Appendix <ref>. Different conceptualizations of how the protected attribute should be defined are possible, but we focus on this scheme because it allows us to simulate the assumptions of our theory in <ref>, and therefore, best test our theory in practice. As a final note, while we focus on male/female gender in these experiments, using more than two categories for protected attributes is also possible. Simply, one checks the parity gap for each new protected attribute to be added. This would allow our theoretical and empirical study to be extended to general multi-category attributes; e.g., race or religion.
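The labeling scheme just described can be written as a short consensus check over all human games for an image; the word lists are the ones in the footnotes, while the dialogue representation (a list of turn strings) and tokenization are assumptions for illustration.

```python
FEMALE_WORDS = {"she", "woman", "her", "hers", "gal", "girl", "women", "gals", "girls"}
MALE_WORDS = {"he", "man", "him", "his", "guy", "boy", "men", "guys", "boys"}

def attribute_label(human_dialogues, gendered_words):
    """a = 1 only if every human dialogue for this image uses at least one listed word."""
    def uses(dialogue):
        tokens = {t.strip(".,?!").lower() for turn in dialogue for t in turn.split()}
        return bool(tokens & gendered_words)
    return int(all(uses(d) for d in human_dialogues))

# e.g., a_female = attribute_label(games_for_image, FEMALE_WORDS)  # games_for_image is hypothetical
```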
Algorithm is a cooperative learning algorithm proposed by <cit.> to model the question-player. The algorithm is based primarily on a self-play learning phase <cit.> which learns from machine-machine dialogue. This is used in addition to (after) a more traditional supervised learning phase (i.e., on human-human dialogue). See Appendix <ref> for details.
Algorithm An extension of proposed by <cit.> with the purpose of better optimizing test divergence during the self-play learning process. Through some theoretical analyses, ultimately, the authors propose to regularize the self-play phase by re-incorporating human-human data from the supervised phase.
Algorithm A modification of the algorithm. While re-incorporating human data, an augmentation (downsampling) strategy is used to balance occurrence of protected attributes; i.e., like other strategies for equity <cit.>. See Appendix <ref> for details.
Human-Likeness Evaluation To evaluate human likeness, we use metrics proposed by <cit.>: average accuracy 𝐚𝐜𝐜 in identifying the true goal-object across three random seeds, average lexical diversity (𝐥𝐝𝐢𝐯; type/token ratio over all dialogues), average question diversity (𝐪 𝐝𝐢𝐯; % unique questions over all dialogues), and average percent of dialogues with repeated questions (𝐫𝐞𝐩 𝐪). We report these on the full test data.
Equity Evaluation To evaluate equity, we focus on accuracy parity; i.e., score parity with scoring function described in Eq. (<ref>).[We focus on accuracy parity because the dataset we consider is not likely to exhibit any significant parity issues in toxicity, sentiment, etc. Instead, the systemic biases in the data are most likely to impact accuracy parity.] To replicate evaluation against the goal distribution in Def. <ref>, we apply an augmentation strategy to the test set (similar to the algorithm; see Appendix <ref>). Because our ground truth data is inferred from human annotators focused on game success, we also incorporate additional human annotations. 𝐡𝐮𝐦.𝐞𝐯𝐚𝐥. is % of model dialogues using gendered words correctly based on annotation (50 per method per annotator). Namely, two annotators[College educated, native English speakers.] were asked to determine correctness of gendered word use, evaluating both incorrect usage as well as false negatives; i.e., where use would be appropriate/helpful.[To prime responses, annotators were prompted with questions like “If any gendered words were used, were they used correctly?” as well as “If a gendered word was not used, would it have been helpful to use one to complete the task?”.]
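As a concrete reference for the equity metric, the following minimal sketch computes the accuracy-parity gap from per-game records; the record format (attribute value, guess correctness) is a hypothetical simplification of the evaluation pipeline.

def parity_gap(records):
    # records: iterable of (a, correct) pairs with a in {0, 1} and correct in {0, 1}.
    by_group = {0: [], 1: []}
    for a, correct in records:
        by_group[a].append(correct)
    means = {a: sum(v) / len(v) for a, v in by_group.items() if v}
    return abs(means.get(0, 0.0) - means.get(1, 0.0))

results = [(1, 1), (1, 0), (1, 1), (0, 1), (0, 1), (0, 0)]
print(parity_gap(results))  # |2/3 - 2/3| = 0.0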
§.§ Results
produces human-like, equitable text. In Tab. <ref>, improves upon in terms of both human-likeness and equity, across all metrics. These observations validate our theoretical analyses. In particular, (as the name implies) is designed based on the framework to minimize test divergence. From previous work, we know this means it should improve human-likeness <cit.>. Now, from our current theoretical study (Thm. <ref>), we also hypothesize can improve equity as long as certain assumptions are met (Def. <ref>, <ref>). Since the dataset we study satisfies the specified assumptions, our theoretical expectation of is the multi-faceted improvement we observe. That is, our theory predicts the empirical improvements in human-likeness and equity achieved by . The ability of our theory to predict the impact of algorithm design choices is an important practical implication. We are also able to draw similar conclusions for , which we discuss next.
does not improve equity as well as , but overall, its behavior aligns with our theoretical predictions.
Thm. <ref> also makes the observation that data-augmentation strategies like can sometimes perform worse than alternatives which focus only on human-likeness (i.e., due to data-inefficiency). Since does augment data significantly, we might expect to perform worse than , and ultimately, it does in Tab. <ref> (all metrics but Δ 𝐌). With that said, another of our theoretical results (Thm. <ref>) suggests data-augmented versions of algorithms like can, in fact, improve equity, especially in more general cases where data does not satisfy the circumstances of our experimental data. In experiments, this insight is reflected in comparing and the baseline. outperforms in Tab. <ref> on all metrics but 𝐓𝐃 𝐅.
Test divergence models equity well. Finally, we recall test divergence is the key link between existing learning theoretic work and our analysis of equitable dialogue. In particular, we show, theoretically speaking, that 2𝐓𝐃 always bounds the parity gap Δ, which measures equity. As a result, learning theory algorithms can implicitly learn to be fair in many cases. Indeed, empirical results in Tab. <ref> agree with this theoretical bound in every case, and further, suggest 𝐓𝐃 may be useful for ranking equity of algorithms, since 𝐓𝐃 is predictive of all improvements from to . Again, our theoretical predictions match our empirical observations, highlighting the practical utility of our theory.
§ CONCLUSIONS
In this paper, we provide a first in-depth study of equity in dialogue, formalizing mathematical notions of equity in dialogue and using computational learning theory to study how equity can be achieved through algorithm design.
Our empirical results show how our formal theoretical study of equity in dialogue can be used, with great benefit, to select and design algorithms in a task-oriented dialogue setting. In particular, we can: design algorithms that achieve both equity and human-likeness, predict unexpected consequences of data-augmentation, and provide proxy statistics that are useful in ranking the equity of algorithms. To promote further research, our code, data, and a python package will be made publicly available.[https://github.com/anthonysicilia/equitable-dialogue-ACL2023 https://github.com/anthonysicilia/equitable-dialogue-ACL2023]
§ ACKNOWLEDGEMENTS
The authors thank Amazon for their support during this project.
§ LIMITATIONS
While our theoretical work is broadly applicable to any protected attribute and any dialogue task, our empirical study has primarily tested gender bias on the GuessWhat?! task. Continued experimental study on a wider range of protected attributes and tasks can better support our mathematical findings. Also, users of our theory should verify the assumptions of our theory when using it to draw insights on new datasets. Specifically, as the type of data bias changes, it is possible the assumptions of Thm. <ref> may no longer be met. Users of our theory should take care in ensuring context-awareness and context-preservation, for example, are reasonable assumptions on new data, prior to applying the insights of <ref>. Lastly, while all of our gender annotations come from human annotators, only a smaller subset come from annotators primed to judge correctness/equity of gender reference. So, more in-depth human evaluation can better support our theoretical results as well.
§ ETHICS STATEMENT
The goal of this paper is to present a theoretically grounded framework to mitigate bias in dialogue systems. Our theoretical and empirical techniques can lead to important insights/solutions for algorithm design that reduce bias, along with any unintended harm associated with this bias. With this said, some of the proposed algorithms rely on pretrained models such as word or image embeddings, and any harm or bias associated with these models can still be present after efforts to mitigate. Thus, models trained with these techniques should still undergo rigorous human evaluation for presence of biases before being deployed.
Our human subject board approved our protocol. Human subjects participated voluntarily and were compensated according to the regulations approved by our human subject review board.
acl_natbib
§ PROOFS AND ADDITIONAL TECHNICAL DISCUSSION
§.§ Proof of Thm. <ref>
Consider an equitable goal 𝔾 and let h ≡ s (the scoring function). Then, Δ(𝔾̂_θ) ≤ϵ whenever 𝐓𝐃_𝔾(θ) ≤ϵ / 2.
Suppose 𝐓𝐃_𝔾(θ) ≤ϵ, then we have
ϵ ≥𝐄[ | s(D, A) - s(D̂, A)|]
= ∑_a ∈𝒜𝐏𝐫(A=a) ·𝐄[ | s(D, A) - s(D̂, A) || A=a] (Law of Total Expectation)
= 1/2∑_a ∈𝒜𝐄[ | s(D, A) - s(D̂, A) || A=a] (Balance of 𝔾)
≥1/2∑_a ∈𝒜|𝐄[ s(D, A) - s(D̂, A) | A=a] | (Jensen's Inequality)
Now, since 𝔾 is equitable we have there is some value x such that for all a ∈𝒜, we have 𝐄[s(D, A) | A=a] = x. Substituting and expanding the sum over 𝒜, we have
∑_a ∈𝒜|𝐄[ s(D, A) - s(D̂, A) | A=a] | = | x - 𝐄[s(D̂, 0)] | + | x - 𝐄[s(D̂, 1)] |.
Next, we put together the previous two equations and utilize the definition of the absolute value to break the proof into cases. For ease of presentation, we let
μ = min{𝐄[s(D̂, 0)], 𝐄[s(D̂, 1)] } and M = max{𝐄[s(D̂, 0)], 𝐄[s(D̂, 1)]}.
This gives
2ϵ≥𝐄[s(D̂, 0)] - x + 𝐄[s(D̂, 1)] - x if μ≥ x,
x - 𝐄[s(D̂, 0)] + x - 𝐄[s(D̂, 1)] if M ≤ x,
𝐄[s(D̂, 0)] - x + x - 𝐄[s(D̂, 1)] if 𝐄[s(D̂, 0)] ≥ x ≥𝐄[s(D̂, 1)],
x - 𝐄[s(D̂, 0)] + 𝐄[s(D̂, 1)] - x if 𝐄[s(D̂, 1)] ≥ x ≥𝐄[s(D̂, 0)].
In the last two cases, the occurrences of x cancel out and we have precisely 2ϵ≥Δ(𝔾̂). Then, in the first case, we have
𝐄[s(D̂, 0)] - x + 𝐄[s(D̂, 1)] - x ≥𝐄[s(D̂, 0)] - μ + 𝐄[s(D̂, 1)] - μ = M - μ.
In the second case, we also have
x - 𝐄[s(D̂, 0)] + x - 𝐄[s(D̂, 1)] ≥ M - 𝐄[s(D̂, 0)] + M - 𝐄[s(D̂, 1)] = M - μ.
Thus, in all cases, we have 2ϵ≥Δ(𝔾̂), the desired result.
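The inequality can also be sanity-checked numerically. The sketch below uses synthetic Bernoulli scores with a balanced, equitable goal (Pr(A=0)=Pr(A=1)=1/2 and the same human score x in both groups); the particular numbers are hypothetical.

# E|X - Y| for independent Bernoulli(p) and Bernoulli(q) is p(1-q) + q(1-p).
def e_abs_diff(p, q):
    return p * (1 - q) + q * (1 - p)

x = 0.8                          # equitable human score, identical in both groups
model = {0: 0.55, 1: 0.90}       # hypothetical model score per group

td = 0.5 * (e_abs_diff(x, model[0]) + e_abs_diff(x, model[1]))  # balance of the goal
delta = abs(model[0] - model[1])                                # parity gap
print(delta, 2 * td, delta <= 2 * td)  # ~0.35  ~0.73  True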
§.§ Proof of Thm. <ref>
§.§.§ Proof
Consider an equitable goal 𝔾 with associated test h. Suppose a sample of i.i.d. human data is collected 𝕊 = (C̃_i,D̃_i)_i=1^m; (C̃_i, D̃_i) ∼ℍ. Suppose ℍ is context aware and preserves context. Then, for all δ > 0, with probability at least 1-δ, for all θ, 2β×𝐓𝐃_𝔾(θ) is bounded above by
1/m∑_i=1^m |h(D̃_i, Ã_i)_human - h(D̂'_i, Ã_i)_predicted| + √((log|Θ| + ln(2/δ)) / (2m))_data efficiency
where β = min_a 𝐏𝐫(Ã = a), D̂'_i ∼ℙ_θ(C̃). As noted in the main text we also pose the requirement of pairwise independence: first, between D, D̂, and A in the definition of 𝐓𝐃_𝔾 (conditional to C); second, between D̃_i, D̂'_i, and Ã_i (again, conditional to the context C̃_i).
First, we enumerate some of the key assumptions for easy reference:
* (A1): ℍ is context aware
* (A2): ℍ is context preserving
* (A3): D, D̂, A are independent conditional to C; and, D̃_i, D̂'_i, Ã_i are independent conditional C̃_i
* (A4):[Here, we are using the same shorthand from the main text; e.g., in Def. <ref>.] 𝐏𝐫(D̂ | C) = 𝐏𝐫(D̂^' | C̃) since both probabilities represent identical sampling from ℙ_θ
* (A5): 𝐏𝐫(A | C) = 𝐏𝐫(Ã | C̃) since both probabilities represent identical sampling from 𝔸
Now, we consider decomposing the joint probability density 𝐏𝐫(D=d, D̂=d̂, A=a), which importantly, is the joint density used to compute the expectation in 𝐓𝐃_𝔾(θ).[We ignore U since it is unused in this paper. The proof would be more complicated, but similar had we included U.] To begin, we have
𝐏𝐫(D=d, D̂=d̂, A=a) = ∑_c𝐏𝐫(C=c) 𝐏𝐫(D=d, D̂=d̂, A=a | C = c) (Law of Total Exp.)
= ∑_c𝐏𝐫(C=c) 𝐏𝐫(D=d | C = c)𝐏𝐫(D̂=d̂| C = c)𝐏𝐫(A=a | C = c) (A3)
= ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D=d | C = c)𝐏𝐫(D̂=d̂| C = c)𝐏𝐫(A=a | C = c) (×1 trick)
= ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d |C̃ = c)𝐏𝐫(D̂=d̂| C = c)𝐏𝐫(A=a | C = c) (A1)
= ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d |C̃ = c)𝐏𝐫(D̂^'=d̂|C̃ = c)𝐏𝐫(A=a | C = c) (A4)
= ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d |C̃ = c)𝐏𝐫(D̂^'=d̂|C̃ = c)𝐏𝐫(Ã=a |C̃ = c) (A5)
= ∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a |C̃=c) (A3)
Further, we can relate the probability distributions for the contexts C and C̃ through their implied attribute distributions via (A2)
𝐏𝐫(C=c) = ∑_a 𝐏𝐫(C = c | A = a) 𝐏𝐫(A = a) (Law of Total Exp.)
= ∑_a 𝐏𝐫(C̃ = c |Ã = a) 𝐏𝐫(A = a) (A2)
= ∑_a 𝐏𝐫(C̃ = c |Ã = a) 𝐏𝐫(Ã = a) ·𝐏𝐫(A = a)/𝐏𝐫(Ã = a) (×1 trick)
≤∑_a 𝐏𝐫(C̃ = c |Ã = a) 𝐏𝐫(Ã = a) ·1/(2β) (balance of 𝔾 and def. of β)
= 1/(2β)·𝐏𝐫(C̃=c)
Applying this to our previous outcome, we have
∑_c𝐏𝐫(C=c)/𝐏𝐫(C̃=c)·𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a |C̃=c)
≤∑_c1/(2β)·𝐏𝐫(C̃=c) 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a |C̃=c)
= 1/(2β)·𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a) (Law of Total Exp.).
Notice, the new joint density 𝐏𝐫(D̃=d, D̂^'=d̂, Ã=a) can be used to compute the expectation in 𝐓𝐃_ℍ, while the previous joint density was used to compute the expectation in 𝐓𝐃_𝔾. Both expectations have everywhere non-negative variables. So, ultimately, the relation between the joint densities gives:
𝐓𝐃_𝔾(θ) ≤1/(2β)·𝐓𝐃_ℍ(θ)
To complete the proof, we need to bound the true test divergence on the human data 𝐓𝐃_ℍ(θ) with our observation 𝐓𝐃_𝕊(θ). To do so, without using a test set, we need to apply a PAC learning bound for parameters selected from a finite hypothesis space (i.e., so that the result holds for any θ learned from Θ). We choose the structural risk minimization bound presented in <cit.> – i.e., Thm. 7.7 – and apply it to our context,[To apply the theorem, we define the prefix free description language for Θ by simply enumerating each parameter in Θ (arbitrary order) and then mapping each parameter to the binary expansion of its assigned numeral. The loss needs to be replaced with the test divergence as well, but with this replacement, the required uniform convergence property for each individual parameter is still given by Hoeffding’s Inequality, so the proof as a whole is unchanged beyond this simple substitution.] which gives the final result.
§.§.§ Remarks on Data Efficiency
Note, the last step of the proof can be applied directly to 𝐓𝐃_𝔾(θ) as well, or any other instance of the test divergence for that matter. In the main text, when we refer to the data-efficiency of augmentation strategies, it is important to note that these augmentation strategies can change the distribution over which we compute test divergence. Although this distribution and the resulting test divergence may change, the data-efficiency term will be affected equally.[Some strategies for measuring data-efficiency depend on the data – our comment excludes these.] For example, consider downsampling – a simple augmentation strategy used in the experiments. In this case, if one downsamples to achieve balance in the frequency of the protected attribute, the data efficiency term would change from √((log|Θ| + ln(2/δ)) / (2m)) to √((log|Θ| + ln(2/δ)) / (2α m)), where α is the fraction of data remaining after downsampling. In an ideal case, where there is only one protected attribute to consider during re-balancing, we have α = 2β and the data efficiency is reduced by a factor of 1/√(2β), compared to no augmentation. The reader may notice based algorithms also experience a reduction in data-efficiency by the slightly larger factor of 1/(2β) applied to the whole bound; i.e., see Eq. (<ref>). With this said, the reason we allude to worse data-efficiency overall for augmentation strategies is that these strategies typically also re-use data to define the augmentation; e.g., in the mentioned case, where one downsamples for balance, an additional data-efficiency term must be added to the bound to measure the impact of estimating β from training data prior to conducting the downsampling.[If this added term is γ times the original data-efficiency, the inflation in Eq. (<ref>) actually becomes smaller than the inflation caused by data augmentation, whenever β > 1/(2γ^2).] Additional reduction can also be induced from imperfect estimation of β, and furthermore, when there is more than one protected attribute to consider. In the latter case, we may need to reduce the effective dataset size α m further to simulate balance (as in the later experiments; see Appendix <ref>). Thus, depending on the problem, these compounding effects can easily lead to reduced efficiency overall; i.e., compared to basic application of based algorithms without augmentation on the whole dataset. Due to the complexity of this comparison, which is dependent on augmentation strategies, estimation error, etc., we leave formal comparison to future work and simply conjecture on the potential for worse data-efficiency of data augmentation strategies in the main text. Albeit, this hypothesis is confirmed in experiments throughout Section <ref>, and it should be noted our main argument here is that the data-efficiency of augmentation strategies needs to be considered, where it has previously not been in most literature.
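A small numerical illustration of this discussion, with hypothetical values for m, |Θ|, and β, is sketched below; it only evaluates the data-efficiency term before and after balancing.

import math

def data_efficiency(m, theta_size, delta=0.05):
    return math.sqrt((math.log(theta_size) + math.log(2 / delta)) / (2 * m))

m, theta_size = 100_000, 10**6
beta = 0.15                 # hypothetical frequency of the rarer protected attribute
alpha = 2 * beta            # fraction of data kept in the ideal balancing case
print(data_efficiency(m, theta_size))          # term with the full i.i.d. sample
print(data_efficiency(alpha * m, theta_size))  # inflated term after downsampling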
§.§.§ Assumption of Pairwise Independence
As mentioned in the main text, the assumption of pairwise independence is not an overly strong assumption. Conditional to the context C, pairwise independence stipulates realizations of the random values D, D̂, and A do not provide additional information about each other once we know C=c. For example, in GuessWhat?!, knowing the gender does not impact our expectation of the QA pairs, once the image is already known. Alternatively, knowing predicted QAs does not change our expectation about human QAs, after the image is known. The latter is not so intuitive, but independence of predictions on (test) outcomes and the outcomes themselves is common among many simple learning models (e.g., fixed effects linear regression) since the learned parameters are only dependent on the i.i.d. training outcomes.
§.§ Labeling Scheme
As noted, the labeling scheme for the protected attribute studied in the main text allows us to satisfy some of the key assumptions (on the human data) stipulated by Thm. <ref>: context awareness (Def. <ref>) and context preservation (Def. <ref>). To see this, we show that there exists an equitable goal according to score parity with scoring function defined in Eq. (<ref>), and importantly, that this equitable goal is related to the human data as specified by Defs. <ref> and <ref>. In turn, the existence of such an equitable goal implies that the human data and scoring function we study in the experiments does indeed satisfy Def. <ref> and Def. <ref>.
Construction of Goal To begin, consider some random variables (D, C, A) with the below constraints, and let (D̃, C̃, Ã) correspond to random variables for the human data as before. These will be used to construct the equitable goal we have just previously discussed:
𝐏𝐫(D = d | C = c) = 𝐏𝐫(D̃ = d |C̃ = c),
𝐏𝐫(C = c | A = a) = 𝐏𝐫(C̃ = c |Ã = a),
𝐏𝐫(A = 0) = 𝐏𝐫(A = 1).
Now, also assume D is independent of A given C (that is, A3 in Thm. <ref>), so we can decompose the joint distribution of (D, C, A) according to our constraints:
𝐏𝐫(D=d, C=c, A=a) = 𝐏𝐫(D=d, C=c | A=a) 𝐏𝐫(A=a)
= 𝐏𝐫(D=d | C=c, A=a) 𝐏𝐫(C=c | A=a) 𝐏𝐫(A=a)
= 𝐏𝐫(D=d | C=c) 𝐏𝐫(C=c | A=a) 𝐏𝐫(A=a) (cond. indep. constraint A3)
= 𝐏𝐫(D̃=d |C̃=c) 𝐏𝐫(C̃=c |Ã=a) 𝐏𝐫(A=a) (Eq. <ref> constraints)
Next, we verify there are distributions with this joint density with total probability summing to 1. To do this, we re-use the above expansion to arrive at:
∑_d,c,a𝐏𝐫(D=d, C=c, A=a)
= ∑_d,c,a𝐏𝐫(D̃=d |C̃=c) 𝐏𝐫(C̃=c |Ã=a) 𝐏𝐫(A=a)
= 1/2∑_d,c,a𝐏𝐫(D̃=d |C̃=c) 𝐏𝐫(C̃=c |Ã=a) (assumed constraint on A)
:= 1/2 [ x(1) + x(0) ] (use x(a) as a shorthand for the sum over d,c)
Simultaneously, since (D̃, C̃, Ã) already correspond to a distribution, we can use similar logic (i.e., LTE and conditional independence) to expand the sum over this distribution's joint density. In doing so, we must have
1 = 𝐏𝐫(Ã = 0) · x(0) + 𝐏𝐫(Ã = 1) · x(1) := a × x(1) + b × x(0) (defining shorthand).
So, the density in Eq. (<ref>) has total probability summing to 1 if there is a solution with a,b ∈ [0,1] and a + b = 1 to the following system:
1 = 1/2 [ x(1) + x(0) ]
1 = a × x(1) + b × x(0).
If a ≠ b ≠ 1/2, there are solutions a,b ∈ [0,1] with a+b=1 as long as x(1) = x(0), which is indeed true, since due to (A3) x(a) can be re-written as a conditional joint probability over D̃ and C̃. So, x(1) = x(0) = 1. Note, the other axioms of probabilities follow directly because the constraints only restrict the probabilities for (D,C,A) to existing (known) probability functions. Thus, we know a distribution satisfying the needed constraints in Eq. (<ref>) exists. Specifically, a distribution related to the human data as specified by Defs. <ref> and <ref> exists, and we have shown the desired result.
Equity of Goal Finally, it remains to see how the distribution corresponding to (D,C,A) is equitable. Score parity follows easily by definition of à = v(D̃). In particular, the test divergence on the human data is 0, so Eq. (<ref>) implies the test divergence on the distribution of (D,C,A) is 0, and so Thm. <ref> implies the parity gap for the distribution of (D,C,A) is 0. Balance of the distribution of (D,C,A) also follows easily from the final constraint in Eq. (<ref>), and so we are done.
§.§ Downsampling
The downsampling process for the algorithm restricts to images which are determined to have either of the protected attributes — i.e., a=1 when M is the protected attribute or a=1 when F is the protected attribute — such that there are an equal number of occurrences of a=1 for both protected attributes. That is, in the end result, the new training dataset has an equal number of occurrences where annotator consensus identified a male or a female, and all other images are thrown out. This is achieved through a simple randomized filtering approach. As noted, images without a=1 for either protected attribute are also thrown out. This allows us to ensure we are training a (single) model that will be equitable on both protected attributes simultaneously,[If we include images without labels, we cannot be sure of equal occurrence of both attributes.] which is the primary goal in evaluation. Note, this strategy does not hurt the object identification accuracy either (as evidenced by empirical results). This may be for two reasons: first, other objects (besides persons) appear frequently enough in the downsampled dataset as to not effect performance; second, downsampling is only used in the cooperative learning phase, and object recognition ability is primarily learned in the pre-training phase. As alluded in our theoretical discussion, another consequence of this augmentation strategy is that the number of i.i.d. data points is greatly reduced in the cooperative learning phase (e.g., compared to the -based algorithm); i.e., we estimate less than 1/6th of the original dataset is used. Therefore, this indeed presents a good example to test our theoretical hypotheses on the impacts of data augmentation and data-inefficiency.
Downsampling to create the equitable distribution is done in a similar manner, except – since we don't need to worry about inefficiency in model training any longer – a separate dataset is created for each protected attribute. So, there is one dataset with balanced occurrences of a=1 and a=0 when the protected attribute is M, and another dataset with balanced occurrences when the attribute is F. Importantly, because labeling scheme enforces our assumptions about context hold in the human data (see Appendix <ref>), this should create an equitable goal.
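A minimal sketch of this randomized balancing step is given below; the record format (boolean consensus labels per game) is a hypothetical stand-in, and games with neither label are simply dropped, as described above.

import random

def downsample_balanced(games, seed=0):
    # games: dicts with boolean annotator-consensus labels "male" and "female".
    rng = random.Random(seed)
    male = [g for g in games if g["male"]]
    female = [g for g in games if g["female"]]
    n = min(len(male), len(female))            # equal occurrences of both attributes
    return rng.sample(male, n) + rng.sample(female, n)

games = [{"male": True, "female": False}, {"male": False, "female": True},
         {"male": True, "female": False}, {"male": False, "female": False}]
print(len(downsample_balanced(games)))  # 2: one male-labeled and one female-labeled game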
§.§ GuessWhat?! Game Rules and Statistics
Here, we introduce the GuessWhat?! visual dialogue game <cit.>. We use this game as a running example to ground abstract theoretical concepts in practical application. Importantly, our theoretical study is more generally applicable (i.e., beyond just this example). Statistics on object distribution and dialogue length are provided in Figure <ref>. After applying the labeling scheme and downsampling (as just described), our dataset consists of about 3200 (half with a=1) when F is the protected attribute and 6400 (half with a=1) when M is the protected attribute. Note, this also indicates that the ratio of M to F in the original dataset is about 2 to 1.
Gameplay An image and goal-object within the image are both randomly chosen.
A question-player with access to the image asks yes/no questions to an answer-player who has access to both the image and goal-object.
The question-player's goal is to identify the goal-object. The answer-player's goal is to reveal the goal-object to the question-player by answering the yes/no questions appropriately.
The question- and answer-player converse until the question-player is ready to make a guess or at most m questions have been asked.[By default, m=8 following <cit.>.] The question-player then guesses which object was the secret goal.
§.§ Cooperative Learning
Cooperative Learning generates questions Q̂_i and object guess Ô based on answer player answers A_i as below:
Ô = 𝙶𝚞𝚎𝚜_α(𝙴𝚗𝚌_β(I, D̂))
Q̂_i+1 = 𝚀𝙶𝚎𝚗_θ(𝙴𝚗𝚌_β(I, Q̂_1, A_1, …Q̂_i, A_i).
The neural-model 𝚀𝙶𝚎𝚗_θ is called the question-generator and the neural-model 𝙶𝚞𝚎𝚜_α is called the object-guesser. The final neural-model 𝙴𝚗𝚌_β is called the encoder and captures pertinent features for the former models to share.
All model parameters (α, β, θ) are first pre-trained on human-human dialogue and then the model-components are further updated through cooperative self-play <cit.>, in which the model-components and an automated answer-player play new games (machine-machine dialogue) to continue the learning process. The shared encoder is used to improve human-likeness of questions <cit.>.
Note, the change from Cooperative Learning (above) to Cooperative Learning with simply incorporates additional human data during training the above model, instead of using only machine-machine dialogue. See <cit.> for more details on both approaches to cooperative learning.
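For orientation, the following schematic shows how the three components interact during one self-play game; the component interfaces and the fixed automated answer-player are hypothetical stand-ins rather than the actual implementation.

def self_play_game(image, qgen, guesser, encoder, answerer, max_q=8):
    # One machine-machine game: questions are generated from the shared encoding,
    # answered by the automated answer-player, and a final guess is produced.
    history = []
    for _ in range(max_q):
        state = encoder(image, history)       # Enc_beta(I, Q1, A1, ..., Qi, Ai)
        question = qgen(state)                # QGen_theta
        answer = answerer(image, question)    # automated yes/no answer-player
        history.append((question, answer))
    guess = guesser(encoder(image, history))  # Gues_alpha over candidate objects
    return history, guess

The resulting (history, guess) pairs drive the cooperative update of (α, β, θ); the human-data variant described above simply mixes human-human games back into this update.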
|
http://arxiv.org/abs/2307.04141v1 | 20230709100539 | Gray-body factor and absorption of the Dirac field in ESTGB gravity | [
"Qian Li",
"Chen Ma",
"Yu Zhang",
"Zhi-Wen Lin",
"Peng-Fei Duan"
] | gr-qc | [
"gr-qc"
] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] Corresponding author Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
City College, Kunming University of Science and Technology, Kunming, Yunnan 650051, China.
The gray-body factor and the absorption cross section of the 4D ESTGB gravity with a mode of nonlinear electrodynamics for the massless Dirac field are studied in this paper. The magnetic charge value varies between -2^(5/3)/3 and 0 as well as the ADM mass is set to 1, which corresponds to a non-extreme black hole. The gray-body factor is obtained using the semi-analytic WKB method after solving the massless Dirac equation. When the absolute value of magnetic charge is increasing, the gray-body factor γ(ω) is decreasing. In addition, the partial absorption cross section and the total absorption cross section are calculated by using the partial wave method. We find that the maximum value of partial absorption cross section decreases as κ increases. And the existence of magnetic charge causes the diminishing of the total absorption cross section. Finally, we find that the absorption cross section of the Dirac field is more sensitive to electric charge than magnetic charge by comparing the absorption cross section of the Reissner-Nordström and ESTGB-NLED black holes.
Gray-body factor and absorption of the Dirac field in ESTGB gravity
Peng-Fei Duan
August 12, 2023
===================================================================
§ INTRODUCTION
In the decades after Einstein's general theory of relativity predicted the existence of black holes, research on the properties of black holes has gradually become a frontier topic in astrophysics. An important feature of black holes is that no information can escape from their event horizons. As a result, for decades the existence of black holes could only be inferred through indirect methods. Nevertheless, with the development of technology, the Event Horizon Telescope Collaboration has successfully captured the first image of a black hole in the center of the M87 galaxy <cit.>. This picture directly proves the existence of black holes in the universe. However, many phenomena cannot be explained by general relativity alone. These phenomena include, but are not limited to, the accelerated expansion of the universe <cit.>, the unification of gravity with the laws of quantum physics <cit.>, and the flatness of the rotation curves of spiral galaxies <cit.>. Accordingly, this leaves wide room for modified and extended theories of gravity, such as the so-called extended scalar-tensor-Gauss-Bonnet (ESTGB) theory <cit.>, which is a theory in four dimensions. Specifically, in ESTGB the scalar field is coupled with the Gauss-Bonnet invariant to avoid the Ostrogradsky instability <cit.>. Black hole solutions have been presented by solving the complicated field equations of ESTGB gravity without a matter field in four dimensions. In addition, these numerical solutions are also given in the context of different matter fields, for instance, a massive scalar <cit.>, the charged case <cit.>, dilatonic <cit.>, multi-scalar <cit.>, and a particular form of nonlinear electrodynamics <cit.>.
A black hole is not an isolated system and interacts with its surrounding environment. These interactions give rise to interesting phenomena such as radiation, absorption, and scattering. Therefore, we can study how black holes interact with their environment to obtain information about these objects. In addition, experiments exploring black holes rely largely on GW astronomy, shadow images, and X-ray spectroscopy. In a way, all three aspects depend on the effect of black holes on their environment. As is well known, accretion plays a non-negligible role in the phenomenology of active galactic nuclei <cit.>. Accretion of fundamental fields, i.e., the scalar, electromagnetic, and Dirac fields, is usually associated with the study of the absorption cross section. So it is necessary to study the absorption of waves and particles by black holes. Theorists began to study scattering-related problems in the 1960s. Moreover, the gray-body factor helps us understand the absorption and scattering of particles, and it is also an important ingredient of Hawking radiation. Hawking radiation <cit.>, proposed by Hawking in 1976, which depends on the gray-body factor and the black hole temperature, is of crucial importance when studying the black hole information paradox. That paradox may be the most difficult obstacle to a thorough understanding of quantum gravity. The gray-body factor is defined as the probability that an incident particle with frequency ω is absorbed by the black hole; it encodes valuable information about the near-horizon structure and related physics of the black hole. It is also used to measure the deviation of the radiation from that of an ideal, perfect black body <cit.>. Many authors <cit.> have also studied the gray-body factors of various black holes with different methods.
A plethora of methods have been proposed to calculate the gray-body factor with different accuracies, including the new cancellation between contributions to the wave function for different spin particles <cit.>, the exact numerical method <cit.>, rigorous bounds for the gray-body factor <cit.>, the WKB method <cit.>, etc. The WKB approximation stands out among the above-mentioned calculation methods because of its versatility and flexibility. Blome and Mashhoon <cit.> proposed the first simple semi-analytic formula to calculate the quasinormal frequency by matching the effective potential with the inverse Pöschl-Teller potential. However, this formula fails to improve the accuracy for the lower multipole numbers. A year later, Schutz and Will <cit.> calculated the quasinormal modes using the WKB approximation based on Mashhoon's formula. The method matches the WKB solution with a Taylor expansion across the two turning points. Subsequently, Iyer and Will <cit.> introduced the third-order formula of the WKB method, which improved the accuracy to about one percent over that of Schutz and Will. Moreover, Konoplya <cit.> and Matyjasek et al. <cit.> proposed higher WKB order terms. The WKB approximation can be used to calculate not only the gray-body factor, but also the quasinormal modes. The quasinormal modes <cit.>, with their complex frequencies, represent the response of black holes to external perturbations such as massless scalar fields, neutrino fields, gravitational fields, electromagnetic fields, Dirac fields, etc.
In the 1970s, Hawking discovered that the evaporation rate of a black hole is directly proportional to its absorption cross section <cit.>. Subsequently, a wealth of important research on the absorption and scattering of plane waves by black holes was carried out in the 1970s and 1980s. For instance, Sanchez <cit.> indicated that the absorption cross section of a Schwarzschild black hole for the massless scalar field is oscillatory around the geometric-optics limit (27/4)π r_s^2, and Unruh <cit.> studied the absorption of the massive scalar field. Besides, Crispino <cit.> presented the absorption of electromagnetic waves in the Schwarzschild spacetime for arbitrary frequencies, and Jung <cit.> studied the absorption of the massive scalar field in the Reissner-Nordström spacetime. Next, the result for the absorption of electromagnetic waves was obtained in Ref. <cit.>. In addition, the absorption of the massless scalar field by the Kerr spacetime was investigated in Ref. <cit.>, and the absorption of electromagnetic waves was analyzed in Ref. <cit.>. Liao, Chen et al. <cit.> analyzed the absorption and Hawking radiation of electromagnetic waves with Weyl correction in 4D black hole spacetime. Many authors <cit.> have also studied the absorption and scattering cross sections of various black holes. In this paper we will study the gray-body factor and absorption cross section of the black hole in ESTGB gravity with a particular form of nonlinear electrodynamics in four dimensions.
This paper is organized as follows. The second section outlines the basic information of the four-dimensional (4D) extended scalar-tensor-Gauss-Bonnet theory (ESTGB) coupled with a special form of nonlinear electrodynamics, and the settings of related parameters are also given. In the third section, the massless Dirac equation is reduced to master wave equations and the effective potential is analyzed. Next, the gray-body factor is calculated using the WKB method in the fourth section. The fifth section presents the expression of the absorption cross section of the Dirac field and the corresponding results are also given. Summary and conclusions are presented in the last section.
§ THE BLACK HOLE SOLUTION IN THE EXTENDED SCALAR-TENSOR-GAUSS-BONNET GRAVITY
Without loss of generality, we adopt natural units in this paper, namely c = G = ħ = 1. The 4D extended scalar-tensor-Gauss-Bonnet theory coupled with a particular form of nonlinear electrodynamics (ESTGB-NLED) <cit.> is defined as follows,
S = ∫ d^4x √(-g){1/16π(R - 1/2∂_μϕ∂^μϕ + f(ϕ) R_GB^2 - 2 U(ϕ)) - 1/4πℒ_matter}.
where R is the Ricci scalar, ϕ is the scalar field, f(ϕ) is a coupling function that depends only on ϕ, R__GB^2 is the Gauss-Bonnet term and U (ϕ) means the scalar field potential. The first term is the Einstein-Hilbert Lagrangian density. The Lagrangian density ℒ_ matter represents any matter field. Assuming that the metric is static, then we have the following spherically symmetric form,
ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dΩ^2,
with
f(r)=1-2m/r-q^3/r^3,
where m represents the ADM mass and q indicates the magnetic charge. Supposing that both the effective energy-momentum tensor and the corresponding NLED energy-momentum tensor satisfy the weak energy condition, we obtain m>0 and q<0. The metric is similar to that of the Reissner-Nordström black hole, which can have two horizons, one horizon, or none. Moreover, the Schwarzschild black hole is recovered when q is set to 0. Without loss of generality, we consider a non-extreme black hole <cit.>, satisfying 0 > q/m > -2^(5/3)/3. If we set the ADM mass m=1, then the magnetic charge satisfies -2^(5/3)/3<q<0.
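As a quick numerical check of this horizon structure, the sketch below locates the roots of f(r)=0 for a representative non-extreme value of the magnetic charge (q=-0.8, one of the values used later).

import numpy as np

def f(r, m=1.0, q=-0.8):
    return 1.0 - 2.0 * m / r - q**3 / r**3

r = np.linspace(0.05, 6.0, 200001)
crossings = np.where(np.diff(np.sign(f(r))) != 0)[0]
print([round(float(r[i]), 3) for i in crossings])  # two roots: inner and outer horizons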
For later comparisons, it is beneficial to explicitly mention the Reissner-Nordström black hole solution. The metric of Reissner-Nordström black hole is given by
ds^2=-(1-2M/r+Q^2/r^2)dt^2+(1-2M/r+Q^2/r^2)^-1dr^2+r^2dΩ^2,
where Q is the electric charge of the black hole and M is the mass of black hole.
§ MASTER WAVE EQUATION IN DIRAC FIELD
In this section, we reduce the massless Dirac equation in the black hole spacetime to a series of Schrödinger-like radial equations, and analyze the properties of the effective potential. The massless Dirac equation in black hole spacetime has the following form according to Ref. <cit.>,
γ^α e_α^μ( ∂_μ + Γ_μ) Ψ= 0,
where γ^α mean the Dirac matrices, defined as follows,
γ^0=(
[ -i 0; 0 i ]),
γ^i=(
[ 0 -iσ^i; iσ^i 0 ]),
with σ_i representing the pauli matrix, for any i ∈{1,2,3}, respectively.
Moreover, e_α^μ is the inverse of the tetrad e_μ^α, of which the particular form is defined by the metric g_μν,
g_μν=η_ab e_μ^a e^b_ν,
where η_ab is the Minkowski metric, and η_ab=diag(-1,1,1,1). Additionally, Γ_μ is the spin connection defined by
Γ_μ=1/8[γ^a,γ^b] e_a^ν e_bν;μ,
with e_bν;μ=∂_μ e_bν-Γ_μν^α e_bα being the covariant derivative of e_bν, where Γ_μν^α is the Christoffel symbol.
In the spacetime of a static and spherical black hole, e_μ^a can be expressed as
e_μ^a=diag(√(f),1/√(f),r,rsinθ).
Hence, the components of Γ_μ can be obtained by substituting equation (<ref>) into equation (<ref>) as follows,
Γ_0=1/4 f^'γ^1γ^0,Γ_1=0,Γ_2=1/2√(f)γ^1γ^2,Γ_3=1/2(sinθ√(f)γ^1γ^3+cosθγ^2γ^3).
Further substituting the above expressions into the Dirac equation (<ref>), the Dirac equation becomes
γ^0/√(f)∂Ψ/∂t+√(f)γ^1(∂/∂r+1/r+1/(4f) df/dr)Ψ+γ^2/r(∂/∂θ+1/2cotθ)Ψ+γ^3/(rsinθ)∂Ψ/∂ψ= 0.
Furthermore, we can transform equation (<ref>) into the following equation
γ^0/√(f)∂Φ/∂t+√(f)γ^1(∂/∂r+1/r)Φ+γ^2/r(∂/∂θ+1/2cotθ)Φ+γ^3/(rsinθ)∂Φ/∂ϕ= 0,
by defining a tortoise coordinate change as
r_⋆=∫dr/f,
introducing the ansatz as
Φ=(
[ i G^±(r)/rϕ_jm^±(θ,φ); F^±(r)/rϕ_jm^∓(θ,φ) ]) e^-iwt,
and defining spinor angular harmonics as
ϕ_jm^+=([ √((l+1/2+m)/(2l+1)) Y_l^m-1/2; √((l+1/2-m)/(2l+1)) Y_l^m+1/2 ]), (j=l+1/2),
ϕ_jm^-=([ √((l+1/2-m)/(2l+1)) Y_l^m-1/2; -√((l+1/2+m)/(2l+1)) Y_l^m+1/2 ]), (j=l-1/2).
As a result, equation (<ref>) can be rewritten as
([ 0 -ω; ω 0 ])([ F^±; G^± ]) - ∂/∂r_⋆([ F^±; G^± ]) + √(f)([ k_±/r 0; 0 -k_±/r ])([ F^±; G^± ]) = 0,
where the different cases for (+) and (-) in the function F^± and G^± are given by <cit.>
d^2F/dr_⋆^2+(ω^2-V_1)F= 0,
d^2G/dr_⋆^2+(ω^2-V_2)G= 0,
with
V_1 = √(f)|κ|/r^2(|κ|√(f)+r/2df/dr-f),(for κ=j+1/2, and j=l+1/2),
V_2 = √(f)|κ|/r^2(|κ|√(f)-r/2df/dr+f),(for κ=-(j+1/2), and j=l-1/2).
It is worth noting that the potentials V_1 and V_2 are super-symmetric partners <cit.>, and they are derived from the same super-potential. It is well established that the potentials V_1 and V_2 related in this way have the same spectra. Therefore, we only need to consider the effective potential V_1 in calculating the gray-body factor and the absorption cross section for the massless Dirac field by the WKB approximation. As a result, the equation (<ref>) can be written as
d^2ψ/dr_⋆^2+(ω^2-V_eff)ψ=0.
Note that Eq. (<ref>) is a Schrödinger-like equation with an effective potential V_eff. The effective potential is depicted in Fig.<ref> with κ treated as a variable and q fixed at -0.8. We can observe from Fig.<ref> that the height of the effective potential barrier becomes larger as κ is increased. Furthermore, the location of the peak moves towards the right with increasing κ. In addition, we compare the effective potential V_eff in Fig.<ref>(a) with κ = 5 under four scenarios, i.e., q=-0.4, q=-0.8, q=-1.0 and q=-2^5/3/3, respectively. It can be seen that, when the absolute value of the magnetic charge is increased, the height of the effective potential barrier increases, and as r grows the potential then diminishes and converges to almost the same value in all cases. Because this metric is similar to the Reissner-Nordström spacetime, we compare the change of the effective potential for the two black holes when κ is set to 5. It is obvious in Fig.<ref>(b) that the maximum value of the effective potential of the Reissner-Nordström spacetime increases faster with electric charge than that of the ESTGB-NLED spacetime does with magnetic charge. This means that the effective potential is more sensitive to the electric charge. Finally, it is worth noting that the effective potential has the form of a single-peak positive definite potential barrier: it tends to zero as r→∞ and vanishes at the event horizon r_+.
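The qualitative behaviour described above can be reproduced directly from the closed-form potential. The sketch below evaluates V_1 outside the outer horizon for q=-0.8 and prints the barrier peak for several κ; it is an illustrative numerical check rather than the code used for the figures.

import numpy as np

def f(r, m=1.0, q=-0.8):
    return 1.0 - 2.0 * m / r - q**3 / r**3

def df_dr(r, m=1.0, q=-0.8):
    return 2.0 * m / r**2 + 3.0 * q**3 / r**4

def v_eff(r, kappa, m=1.0, q=-0.8):
    fr = f(r, m, q)
    return np.sqrt(fr) * abs(kappa) / r**2 * (abs(kappa) * np.sqrt(fr)
                                              + 0.5 * r * df_dr(r, m, q) - fr)

r = np.linspace(1.9, 20.0, 20000)   # region outside the outer horizon (~1.85 for q=-0.8)
for kappa in (1, 2, 3, 4, 5):
    print(kappa, round(float(np.max(v_eff(r, kappa))), 4))  # peak grows with kappa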
§ GRAY-BODY FACTOR
In this section, we are going to discuss the grey-body factor for the Dirac field in 4D ESTGB gravity with a nonlinear electrodynamics, i.e., we calculate the reflection probability and transmission probability. The discussion bases on the six-order WKB method for the different value of the magnetic charge.
Hawking <cit.> predicted that a black hole emits particles almost like a black body, with a temperature proportional to its surface gravity. Black holes are therefore thermal systems that have an associated temperature and entropy, and they produce radiation, consistent with the laws of thermodynamics, once quantum effects are taken into account <cit.>. Hawking proposed the following expression for the emission rate of a black hole in a mode with frequency ω at the event horizon,
Γ(ω)=1/(e^βω± 1) d^3 k/(2π)^3,
where β is the inverse of the black hole temperature, i.e., 1/T_BH, and the plus and minus signs correspond to the emission of fermions and bosons, respectively. However, the emission rate measured by an observer located far away is affected by the geometry outside the event horizon. That is to say, the geometry outside the event horizon serves as a potential barrier for the radiation emitted by the black hole. The strong gravitational potential near the event horizon scatters part of the radiated particles back into the black hole, that is, part of the radiation is reflected back to the black hole. The other part of the particles passes through the gravitational potential due to the quantum tunneling effect, flies to infinity, and is measured by the remote observer. The radiation that reaches the remote observer through the potential barrier therefore no longer has a black-body form. Hence we can rewrite the expression for the emission rate recorded by the remote observer at frequency ω as,
Γ(ω)=γ(ω)/(e^βω± 1) d^3 k/(2π)^3,
where γ(ω) is the frequency-dependent gray-body factor.
The gray-body factor is defined as,
γ(ω)=|𝒯_ω l|^2.
The solution of the second-order differential equation (<ref>), with incoming and reflected waves at infinity and purely incoming waves at the event horizon, has the following boundary conditions:
ψ(r_⋆)∼ℐ_ω l e^-iω r_⋆ + ℛ_ω l e^iω r_⋆,  r_⋆→+∞,
ψ(r_⋆)∼𝒯_ω l e^-iω r_⋆,  r_⋆→-∞.
where ℛ_ω l and 𝒯_ω l are the reflection and transmission coefficients, respectively. Due to the conservation of flux,
ℛ_ω l and 𝒯_ω l satisfy the following relationship
|ℛ_ω l|^2+|𝒯_ω l|^2=|ℐ_ω l|^2.
The phase shift δ_l can be expressed by
e^-2 iδ_l=(-1)^l+1ℛ_ω l/ ℐ_ω l.
Now, we discuss the use of the WKB method to determine the gray-body factor <cit.>. The gray-body factor relies on the relation between
ω and V_m, where V_m is the peak value of the effective potential V(r). There are three cases to consider: ω^2≫ V_m, ω^2≈ V_m and ω^2≪ V_m. When ω^2≫ V_m, i.e., when the wave with frequency ω is well above the height of the potential barrier, the wave is essentially not reflected back to the black hole by the barrier. So the transmission probability 𝒯_ω l is almost equal to one and the reflection probability ℛ_ω l is close to zero; almost all of the radiation passes the potential and flies to infinity under this condition. When ω^2≪ V_m, the transmission probability 𝒯_ω l is almost equal to zero and the reflection probability ℛ_ω l is close to one. This means that almost all of the radiation is reflected back to the black hole by the potential. When ω^2≈ V_m, we compute the gray-body factor with the WKB approximation, which attains its highest accuracy in this regime.
Under the WKB approximation, with the incident amplitude ℐ_ω l set equal to one, the reflection coefficient can be expressed as
ℛ_ω l=(1+e^-2π i α)^-1/2,
𝒯_ω l=√(1-|(1+e^-2π i α)^-1/2|^2).
where α is defined by
α=i(ω^2-V_0)/√(-2V_0^(2))-Λ_2-Λ_3-Λ_4-Λ_5-Λ_6.
where V_0 is the peak value of the effective potential V(r) at r=r_0. Then,
Λ_2=1/√(-2V^(2)_0)[1/8(V^(4)_0/V^(2)_0)(b^2+1/4)-1/288(V^(3)_0/V^(2)_0)^2(7+60b^2)]
Λ_3=n+1/2/-2V^(2)_0[5/6912(V^(3)_0/V^(2)_0)^4(77+188b^2)-1/384((V^(3)_0)^2V^(4)_0/(V^(2)_0)^3)(51+100b^2)
+1/2304(V^(4)_0/V^(2)_0)^2(67+68b^2)-1/288(V^(6)_0/V^(2)_0)(5+4b^2)+1/288(V^(3)_0V^(5)_0/(V^(2)_0)^2)(19+28b^2)].
In Eqs. (<ref>) and (<ref>), the superscripts (2,3,4,5,6) denote differentiation with respect to the tortoise coordinate r_⋆, and b=n+1/2. Since the expressions for Λ_4, Λ_5 and Λ_6 found by <cit.> are overly cumbersome, we do not reproduce them here.
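A leading-order version of this prescription is easy to sketch: keeping only the first term of α and dropping all Λ corrections, the gray-body factor follows from the reflection coefficient above. The barrier height V_0 and curvature V_0^(2) below are hypothetical placeholders (they would normally be read off the effective potential).

import numpy as np

def greybody(omega, v0, d2v0):
    # Leading-order WKB: alpha = i(omega^2 - V0)/sqrt(-2 V0''), all Lambda terms dropped.
    alpha = 1j * (omega**2 - v0) / np.sqrt(-2.0 * d2v0 + 0j)
    refl = (1.0 + np.exp(-2j * np.pi * alpha)) ** -0.5
    return 1.0 - abs(refl) ** 2          # gamma(omega) = |T|^2

v0, d2v0 = 0.60, -0.25                   # hypothetical barrier peak and curvature
for omega in (0.3, 0.6, np.sqrt(v0), 0.9, 1.2):
    print(round(float(omega), 3), round(float(greybody(omega, v0, d2v0)), 4))

At ω^2 = V_0 the factor is exactly 1/2, and it approaches 0 and 1 in the low- and high-frequency limits, matching the qualitative discussion above.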
In order to understand the nature of the transmission and reflection coefficients, we shall plot them with the frequency ω for different values of the magnetic charge. The results for transmission probability are represented in Fig.<ref>(a), where the different values of magnetic charge have been chosen. It can be observed that, for all values of magnetic charge q, the transmission coefficient starts from 0 and reaches 1 when ω is increased. Besides, one can see that the transmission coefficient diminishes as the absolute value of the magnetic charge q increases. In other words, the magnetic charge has the behaviour of obstructing the wave from passing through the black hole. The reason may be that the magnetic charge increases the peak value of the effective potential. Fig.<ref>(b) shows the comparison of transmission coefficient for Reissner-Nordström and ESTGB-NLED black hole. The transmission coefficient of ESTGB-NLED black hole presents the smaller change than that of Reissner-Nordström black hole when we view the magnetic (electric) charge as the variable parameter. We also exhibit the reflection coefficient with κ=5 in Fig.<ref>(c) and compare it with Reissner-Nordström black hole in Fig.<ref>(d). On the contrary, the reflection coefficient starts from 1 and reaches 0 with the increase of ω. Furthermore, as shown in Fig.<ref>, we present the results of the transmission and reflection coefficients in the case where q=-0.8 and κ varies from 1 to 4. When κ is increasing, the reflection coefficient becomes larger and the transmission coefficient becomes smaller. The reason for this behavior is that the peak value of the effective potential increases with the increase of κ.
§ ABSORPTION CROSS SECTION
In this section, we calculate the absorption cross section for the Dirac field, which is defined as the ratio of the number of particles absorbed by the black hole to the incident particle flux. Benone <cit.> proposed the partial wave method to get total absorption cross section as follows,
σ_abs=∑_l=0^∞σ_abs^l,
and the partial absorption cross section is given by
σ_abs^l=π/ω^2(2l+1)(1-|e^-2 iδ_l|^2),
Substituting the phase shift e^-2 iδ_l of the different l subwaves into the above expression for the partial absorption cross section, we can get another expression,
σ_abs=π/ω^2∑_l=0^∞(2l+1)(1-|ℛ_ω l|^2).
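The partial-wave sum itself is straightforward to sketch once transmission coefficients are available; the per-l barrier parameters below are hypothetical placeholders rather than values extracted from the ESTGB-NLED potential.

import numpy as np

def greybody(omega, v0, d2v0):
    # leading-order WKB transmission, as in the previous sketch
    alpha = 1j * (omega**2 - v0) / np.sqrt(-2.0 * d2v0 + 0j)
    return 1.0 - abs((1.0 + np.exp(-2j * np.pi * alpha)) ** -0.5) ** 2

def sigma_abs(omega, barriers):
    # sigma_l = (pi / omega^2) (2l + 1) (1 - |R|^2), summed over partial waves
    return sum(np.pi / omega**2 * (2 * l + 1) * greybody(omega, v0, d2v0)
               for l, (v0, d2v0) in enumerate(barriers))

barriers = [(0.05 * (l + 1) ** 2, -0.02 * (l + 1) ** 2) for l in range(10)]
for omega in (0.2, 0.5, 1.0, 1.5):
    print(omega, round(float(sigma_abs(omega, barriers)), 3))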
Considering the effects of the Dirac field, the results for the partial absorption cross section are shown in Fig.<ref> and Fig.<ref>. One can see from Fig.<ref> that the peak value of the partial absorption cross section decreases and the location of the maximum moves to the right when κ is increased. By observing the curves in Fig.<ref>(a) for different magnetic charges q, one finds that the partial absorption cross section tends to almost the same value at low and high frequencies, and the peak moves slightly to the right when the absolute value of q is increased. In Fig.<ref>(b), we compare the partial absorption cross sections of the Reissner-Nordström and ESTGB-NLED black holes. We note that the partial absorption cross sections of the two types of black holes also go to almost the same value in the low- and high-frequency regions because the effective potential has the form of a single-peak positive barrier. The effective potential is more sensitive to the electric charge; specifically, the electric charge hinders the passage of the wave more. So, in the mid-frequency region, the change of the partial absorption cross section of the ESTGB-NLED black hole with the magnetic charge is not as obvious as that of the Reissner-Nordström black hole with the electric charge.
As shown in Fig.<ref>(a), we draw the results of the total absorption cross section from κ=1 to κ=10 with the magnetic charge q=-0.4, q=-0.8, q=-1.0 and q=-2^5/3/3 respectively. We can see that the total absorption cross section increases and then tends to a stable value when we increase the frequency ω. We also obtain that, as we increase the absolute value of the magnetic charge, the total absorption cross section gradually diminishes. That is to say that, the magnetic charge weakens the absorption for the Dirac field. This is in agreement with the results presented in Ref. <cit.>. Additionally, we plot the total absorption cross section for the Reissner-Nordström black hole in Fig.<ref>(b), for comparison purposes.
Compared with the effective potential of the Reissner-Nordström black hole, we find that the effective potential of the ESTGB-NLED black hole changes more slowly. We can say that the magnetic charge has a smaller effect on the total absorption cross section of the Dirac field than the electric charge. Therefore, the variation of the total absorption cross section of the ESTGB-NLED black hole with the magnetic charge is not as pronounced as that of the Reissner-Nordström black hole with the electric charge.
§ CONCLUSIONS
In the preceding sections, we have studied the gray-body factor and the absorption cross section of the massless Dirac field for the black hole that solves 4D ESTGB gravity in the context of nonlinear electrodynamics. Since the solution is characterized by the ADM mass m and the magnetic charge q, the black holes have different structures for different choices of these parameters. Therefore, for generality, we have studied the non-extreme case with m=1 and -2^(5/3)/3 < q < 0, which is similar to the Reissner-Nordström spacetime. Specifically, we have plotted the effective potentials in Fig.1 and Fig.2 for the two cases. We have found that the effective potential of the Dirac field is more sensitive to the electric charge, since the variation of the effective potential of the Reissner-Nordström spacetime is more pronounced than that of the ESTGB-NLED spacetime. Besides, we have carried out numerical calculations of the gray-body factors using the sixth-order WKB approximation. We have shown the changes of the transmission and reflection coefficients with respect to the magnetic charge in Fig.3. We have observed that, compared with the Schwarzschild spacetime, the magnetic charge increases the reflection coefficient and decreases the transmission coefficient. In other words, the magnetic charge impedes the wave from passing the potential barrier. Moreover, when κ is increased, the reflection coefficient becomes larger and the transmission coefficient becomes smaller. It has been discovered in Fig.<ref>(a) that the total absorption cross section of the Dirac field decreases when we increase the absolute value of the magnetic charge, but increases with increasing frequency. Finally, in Fig.<ref>(b), we have compared the total absorption cross sections of the Reissner-Nordström and ESTGB-NLED black holes. It has been found that the absorption cross section of the Dirac field is more sensitive to the electric charge than to the magnetic charge in these two types of black holes.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENTS
This work was supported partly by the National Natural Science Foundation of China (Grants No. 12065012, No. 12065013), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006).
99
1Akiyama, K. et al.:
Astrophys. J. Lett. 875(2019)L1.
Riess1998
Riess, A.G. et al.:
Astron. J. 116(1998)1009-1038.
Fradkin1985
Fradkin, E.S. and Tseytlin, A.A.:
Nucl. Phys. B 261(1985)1-27.
Capozziello2007
Capozziello, F. et al.:
Mon. Not. Roy. Astron. Soc. 375(2007)1423-1440.
Doneva2018
Doneva, D.D. et al.:
Phys. Rev. D 98(2018)104056.
Woodard02210
Motohashi, H. and Suyama, T.:
JHEP 09(2020)032.
Doneva2019
Doneva, D.D. et al.:
Phys. Rev. D 99(2019)104045.
Kanti1996
Kanti, P. et al.:
Phys. Rev. D 54(1996)5049-5058.
scalariz_Doneva
Doneva, D.D. et al.:
Phys. Rev. D 102(2020)064042.
Canate2020
Cañate, P. and Perez Bergliaffa, S.E.:
Phys. Rev. D 102(2020)104038.
Macedo2014
Macedo, C.F.B. and Crispino, L.C.B.:
Phys. Rev. D 90(2014)064001.
Hawking1976
Hawking, S.W.:
Phys. Rev. D 14(1976)2460-2473.
Kanti2002
Kanti, P. and March-Russell, J.:
Phys. Rev. D 66(2002)024023.
Songbai2010
Chen, S. and Jing, J.:
Phys. Lett. B 691(2010)254-260.
Ama-Tul-Mughani2021
Ama-Tul-Mughani, Q. et al.:
Astropart. Phys. 132(2021)102623.
Sharif2020
Sharif, M. and Ama-Tul-Mughani, Q.:
Phys. Dark Univ. 27(2020)100436.
Qanitah Ama-Tul-Mughani2020
Sharif, M. and Ama-Tul-Mughani, Q.:
PTEP 2020(2020)033E01.
Konoplya2021
Konoplya, R.A.:
Phys. Lett. B 823(2021)136734.
Cvetic1998
Cvetic, M. and Larsen, F.:
Phys. Rev. D 57(1998)6297-6310.
Zhang2018
Zhang, C.Y., Li, P.C. and Chen, B.:
Phys. Rev. D 97(2018)044013.
Miao2017
Miao, Y.G. and Xu, Z.M.:
Phys. Lett. B 772(2017)542-546.
Konoplya2011
Konoplya, R.A. and Zhidenko, A.:
Phys. Lett. B 686(2010)199-206.
Blome1984
Blome, H.J., and Mashhoon, B.:
Phys. Lett. A 100(1984)231-234.
Schutz1985
Schutz, B.F. and Will, C.M.:
Astrophys. J. Lett. 291(1985)L33-L36.
Iyer1987
Iyer, S. and Will, C.M.:
Phys. Rev. D 35(1987)3621.
Konoplya2003
Konoplya, R.A.:
Phys. Rev. D 68(2003)024018.
Matyjasek2017
Matyjasek, J. and Opala, M.:
Phys. Rev. D 96(2017)024011.
Wongjun2020
Wongjun, P. et al.:
Phys. Rev. D 101(2020)124033.
Cai2020
Cai, X.C. and Miao, Y.G.:
Phys. Rev. D 101(2020)104023.
Hu2020
Hu, Y. et al.:
EPL 128(2019)50006.
Liang2018
Liang, J.:
Commun. Theor. Phys. 70(2018)695.
Saleh2018
Saleh, M.,Thomas, B.B. and Kofane, T.C.:
Eur. Phys. J. C 78(2018)325.
Liang20018
Liang, J.:
Chin. Phys. Lett. 35(2018)050401.
Aragon2021
Aragón, A. et al.:
Phys. Rev. D 103(2021)064006.
Li2017
Li, J., Lin, K. and Wen, H.:
Adv. High Energy Phys. 2017(2017)5234214.
Wang2021
Wang, M. et al.:
Eur. Phys. J. C 81(2021)469.
Jawad2020
Jawad, A. et al.:
Mod. Phys. Lett. A 35(2020)2050298.
Sharif2021
Sharif, M. and Khan, A.:
[arXiv:2109.06010 [gr-qc]].
Guo2013
Guo, G.:
Eur. Phys. J. C 73(2013)2573.
Futterman
Futterman, J.A.H. et al.:
1988 (Cambridge: Cambridge Uni-versity Press) p. 254.
Sanchez1978
Sanchez, N.G.:
Phys. Rev. D 18(1978)1030.
Unruh1976
Unruh, W.G.:
Phys. Rev. D 14(1976)3251-3259.
Crispino2007
Crispino, L.C.B. et al.:
Phys. Rev. D 75(2007)104012.
Jung2004
Jung, E. et al.:
Phys. Lett. B 602(2004)105-111.
Crispino2008
Crispino, L.C.B. and Oliveira, E.S.:
Phys. Rev. D 78(2008)024011.
Songbai Chen2014
Liao, H. et al.:
Phys. Lett. B 728(2014)457-461.
Macedo2013
Macedo, C.F.B. et al.:
Phys. Rev. D 88(2013)064033.
Leite2017
Leite, L.C.S. et al.:
Phys. Lett. B 774(2017)130-134.
Huang2019
Huang, H. et al.:
Gen. Rel. Grav. 51(2019)22.
Huang2015
Huang, H. et al.:
Gen. Rel. Grav. 47(2015)8.
2018
Leite, L.C.S. et al.:
Phys. Rev. D 98(2018)024046.
Anacleto2020
Anacleto, M.A. et al.:
Phys. Lett. B 803(2020)135334.
Magalhaes2020
Magalhães, R.B. et al.:
Eur. Phys. J. C 80(2020)386.
Junior2020
Lima, H.C.D. et al.:
Phys. Lett. B 811(2020)135921.
Benone2018
Benone, C.L. et al.:
Int. J. Mod. Phys. D 27(2018)1843012.
Brill1957
Brill, D.R. and Wheeler, J.A.:
Rev. Mod. Phys. 29(1957)465-479.
Cho2005
Cho, H.T. and Lin, Y.C.:
Class. Quant. Grav. 22(2005)775-790.
Cooper1995
Cooper, F. et al.:
Phys. Rept. 251(1995) 267-385.
Hawking1975
Hawking, S.W.:
Commun. Math. Phys. 43(1975)199-220.
Hawking19761
Hawking, S.W.:
Phys. Rev. D 13(1976)191-197.
Benone104053
Benone, C.L. et al.:
Phys. Rev. D 89(2014)104053.
|
http://arxiv.org/abs/2307.03968v1 | 20230708125450 | Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation | [
"Y. K. Negi",
"N. Balakrishnan",
"S. M. Rao"
] | cs.CE | [
"cs.CE",
"cs.NA",
"math.NA"
] |
Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation
Y. K. Negi, N. Balakrishnan, S. M. Rao
August 12, 2023
===================================================================
In this paper, we propose a new multi-level power series solution method for solving a large surface and volume electric field integral equation-based H-Matrix.
The proposed solution method converges in a fixed number of iterations and is applied at each level of the H-Matrix computation. The method
avoids computing the full matrix, since the system can be solved independently at each level, starting from the leaf level. The solution at any level can be used as
the final solution, saving the matrix computation time of the full H-Matrix. The paper shows that the leaf-level matrix computation combined with the power series solution gives results as accurate as the full H-Matrix iterative solver. The method yields considerable time and memory savings compared to the H-Matrix iterative solver, while retaining the O(NlogN) solution complexity.
Method of Moments (MoM), H-Matrix, surface electric field integral equation, volume electric field integral equation.
§ INTRODUCTION
With the use of ever-increasing frequencies for various defence and civilian applications, the electrical
size of electromagnetic scattering/radiation problems has grown drastically <cit.>. Solving these electrically large problems numerically to obtain fast and
accurate results is a major challenge for the Computational Electromagnetics (CEM) community. Also, with the increase in computing power and memory,
the need for large-scale solution algorithms has grown even more. Out of the various numerical methods in CEM, the most popular methods are:
a) the Finite Difference Time Domain (FDTD) <cit.> method in the time domain and b) the Method of Moments (MoM) <cit.> and Finite Element
Method (FEM) <cit.> in the frequency domain. Traditionally, the frequency domain methods have been more popular than the time domain methods
as most of the early experimental results were available in the frequency domain and validating the computational results was convenient and easy.
Out of the various frequency domain methods, MoM-based methods are highly accurate and flexible for modeling irregular structures. The MoM matrix
can be computed with the Surface Electric Field Integral Equation (S-EFIE) for solving Perfect Electrical Conductor (PEC) problems with surface mesh, and the
Volume Electric Field Integral Equation (V-EFIE) <cit.> for solving inhomogeneous dielectric problems with volume mesh. Further, the MoM leads
to a smaller number of unknowns compared to FEM and is free from grid dispersion error. However, the MoM matrix is a full matrix compared to a
sparse matrix for the FEM method. Hence, the solution to large size problems with MoM in electromagnetics requires high matrix memory
and computation time due to the dense matrix. Note that the MoM dense matrix computation, matrix-vector product, and storage costs all scale as O(N^2) for N unknowns. Solving the dense matrix with an iterative solver requires N_itr O(N^2) operations for N_itr iterations, with an O(N^2) matrix-vector multiplication cost per iteration. With a direct solver, the complexity grows as O(N^3). Various fast solver algorithms like the Multi-Level Fast Multipole Algorithm (MLFMA) <cit.>, Adaptive Integral Method (AIM) <cit.>, FFT <cit.>, IE-QR <cit.>,
and Hierarchical Matrix (H-Matrix) <cit.> have been proposed to overcome the MoM limitations of high memory and computation cost.
Fast solvers reduce the matrix memory, matrix fill time, and matrix-vector product time to O(NlogN). The reduced matrix-vector product time
improves the solution time to N_itr O(NlogN) for N_itr iterations with various iterative solution methods like Bi-Conjugate Gradient
(BiCG) or Generalized Minimum Residual (GMRES).
Fast solvers are built on the compressibility property of the far-field interaction matrices. The compression of the far-field matrices can be done
using analytical matrix compression methods like MLFMA or AIM, and also with numerical matrix compression methods like H-Matrix. Compared to
analytical compression methods, numerical compression methods are easy to implement and are kernel independent. All the fast solvers depend on the
iteration count of the iterative solution methods. The convergence of the iterations depends on the condition number of the computed MoM matrix,
and further, for a large number of unknowns, the convergence iteration count also increases. The high iteration count can be mitigated by using various
preconditioners such as ILUT, null-field, and Schur's complement based methods <cit.>. A preconditioner improves
the condition number of the matrices and reduces the iteration count of the overall matrix solution. Despite the improvement in solution time, the use
of preconditioners comes with the overhead of preconditioner computation time and extra preconditioner solution time for each iteration. Also, for
solving problems with a large number of unknowns, the iteration count may still be high.
Recently there has been a trend in the CEM community for the development of an iteration-free fast solver method for solving problems with a large
number of unknowns. Various fast direct solvers <cit.> have been proposed to overcome the iteration dependency of the solution process.
These direct solvers are based on LU decomposition and compression methods. The methods are complex to implement and give quadratic scaling
for complex real-world problems.
In this work, we propose a Multi-Level (ML) fast matrix solution method based on the power series <cit.>. The proposed method exploits the
property of ML matrix compression of the H-Matrix. The matrix is solved for each level using the matrix computation of the leaf level only, and the
matrix solution can be terminated at the desired level as per the required accuracy. Our experimental results show that we get good accuracy even for the
lowest level solution. The method relies on matrix-vector multiplication at each level, and using the solution of the lowest level saves matrix computation
time and memory for the overall matrix solution.
The rest of the paper is organized as follows. Section II gives a summary of the MoM computation for the S-EFIE and V-EFIE, and Section III covers the H-Matrix
computation for the S-EFIE and V-EFIE. The derivation of the proposed ML power series solver is given in Section IV. The numerical results of the
proposed method and the conclusion are discussed in Sections V and VI, respectively.
§ METHOD OF MOMENTS
MoM is a popular and efficient integral equation based method for solving various electromagnetic radiation/scattering problems. MoM can be formulated using the Electric Field Integral Equation (EFIE) for both surface and volume modeling. Surface modeling can be done using the Rao Wilton Glisson (RWG) <cit.> triangle basis function, whereas volume modeling can be done using the Schaubert Wilton Glisson (SWG) <cit.> tetrahedral basis function. For dielectric modeling, compared to the S-EFIE, the V-EFIE is an integral equation of the second kind and is more well-conditioned and stable. The V-EFIE can model inhomogeneous bodies more efficiently than the surface EFIE. In this work, we use the RWG basis function for PEC surface S-EFIE modeling and the SWG basis function for volume V-EFIE modeling. For a conductor/dielectric scattering body illuminated by an incident plane wave, the governing surface/volume EFIE states that the total electric field (E^total) at the scattering surface/volume is the sum of the incident electric field (E^inc) and the scattered electric field (E^scatt).
E^total=E^inc+E^scatt.
The scattered electric field is due to the surface current on the PEC surface or the volume polarization current in the dielectric medium and is given as:
E^scatt=-jωA(r)- ∇ϕ(r).
In the above equation, A(r) is the magnetic vector potential describing the radiation of the current, and ϕ(r) is the electric potential describing the associated bound charge. Applying the boundary condition on the PEC structure, the S-EFIE can be written as:
E^inc=jωA(r)+ ∇ϕ(r).
Similarly, the V-EFIE can be written for a dielectric inhomogeneous body as:
E^inc=D(r)/ϵ(r) + jωA(r) + ∇ϕ(r).
In the above equation, D(r) is the electric flux density and ϵ(r) is the dielectric constant of the scattering volume medium. The surface current in equation (3) for the PEC structure is expanded with the RWG function, and similarly, in equation (4) for the dielectric volume structure, the polarization current and charge are modeled with the SWG basis function. Performing Galerkin testing on each term and integrating over the surface/volume, the final system of equations reduces to the linear system below:
[Z]x=b.
In the above equation, Z is the dense MoM matrix, b is the known excitation vector due to the incident plane wave, and x is the unknown coefficient vector to be computed. The dense matrix leads to high matrix computation cost and memory requirements as well as high solution time complexity. In the next section, we discuss the implementation of the H-Matrix to mitigate the high cost of the conventional MoM matrix.
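To make the cost scaling above concrete, the following short sketch contrasts a direct solve of Eq. (5) with an iterative GMRES solve, where each iteration costs one O(N^2) matrix-vector product. It is only illustrative: the matrix here is a random, diagonally dominant placeholder rather than an actual EFIE impedance matrix.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Illustrative stand-in for the dense MoM system [Z]x = b of Eq. (5).
# A real Z comes from Galerkin testing of the S-EFIE/V-EFIE kernels; here a
# random diagonally dominant matrix is used only to show the cost scaling.
N = 2000                                         # number of RWG/SWG unknowns
rng = np.random.default_rng(0)
Z = rng.standard_normal((N, N)) + N * np.eye(N)  # dense: O(N^2) storage
b = rng.standard_normal(N)                       # excitation from the incident plane wave

x_direct = np.linalg.solve(Z, b)                 # direct solve: O(N^3) work
x_iter, info = gmres(Z, b)                       # each iteration: one O(N^2) mat-vec
print(info, np.linalg.norm(Z @ x_iter - b) / np.linalg.norm(b))
```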
§ H-MATRIX
The high cost of MoM limits its application to a few λ problem sizes. This limitation of MoM can be overcome by incorporating fast solvers. Most of the fast solvers work on the principle of compressibility of the far-field matrices. For the implementation of a fast solver, the mesh of geometry is divided into blocks using an oct-tree or binary-tree division process and terminated at the desired level with a limiting edge or face count in each block. The non-far-field interaction blocks at the lowest level are considered near-field blocks and are in the dense matrix form. The compression of the far-field block matrix at each level can be done analytically or numerically. The system of equations in equation (5) can now be written as the sum of near-field and far-field matrix form as:
[Z_N+Z_F]x=b.
In the above equation, Z_N is the near-field block matrix and Z_F denotes the compressed far-field block matrices of the MoM fast solver. Numerical compression of far-field matrices is easy to implement and is kernel-independent. A few of the popular fast solvers using numerical compression methods are IE-QR and the H-Matrix. In this work, we have implemented the H-Matrix for ML matrix compression. For the ML compression computation, the mesh is divided into ML binary tree division-based subgroups. The H-Matrix works on the computation of a far-field matrix for the interaction blocks satisfying the admissibility condition given in equation (7). The admissibility condition states that η times the distance between the observation cluster (Ω_t) and the source cluster (Ω_s) should be greater than or equal to the minimum of the diameters of the observation and source clusters for far-field computation, where η is the admissibility control parameter, and its value is taken as 1.0.
η dist(Ω_t,Ω_s) ≥ min(diam(Ω_t),diam(Ω_s)).
The far-field matrix block compression is done in such a way that its parent interaction matrix should not be computed at the top level. Matrix compression at each level is carried out using the Adaptive Cross Approximation (ACA) <cit.> <cit.> method. The method exploits the rank deficiency property of the far-field matrix blocks. The low-rank sub-block of the far-field Z_sub with m rows and n columns is decomposed into approximate U_(m× k) and V_(k× n) matrices, where k is the numerical rank of the low-rank sub-block far-field matrix such that k<<min(m,n). In this work, for memory savings, we only compute half of the H-Matrix <cit.> by making the computation process symmetric, and to maintain the accuracy of the H-Matrix, we use re-compressed ACA <cit.> for far-field block compression. The solution time of an iterative solver depends on the iteration count, and the convergence iteration count in turn depends on the condition number of the matrix. Also, as the number of unknowns increases, the iteration count for convergence increases. In the next section, we discuss our proposed method, whose solution process is independent of the iteration count and of the far-field level blocks.
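As an illustration of the two ingredients just described, the sketch below checks the admissibility condition of equation (7) for a pair of point clusters and compresses an admissible block of a simple static 1/r kernel with a simplified, fully pivoted ACA. It is only a toy: the actual solver works on EFIE blocks and uses partially pivoted ACA that never forms the full block, followed by the re-compression step cited above.

```python
import numpy as np

def admissible(ct, cs, eta=1.0):
    """Admissibility condition of Eq. (7) for two point clusters (rows are points)."""
    diam_t = np.linalg.norm(ct.max(0) - ct.min(0))
    diam_s = np.linalg.norm(cs.max(0) - cs.min(0))
    dist = np.linalg.norm(ct.mean(0) - cs.mean(0))   # simple centre-to-centre distance
    return eta * dist >= min(diam_t, diam_s)

def aca(block, tol=1e-3, max_rank=50):
    """Simplified ACA with full pivoting on an explicit block (illustration only)."""
    R = block.copy()
    U, V = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) < tol * np.abs(block).max():
            break
        u = R[:, j] / R[i, j]
        v = R[i, :].copy()
        U.append(u); V.append(v)
        R -= np.outer(u, v)
    return np.array(U).T, np.array(V)                # Z_sub ~ U (m x k) @ V (k x n)

rng = np.random.default_rng(1)
src = rng.random((200, 3))                               # source cluster, Omega_s
obs = rng.random((200, 3)) + np.array([5.0, 0.0, 0.0])   # well-separated observation cluster, Omega_t
print(admissible(obs, src))                              # True -> compress this block
Zsub = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)  # static 1/r kernel
U, V = aca(Zsub)
print(U.shape[1], np.linalg.norm(Zsub - U @ V) / np.linalg.norm(Zsub))   # rank k and relative error
```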
§ MULTI-LEVEL POWER SERIES SOLUTION
The full H-Matrix is a combination of near-field and far-field block matrices. The far-field compressed block matrices are computed for various levels, and in equation (6), the far-field matrix (Z_F) can be further decomposed into the different matrix levels as below:
[Z_F]=[Z_F1]+[Z_F2]+[Z_F3].
In the above equation far-field matrix Z_F1 is for level 1, Z_F2 is for level 2. and, Z_F3 is for level 3. Level 3 forms the leaf level of the binary tree and level 1 as the top level of the tree. Fig. 1. shows the H-Matrix layout for a two-dimension strip. In Fig. 1. light gray boxes represent Z_F1 far-field matrix at level 1, dark gray boxes as Z_F2 is for level 2 and large white boxes as Z_F3 for level 3, the black boxes are the near-field dense matrices. For illustrative purposes, the near-field matrix is a diagonal block form for a two-dimension strip. The real-world problems are three-dimension in structure, giving a non-diagonal block near-field matrix. To implement our ML power series solution method, we must diagonalize the near-field block matrix. The near-field matrix in equation (6) is diagonalized using diagonal scaling coefficient [α], as computed in <cit.> such that the scaled diagonal block near-field matrix can be given as:
[Z̃_N]=[α][Z_N].
Substituting equation (8) into equation (6) and scaling with the coefficients [α] gives:
[α][Z_N+Z_F1+Z_F2+Z_F3]x=[α]b.
[Z̃_N]x+[α][Z_F1]x+[α][Z_F2]x+[α][Z_F3]x=b̃.
In the above equation, b̃ is the [α]-scaled vector b, and the equation can be further simplified as:
x+ [Z̃_N]^-1[α][Z_F1]x+[Z̃_N]^-1[α][Z_F2]x
+[Z̃_N]^-1[α][Z_F3]x= [Z̃_N]^-1b̃.
Let [Z̃_N]^-1[α][Z_F1]=[U_1], [Z̃_N]^-1[α][Z_F2]=[U_2] and [Z̃_N]^-1[α][Z_F3]=[U_3]; then equation (12) can be further simplified as
x+ [U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃.
[I+ U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃.
x+[I+ U_1]^-1[U_2]x +[I+ U_1]^-1[U_3]x
=[I+ U_1]^-1 [Z̃_N]^-1b̃.
Let [I+ U_1]^-1[U_2]=[V_2] and [I+ U_1]^-1[U_3]=[V_3]; then equation (15) can be further simplified as
x+ [V_2]x+[V_3]x = [I+ U_1]^-1 [Z̃_N]^-1b̃.
x+[I+ V_2]^-1[V_3]x=[I+ V_2]^-1[I+ U_1]^-1 [Z̃_N]^-1b̃.
Let [I+V_2 ]^-1 [V_3 ]=[W_3] and equation (17) can be written as
x+[W_3]x=[I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃.
x=[I+W_3 ]^-1 [I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃.
In the above equations [I+W_3 ]^-1,[I+ V_2 ]^-1 and [I+ U_1 ]^-1 can be solved independently at each level using a power series solution method with the expansion as below:
[I+ U_1 ]^-1=[I+ [Z̃_N]^-1[α][Z_F1]]^-1.
[I+V_2 ]^-1=[I+[I+U_1 ]^-1 [U_2 ]]^-1
=[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1 [Z̃_N]^-1[α][Z_F2]]^-1.
[I+W_3 ]^-1=[I+[I+V_2 ]^-1 [V_3 ]]^-1
=[I+[I+[I+U_1 ]^-1[U_2 ]]^-1[I+U_1 ]^-1[U_3 ]]^-1
=[I+[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1[Z̃_N]^-1 [α][Z_F2 ]]^-1
[I+[Z̃_N]^-1[α][Z_F1]]^-1[Z̃_N]^-1[α][Z_F3]]^-1.
From equations (20), (21), and (22), it can be observed that the solution of these equations is dependent on that level and the lower levels of the binary tree block interaction matrix. At each level, the inverse of the matrix system equation can be efficiently computed by using a fast power series solution<cit.>. The fast power series iterative solution converges in two fixed iterations. The solution process only depends on the matrix-vector product of the H-Matrix, thus retaining the complexity of O(NlogN)<cit.>. The ML solution can be computed at the desired level per the required accuracy. Our results show that the solution at the leaf level gives an accurate result leading to time and memory savings.
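The nested inverses in equations (19)-(22) only require matrix-vector products, which is what makes the method compatible with the level-wise H-Matrix blocks. The small dense-matrix sketch below mimics that structure: the near-field block is taken as diagonal (playing the role of the [α]-scaled Z̃_N), the three far-field levels are small-norm stand-ins, and every [I+M]^-1 is applied through a truncated power (Neumann) series. It illustrates the solution flow of equation (19) only, not the actual H-Matrix implementation.

```python
import numpy as np

def neumann_apply(apply_M, rhs, terms=3):
    """Apply [I + M]^{-1} to rhs via the power series (I - M + M^2 - ...) rhs,
    valid when the norm of M is below 1; only mat-vec products with M are needed."""
    x = rhs.copy()
    term = rhs.copy()
    for _ in range(1, terms):
        term = -apply_M(term)
        x = x + term
    return x

rng = np.random.default_rng(2)
N = 400
ZN = np.diag(2.0 + rng.random(N))        # near-field block, diagonal after [alpha] scaling
ZF1, ZF2, ZF3 = (0.005 * rng.standard_normal((N, N)) for _ in range(3))  # level-wise far-field stand-ins
Z = ZN + ZF1 + ZF2 + ZF3
b = rng.standard_normal(N)

ZN_inv = 1.0 / np.diag(ZN)               # trivial inverse of the diagonal near field
U1 = lambda y: ZN_inv * (ZF1 @ y)        # U_1 y
U2 = lambda y: ZN_inv * (ZF2 @ y)        # U_2 y
U3 = lambda y: ZN_inv * (ZF3 @ y)        # U_3 y
V2 = lambda y: neumann_apply(U1, U2(y))                      # V_2 y = [I+U_1]^-1 U_2 y
W3 = lambda y: neumann_apply(V2, neumann_apply(U1, U3(y)))   # W_3 y = [I+V_2]^-1 [I+U_1]^-1 U_3 y

# Eq. (19): x = [I+W_3]^-1 [I+V_2]^-1 [I+U_1]^-1 ZN^-1 b, each inverse applied by power series
x = neumann_apply(W3, neumann_apply(V2, neumann_apply(U1, ZN_inv * b)))
print(np.linalg.norm(Z @ x - b) / np.linalg.norm(b))         # small relative residual
```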
§ NUMERICAL RESULTS
In this section, we show the accuracy and efficiency of the proposed method. The simulations are carried out on a 128 GB memory, Intel Xeon E5-2670 processor system with the double-precision data type. The H-Matrix computation is done with an ACA matrix compression error tolerance of 1e-3 <cit.> and solved with the GMRES iterative solver with a convergence tolerance of 1e-6 <cit.>. For a compressed or dense matrix [Z], if we want to expand [I+Z]^-1 in a power series, the necessary and sufficient condition for convergence is ||Z||<1, and we scale it to 0.1 for our simulations <cit.>. The conductor and dielectric geometry with dielectric constant ϵ_r is meshed with an element size less than λ/10 and λ/(10√(ϵ_r)), respectively. To show the accuracy of the proposed method, the RCS results are compared with the full H-Matrix iterative solver <cit.>. In the following subsections, we demonstrate the far-field memory and computation time savings, along with the solution time savings, of our proposed ML power series solution with different examples.
§.§ PEC square plate
To show the accuracy and efficiency on a PEC object, in this subsection we consider a square plate of size 15.0 λ along the x and y axes, meshed with 67,200 unknown edges. The square plate mesh is divided with binary tree division up to level 6. The PEC S-EFIE H-Matrix is solved with the ML power series solution method and the H-Matrix iterative solver. The ML power series converges in 2 iterations, while the iterative solver converges in 686 iterations. Only the far-field matrix at leaf level 6 is computed for the ML power series solution, ignoring far-field computation from levels 1 to 5 of the binary tree.
Fig. 2 shows the bi-static RCS of the PEC square plate, and it can be observed that the solution with the ML power series solver matches that of the H-Matrix iterative solver. Table 1 shows the savings in memory, computation, and solution time of the ML power series solution method compared with the conventional H-Matrix-based iterative solver.
§.§ Dielectric slab
To show the accuracy and efficiency for a considerable size dielectric problem, in this subsection we consider a dielectric slab elongated along the y-axis with 10.0 λ length, 1.0 λ width, and 0.1 λ thickness and a dielectric constant (ϵ_r=2.0), meshed with 120,080 tetrahedral faces. The ML power series converges in 2 iterations, and the regular H-Matrix iterative solver converges in 33 iterations.
The dielectric slab mesh is divided with binary tree division till level 10. Only the far-field matrix at leaf level 10 is computed for the ML power series solution. The accuracy of the method for a Bi-static RCS is shown in Fig. 3. Table 2 shows the significant matrix memory, matrix fill and solution time savings of the ML power series solution compared to the conventional H-Matrix-based iterative solver.
§.§ Dielectric hollow cylinder
In this subsection, we consider a dielectric hollow cylinder elongated along the y-axis with 6.0λ length, 0.4λ outer radius, and 0.05λ thickness with a dielectric constant (ϵ_r=2.0), meshed with 158,830 tetrahedral faces. The ML power series converges in 2 iterations, and the H-Matrix iterative solver converges in 24 iterations.
The hollow cylinder mesh is partitioned with a binary tree division till level 8, and for the ML power series solution only the far-field matrix at leaf level 8 is computed. Fig. 4. shows the close match in the bi-static RCS computed using the ML power series method and that with regular H-Matrix iterative solver. Table 3 shows the memory and time saving of the ML power series solution compared to the conventional H-Matrix iterative solver.
§ CONCLUSION
It can be observed from the illustrative examples in the previous sections that our proposed ML power series solution method gives considerable matrix memory, fill and solve time saving for significant size problems. The solution method is as accurate as the H-Matrix iterative solver. The savings may not be substantial for small-size mesh structures. Still, the method will give significant savings for large-size problems taken up for illustration and for complex and sizeable electrical problems like antenna arrays and complex composite structures. Also, the technique is entirely algebraic in nature and can apply to fast analytical solver-based methods like AIM and MLFMA. The matrix block in each level can be computed independently, and the solution of the method only depends on the matrix-vector product of the system matrix. Hence, the proposed method is amenable to efficient parallelization.
Yoginder Kumar Negi
obtained the B.Tech degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi, India, in 2005, the M.Tech degree in Microwave Electronics from Delhi University, New Delhi, India, in 2007 and the PhD degree in engineering from Indian Institute of Science (IISc), Bangalore, India, in 2018.
Dr Negi joined Supercomputer Education Research Center (SERC), IISc Bangalore in 2008 as a Scientific Officer. He is currently working as a Senior Scientific Officer in SERC IISc Bangalore. His current research interests include numerical electromagnetics, fast techniques for electromagnetic application, bio-electromagnetics, high-performance computing, and antenna design and analysis.
B. Narayanaswamy
received the B.E. degree (Hons.) in Electronics and Communication from the University of Madras, Chennai, India, in 1972, and the Ph.D. degree from the Indian Institute of Science, Bengaluru, India, in 1979.
He joined the Department of Aerospace Engineering, Indian Institute of Science, as an Assistant Professor, in 1981, where he became a Full Professor in 1991, served as the Associate Director, from 2005 to 2014, and is currently an INSA Senior Scientist at the Supercomputer Education and Research Centre. He has authored over 200 publications in the international journals and international conferences. His current research interests include numerical electromagnetics, high-performance computing and networks, polarimetric radars and aerospace electronic systems, information security, and digital library.
Dr. Narayanaswamy is a fellow of the World Academy of Sciences (TWAS), the National Academy of Science, the Indian Academy of Sciences, the Indian National Academy of Engineering, the National Academy of Sciences, and the Institution of Electronics and Telecommunication Engineers.
Sadasiva M. Rao
obtained his Bachelor's, Master's, and Doctoral degrees in electrical engineering from Osmania University, Hyderabad, India, the Indian Institute of Science, Bangalore, India, and the University of Mississippi, USA, in 1974, 1976, and 1980, respectively. He is well known in the electromagnetic engineering community and is included in the Thomson Scientific Highly Cited Researchers list.
Dr. Rao has been teaching electromagnetic theory, communication systems, electrical circuits, and other related courses at the undergraduate and graduate level for the past 30 years at various institutions. At present, he is working at Naval Research Laboratories, USA. He published/presented over 200 papers in various journals/conferences. He is an elected Fellow of IEEE.
|
http://arxiv.org/abs/2307.04467v1 | 20230710103218 | Deformations at Earth's dayside magnetopause during quasi-radial IMF conditions: Global kinetic simulations and soft X-ray imaging | [
"Zhongwei Yang",
"R. Jarvinen",
"X. C. Guo",
"T. R. Sun",
"D. Koutroumpa",
"G. K. Parks",
"C. Huang",
"B. B. Tang",
"Q. M. Lu",
"C. Wang"
] | physics.space-ph | [
"physics.space-ph"
] |
Zhongwei Yang (ORCID: 0000-0002-1509-1529)
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
Finnish Meteorological Institute, FI-00101 Helsinki, Finland
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
LATMOS/IPSL, CNRS, UVSQ Université Paris-Saclay, Sorbonne Université, Guyancourt, 78280, France
Space Sciences Laboratory, University of California, Berkeley, California 94720, USA
CAS Engineering Laboratory for Deep Resources Equipment and Technology, Institute of Geology and Geophysics, CAS, Beijing, 100029 China
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
Deep Space Exploration Laboratory/School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, China
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
The Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) is an ESA-CAS joint mission. Its primary goals are to investigate the dynamic response of the Earth's magnetosphere to the solar wind (SW) impact via simultaneous in situ magnetosheath plasma and magnetic field measurements, X-ray images of the magnetosheath and magnetic cusps, and UV images of global auroral distributions. Magnetopause deformations associated with magnetosheath high speed jets (HSJs) under a quasi-parallel interplanetary magnetic field condition are studied using a three-dimensional (3-D) global hybrid simulation. Soft X-ray intensities calculated based on the physical quantities of solar wind protons and oxygen ions are compared. We obtain key findings concerning deformations at the magnetopause: (1) Magnetopause deformations are highly coherent with the magnetosheath HSJs generated at the quasi-parallel region of the bow shock, (2) X-ray intensities estimated using solar wind H^+ and self-consistent O^7+ ions are consistent with each other, (3) Visual spacecraft are employed to check the discrimination ability for capturing magnetopause deformations on lunar and polar orbits, respectively. The SMILE spacecraft on the polar orbit can be expected to provide opportunities for capturing the global geometry of the magnetopause in the equatorial plane. A striking point is that SMILE has the potential to capture small-scale magnetopause deformations and magnetosheath transients, such as HSJs, at medium altitudes on its orbit. Simulation results also demonstrate that a lunar-based imager (e.g., the Lunar Environment heliospheric X-ray Imager, LEXI) is expected to observe a localized brightening of the magnetosheath during HSJ events in the meridian plane. These preliminary results might contribute to the pre-studies for the SMILE and LEXI missions by providing qualitative and quantitative soft X-ray estimates of dayside kinetic processes.
§ INTRODUCTION
In-situ spacecraft observations in the near-Earth plasma environment (e.g., MMS, Cluster, Van Allen Probes, THEMIS, Geotail, Double Star) have made important contributions in revealing dynamic and kinetic problems of the solar wind interaction with the Earth's magnetosphere. These observations provide excellent opportunities for understanding the microphysics of collisionless shocks, magnetic reconnection, and wave-particle interactions, as well as cross-scale energy release and dissipation by substorms and turbulence. On the other hand, remote observations of radio emissions, optical light, infrared, extreme ultraviolet (EUV), X-ray, gamma-ray, and energetic neutral atoms are widely used for remote objects. These imaging techniques provide a new way for visualizing global pictures of the Earth's exosphere, plasmasphere, inner magnetosphere, magnetosheath, magnetotail, and the cusp region (e.g., TWINS, IBEX, XMM-Newton), as well as in solar activities and heliospheric structures (e.g., PSP, SO, SDO, IBEX).
Early ROSAT observations of X-ray and EUV emission from comet C/Hyakutake have been reported by <cit.>. Some mechanisms, such as thermal bremsstrahlung associated with hot electrons, possibly due to solar wind interaction effects, are suggested to explain the mechanism of this emission. <cit.> proposed that the solar wind contains a large number of heavy ion species with a range of charge states (e.g., C^6+, O^7+, O^8+). These ions will readily charge transfer with cometary or planet's exospheric neutrals, producing ions that can be highly excited and consequently emit photons in the X-ray and EUV part of the spectrum. This solar wind charge exchange mechanism is abbreviated as SWCX. Thereafter, an empirical formula that depends on the local neutral density, solar wind density, solar wind speed, and charge-exchange cross-section is proposed for the quantitative estimation of the X-ray intensity <cit.>. <cit.> found that a significant positive correlation exists between the solar wind fluxes and the soft X-ray intensity. Furthermore, XMM-Newton observations indicate that the elevated high-valence ion abundance inside a coronal mass ejection (CME), particularly for Ne^9+, Mg^11+, Mg^12+, etc., favors the enhancement of Earth's magnetospheric soft X-ray emissions <cit.>.
SWCX emissions are commonly observed in the heliosphere at comets <cit.>, Earth <cit.>, the Moon <cit.>, Jupiter <cit.>, and Mars <cit.>. <cit.> reviewed observations of soft X-ray emissions have become a powerful tool for panoramic imaging of the planetary magnetosphere and plasma environment. Currently, new and future space missions, specifically designed for X-ray imaging of the vast planetary space weather system, including the ESA/JAXA BepiColombo mission, SMILE ESA-CAS joint mission, NASA STORM missions, CubeSat/small spacecraft missions (e.g., NASA CuPID, JAXA GEO-X), and future lunar-based missions (e.g., NASA LEXI, Chinese Chang'e/SXI), were proposed and successively implemented for studying “charge-exchange," a poorly understood phenomenon that occurs when the solar wind collides with planetary exosphere and neutral gas in the heliosphere.
To support the development of new X-ray missions, numerical simulations are crucial for pre-studies. Several numerical models have been developed to simulate soft X-ray imaging and determine detectability. Empirical models <cit.> have been used to explore X-ray imaging of the solar wind-Mars interaction. Hybrid simulations and test particle calculations have been used to compute contributions from SWCX processes to X-ray emissions from Venus <cit.>. MHD simulations are commonly used to accurately describe the shape of global structures of the terrestrial magnetosphere during its interaction with the solar wind. By combining an empirical exosphere neutral profile with the solar wind flux from MHD simulations, the soft X-ray emission can be estimated using Cravens' formula. Studies have discussed the soft X-ray visibility of the solar storm compressed magnetopause, the cusp region, flank Kelvin-Helmholtz (K-H) waves, and magnetic reconnection associated outflows and flux transfer events (FTEs) in detail <cit.>.
Previous models and simulations have made a lot of achievements in the study of Earth's X-ray emissions. However, it is still an open question whether the soft X-ray excited by moderate-scale dynamic structures in the magnetosheath and magnetosphere is visible. Many cross-scale instantaneous structures have been observed from the foreshock all the way to the magnetopause. Here, we list some structures highly associated with at least ion kinetic behaviors:
1. The foreshock region is filled with ULF waves, shocklets, SLAMS, and other nonlinear structures, and even magnetic reconnections. <cit.> made a patchwork for the foreshock. The foreshock waves and structures have been clearly evidenced in both kinetic simulations <cit.>, and observations <cit.>.
2. Both kinetic simulations and MMS observations reveal that the bow shock not only undergoes back and forth swings under the turbulent solar wind, but also can experience kinetic-scale ripples <cit.> and self-reforming cycles <cit.>.
3. Magnetosheath high speed jets (HSJs) refer to an enhancement in the anti-sunward bulk velocity and dynamic pressure based on the x component of the ion velocity <cit.>. A recent study proposes that a fraction of HSJs is a direct consequence of shock reformation <cit.>, and they may be related to “throat aurora" and corresponding magnetopause distortion <cit.>. 2-D hybrid simulations have been employed to study the HSJ property, size, lifetime, and associated jet-driven bow wave <cit.>. They find that the jets are associated with the porous quasi-parallel shock front, and the scale size of jets can reach about 2.5-5Re.
4. In addition, magnetopause asymmetric reconnection <cit.>, magnetosheath reconnections at both electron and ion scales <cit.>, small-scale current filaments <cit.>, magnetopause K-H waves <cit.>, and associated vortex-induced reconnections <cit.> also play important roles in the cross-scale process and energy conversion during the Earth-solar wind interaction.
Some potential soft X-ray imaging objects, such as solar storms, K-H waves, and FTE events, have been investigated based on global MHD simulations <cit.>. In this study, we focus on HSJs, which are typically observed downstream from the quasi-parallel bow shock, and are highly associated with ion dynamic and kinetic processes in 3-D. Cluster and MMS simultaneous observations provide insight into HSJs. <cit.> found that HSJs are not localized into small regions but could span a region larger than 10Re, especially when the quasi-parallel shock covers the entire dayside magnetosphere under radial interplanetary magnetic field (IMF). The magnetopause can have multiple independent indentation places under the continuous impacts of HSJs. The magnetopause is deformed and can move in opposite directions at different places. It cannot, therefore, be considered as a smooth surface anymore but rather as a surface full of local indents. One striking point is that a large number of observations indicate that long radial IMF events can last from about 3-10 hours <cit.> to 1-2 days <cit.>. Under such long-duration IMF solar wind conditions, the foreshock has enough time to grow and reach a mature state. In this case, the magnetopause around the sunset point may suffer continuous disturbances of HSJs. If so, what the soft X-ray imaging will look like remains to be further simulated and analyzed.
In this paper, we focus on the dynamics of the foreshock and magnetosheath to address three primary questions: (1) How do HSJs continuously affect the magnetopause under radial IMF events? (2) What is the global picture and fate of these jets and their resulting magnetopause indents at different locations in 3D? (3) What is the timescale of the magnetopause response to HSJs, and (4) can it be identified in soft X-ray intensity images by SMILE, LEXI, and other lunar-based missions?
§ SIMULATION MODEL
In this paper, the interaction of the solar wind with Earth's magnetosphere has been simulated by the three-dimensional (3-D) global hybrid simulation platform RHybrid <cit.>.
The model setup includes the undisturbed, upstream solar wind ions injected in the simulation from the front (+x) wall along the -x direction with a drifting Maxwellian velocity distribution. Within the simulation domain, ion velocity distributions evolve according to model calculation self-consistently coupled with the evolution of the magnetic field. The perpendicular components of the undisturbed IMF to the flow (B_y, B_z) convect in the simulation domain frozen-in to the solar wind plasma, whereas the radial, flow-aligned component (B_x) is implemented as a constant magnetic field profile. Earth's magnetic field is estimated as a 3-D dipole field <cit.> instead of a mirror dipole <cit.>. Magnetospheric solar wind interaction, including different regions such as the dayside and nightside magnetosphere, magnetosheath and the foreshock and the boundaries like the bow shock and the magnetopause, forms self-consistently when the magnetized solar wind plasma flow encounters the geomagnetic field and the planetary environment. Electrons are modeled as a charge-neutralizing adiabatic fluid. The inner boundary is assumed at the geocenter distance of r = 3R_E. It is implemented as a perfect conducting sphere on which precipitated particles are absorbed.
The simulated Earth radius R_E is typically reduced to the order of 10 d_i0 (the upstream solar wind ion inertial length) in order to ensure the appearance of an Earth-like magnetosphere <cit.> and to save considerable computational costs in global hybrid simulations. In this paper, the value of R_E=1200 km. A set of solar wind parameters <cit.> is used to mimic the space environment for the SMILE mission, which is scheduled to launch in 2025 during solar maximum. The solar wind ions consist of protons with a bulk velocity of 450 km/s and a number density of 7 cm^-3, and the highly ionized minor species O^7+. The IMF magnitude is 10 nT, corresponding to a solar wind ion gyro-frequency of Ω_0=0.958≈1s^-1. The magnetic Reynolds number is set to 1.2×10^7. Uniform grid cells with a size of Δ x=Δ y=Δ z=0.08 R_E are used throughout the box. The cell dimensions are chosen as n_x× n_y× n_z=500×600×600. A total of about ten billion particles are used. A typical time step is Δ t=0.01 s. More details of the parameter setups are summarized in Table 1.
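As a quick consistency check of the upstream parameters listed above (and in Table 1), the short script below evaluates the Alfvén speed, the Alfvén Mach number, and the ion inertial length d_i0 from the quoted solar wind density, bulk speed, and IMF magnitude. The constants and unit conversions are standard; the script is purely illustrative.

```python
import numpy as np

mu0 = 4e-7 * np.pi           # vacuum permeability [H/m]
m_p = 1.6726e-27             # proton mass [kg]
q_e = 1.602e-19              # elementary charge [C]
eps0 = 8.854e-12             # vacuum permittivity [F/m]
c = 2.998e8                  # speed of light [m/s]

n_sw = 7e6                   # solar wind H+ density: 7 cm^-3 -> m^-3
v_sw = 450e3                 # bulk speed [m/s]
B_imf = 10e-9                # IMF magnitude [T]

v_A = B_imf / np.sqrt(mu0 * n_sw * m_p)            # Alfven speed, ~82 km/s
d_i0 = c / np.sqrt(n_sw * q_e**2 / (eps0 * m_p))   # ion inertial length, ~86 km
print(v_sw / v_A)            # Alfven Mach number ~5.5 (cf. 5.46 in Table 1)
print(1200e3 / d_i0)         # scaled R_E = 1200 km is ~14 d_i0
```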
§ SIMULATION RESULTS
§.§ Quasi-parallel IMF condition
Under radial solar wind conditions, there are at least two initiation methods for simulating the interaction between solar wind and planetary magnetospheres. The first involves introducing a dipole field within the IMF, allowing the dipole field strength to gradually diminish with increasing distance from the Earth's core, transitioning to a purely solar wind magnetic field <cit.>. The second method employs a mirrored dipole field approach on the sunward side, such as placing a mirrored dipole at x=+30 Earth radii, superimposing its magnetic field with the original field, and ultimately replacing the magnetic field in the upstream region at x=+15 Earth radii with a purely solar wind magnetic field <cit.>. Both of the initial configurations yield satisfactory magnetospheric morphologies and are extensively utilized in astrophysical and space physics simulations for Hermean-type planetary magnetospheres. Without loss of generality, this study adopts the first method for initiating the simulation.
Figure 1 illustrates the interaction between the solar wind and the Earth's magnetosphere under radial IMF conditions, as well as the formation process of the bow shock. Panels from left to right represent snapshots of the number density of H^+ ions at different times t=3s, 70s, 220s, and 235s, respectively. Figure 1a shows the Earth's dipole field at the beginning of the simulation. The magnetosphere begins to expand and gradually form. At a later time (Figure 1b), the Earth's magnetosphere, including key structures (e.g., the magnetopause, the magnetosheath, cusp regions, and the bow shock), has initially formed. A fraction of the incident solar wind ions are reflected at the bow shock front. These back-streaming ions interact with the freshly incident solar wind, causing low-frequency waves in the foreshock region. In Figures 1c-d, the foreshock has reached a mature form under the long-term radial solar wind conditions.
Yellow arrows in Figure 1c-d indicate that the foreshock low-frequency wave structure can undergo nonlinear evolution and become steep when approaching the Earth's bow shock. These nonlinear structures have been widely observed at quasi-parallel shocks <cit.>. Previous local simulations clearly revealed that such steepening foreshock transients are associated with magnetosheath dynamic structures, such as HSJs <cit.>. Both of spacecraft observations <cit.> and global simulations <cit.> clearly evidenced that HSJs generated in the immediate downstream of quasi-parallel shock can lead to magnetopause indents.
In the next section, we will take HSJs as an example to study magnetopause deformations (including indentation and protrusion).
§.§ Magnetopause deformation
Figure 2a (at t=235s) shows that the magnetopause is being compressed inward by the magnetosheath transients, forming a concave shape as indicated by the white arrow in the enlarged view (Figure 1d). A standard streamline method (LIC) is adopted to display the magnetic field lines of the forced magnetosphere. The magnetic field lines in the concave magnetopause region are locally bent inward, rather than moving as a whole. This is a crucial point and differs from the mechanism of magnetopause indents compressed by CME-driven shocks on a large scale. Figures 2b and c are snapshots of the magnetopause at non-concave and concave times (t=160s and 235s), respectively. Figure 2c depicts a zoom-in view of the region where the magnetopause is affected by HSJs. From Figure 2c, it can be clearly seen that there is a strong HSJ in the dayside direction of the concave region of the magnetopause. In comparison, the magnetopause without the HSJ impact (Figure 2b, t=160s) is relatively quiet and shows no obvious deformation as at t=235s. It is interesting to note that there is an HSJ located near z=-6R_E, where the magnetopause is slightly concave due to the influence of the HSJ. However, this HSJ is off the dayside and exists in the magnetosheath region at a relatively high latitude. The plasma flow in the local magnetosheath has begun to deflect, so the HSJ does not form a strong hit on the magnetopause like it did at the dayside. In summary, the foreshock continuously generates dynamic magnetosheath transients such as HSJs, which can cause deformation of the magnetopause. <cit.> used local simulations to statistically analyze the evolution of various transient structures such as HSJs, transient flux enhancements, and high speed plasmoids downstream of quasi-parallel shocks. The formation mechanism will not be further elaborated here, and this paper only focuses on the magnetopause deformation caused by such transient structures in response to soft X-ray imaging. Nevertheless, it is still worth mentioning that, unlike the planar shock in hybrid and PIC simulations <cit.>, global simulations indicate that the solar wind is deflected around the Earth's magnetosphere in the magnetosheath. The transients are more likely to impact the magnetopause near the subsolar point.
Figures 3a-c represent the time-evolution of ion number densities log_10 N sampled at different locations: A, B and C (denoted in Figure 2). To understand characteristics of the magnetopause depressions, Figures 3d-f show the time series of x-directional dynamic pressure P_dx, temperature T, and X-ray intensity P corresponding to the sampling location C. This dynamic evolution process cannot be reflected by shock and magnetopause empirical models. High-resolution data from Magnetospheric MultiScale (MMS) have been continuously released, and the kinetic processes of shock front rippling and self-reformation have been successively confirmed. These mechanisms may result in variations to the location and configuration of the bow shock, and most of the changes are concentrated on the ion scale. <cit.> show in situ evidence of HSJs generated at the Earth's bow shock as a direct consequence of shock self-reformation. In this paper, from a global simulation perspective, we trace the evolution of various regions from the foreshock to the magnetopause at a relatively large scale (in an order of ∼ R_E). Figure 3d shows that magnetopause depressions are usually accompanied by an increase in the dynamic pressure P_dx in the magnetosheath. Under the impact of the HSJs, the ion temperature inside the magnetopause does not significantly change (Figure 3e). In Figure 3f, one striking point is that the dynamic pressure can locally enhance the X-ray intensity within the magnetosheath ahead of the magnetopause. Furthermore, Figures 3b-c show that the magnetopause at Z=0 and Z=-2.5R_E exhibited earthward indentation at about t=270s. This is mainly due to the dragging of magnetic field lines caused by the HSJ impacting the magnetopause near the subsolar point.
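For readers who want to reproduce this kind of time-series analysis, the sketch below computes the x-directional dynamic pressure P_dx = n m_p v_x^2 from density and velocity samples and flags intervals where it exceeds half of the upstream solar wind value, one commonly used (but not unique) HSJ criterion. The magnetosheath time series here is synthetic and only stands in for the virtual-probe data of Figure 3.

```python
import numpy as np

m_p = 1.6726e-27                         # proton mass [kg]

def dynamic_pressure_x(n_cm3, vx_kms):
    """x-directional dynamic pressure P_dx = n m_p v_x^2, returned in nPa."""
    n = np.asarray(n_cm3) * 1e6          # cm^-3 -> m^-3
    vx = np.asarray(vx_kms) * 1e3        # km/s -> m/s
    return n * m_p * vx**2 * 1e9         # Pa -> nPa

P_sw = dynamic_pressure_x(7.0, 450.0)    # upstream solar wind reference (run parameters)

# synthetic magnetosheath series sampled ahead of the magnetopause (illustrative only)
t = np.arange(0.0, 120.0, 1.0)                                  # s
n_sheath = 20.0 + 2.0 * np.sin(0.1 * t)                         # cm^-3
vx_sheath = -120.0 - 180.0 * np.exp(-((t - 60.0) / 8.0) ** 2)   # a jet around t = 60 s
P_dx = dynamic_pressure_x(n_sheath, np.abs(vx_sheath))

is_jet = P_dx > 0.5 * P_sw               # threshold choice, not a unique definition
print(float(P_sw), float(P_dx.max()))
print(t[is_jet][[0, -1]] if is_jet.any() else "no jet")
```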
In summary, magnetopause depressions caused by magnetosheath transients could last 20-50 seconds. This will be more advantageous for the X-ray imaging. Of course, the quality of imaging also depends on many factors, such as the counts of X-ray photons, the field of view (FOV) at a certain orbit, the spatial and temporal resolutions, the exposure time, and the background noise. The estimation of X-ray imaging considering all factors mentioned above is beyond the scope of this paper and depend on the final parameters of the SMILE/SXI. The motivation of this work is to suggest more potential kinetic processes and structural objects that could be observed by soft X-ray instruments in the future. The soft X-ray calculated from the sampling area (Figure 3f) suggests that magnetopause deformations imaged by soft X-rays instrument can be possible. In the next subsection, we will further study the three-dimensional soft X-ray imaging of magnetosheath transients and magnetopause indents from perspectives of local intensities and line-of-sight (LOS) integrations.
§.§ Soft-X ray imaging
In this study, the X-ray intensity of the geocoronal SWCX emission for a particular line-of-sight (LOS) I can be estimated by the line integration of volume emission rate (P) as in previous investigations <cit.>.
I=1/4π∫Pdr=1/4π∫α n_Hn_swV_eff dr (keV cm^-2 s^-1 sr^-1) Eq.(1)
where n_H and n_sw are the number densities of exospheric hydrogen and solar wind protons, respectively. The effective collision speed is estimated from the solar wind velocity V_sw and thermal speed V_th as V_eff=√(V_sw^2+V_th^2) in Eq. (1). It is important to note that protons do not produce soft X-rays. Instead, heavy solar wind ions such as C^5+, C^6+, Ne^9+, O^7+, O^8+, Mg^11+, Mg^12+ emit soft X-rays through the SWCX <cit.>. For instance, the interaction O^7++H→ O^6+*+H^+, in which an electron is transferred from an exospheric hydrogen atom to a solar wind oxygen ion, leaves the oxygen ion in an excited state. The ion then emits a photon when it decays to a lower energy state and thus may lead to the satellite detection of soft X-rays. The heavy ions in the solar wind have a very small proportion and are reasonably considered to be test particles in previous MHD and hybrid simulations. Typically, the X-ray intensity emitted by heavy ions is estimated by combining the proton parameters and an interaction efficiency factor. In this paper, our simulations are performed in the presence of self-consistent solar wind H^+ and O^7+ ions, which allows us to independently estimate X-ray intensities based on O^7+ or H^+ ions, respectively. By applying the interaction efficiency factor (α), the proton-based value in Equation (1) is converted into the soft X-ray emissivity generated by the source ions. Cravens (2000) gave a rough estimate of α encompassing all the detailed atomic physics. Based on summarized parameter lists <cit.>, we use an interaction efficiency factor value of α=10^-15 (eV cm^2) for a solar wind speed of about 450 km/s, following previous simulations <cit.> and references therein. Although this setup is widely used, it is worth noting that the value of α is quite uncertain and depends on solar wind conditions <cit.>. An analytical model from <cit.> is adopted for the neutral density, given as
n_H=25(cm^-3)(10R_E/r)^3 Eq.(2)
where r is the distance from the considered location to the Earth's center. The X-ray intensity P of heavy ions, e.g., O^7+, can also be estimated following <cit.> and references therein.
P=σ_sw[O^7+/O][O/H]F_swn_H (keV cm^-3 s^-1) Eq.(3)
where σ_sw is the charge-exchange cross section that depends on the solar wind species and charge state. Parameters σ_sw=12×10^-15 (cm^2), [O^7+/O]=0.2 and [O/H]=4.76×10^-6 are adopted after previous studies <cit.> to simulate the solar wind conditions during the solar maximum period when soft X-ray missions will be launched.
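The sketch below strings Eqs. (1)-(3) together numerically: it evaluates the exospheric neutral density of Eq. (2), the proton-based volume emission rate of Eq. (1) with the α value quoted above, and a line-of-sight integral along a single ray. The plasma values along the ray are uniform placeholders, so the resulting intensity is only indicative; in the paper the integration is carried out through the full 3-D hybrid-simulation output.

```python
import numpy as np

R_E_cm = 6.371e8        # Earth radius in cm, for converting the path length of the LOS integral
alpha = 1e-15           # interaction efficiency factor [eV cm^2], value adopted in the text

def n_H(r_RE):
    """Exospheric hydrogen density of Eq. (2): 25 cm^-3 * (10 R_E / r)^3."""
    return 25.0 * (10.0 / np.asarray(r_RE)) ** 3

def emissivity(n_sw, v_sw_kms, v_th_kms, r_RE):
    """Volume emission rate P = alpha * n_H * n_sw * V_eff  [eV cm^-3 s^-1], Eq. (1)."""
    v_eff = np.sqrt(v_sw_kms**2 + v_th_kms**2) * 1e5     # km/s -> cm/s
    return alpha * n_H(r_RE) * n_sw * v_eff

# toy line of sight from 8 R_E to 15 R_E along the Sun-Earth line with uniform
# "magnetosheath-like" plasma (placeholder values, for illustration only)
r = np.linspace(8.0, 15.0, 500)                          # geocentric distance in R_E
P = emissivity(n_sw=20.0, v_sw_kms=200.0, v_th_kms=150.0, r_RE=r)
I = np.trapz(P, r * R_E_cm) / (4.0 * np.pi)              # Eq. (1): LOS integral, per steradian
print(I / 1e3, "keV cm^-2 s^-1 sr^-1")
```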
In Figures 4a,b, we present X-ray intensity profiles P in the meridional plane at t=160s and t=235s, respectively. The envelope of the magnetopause is indicated by a dashed curve. In conjunction with Figures 2b,c, we find that if there is no HSJ impact on the magnetopause in the subsolar magnetosheath region (e.g., at t=160s), the magnetopause maintains a relatively smooth shape. When an HSJ is observed in the magnetosheath (e.g., at t=235s), there is an enhancement of X-ray intensity in the magnetosheath and a noticeable inward indentation of the magnetopause (indicated by arrows). Figure 4c is a plot similar to Figure 4b but estimated based on heavy ion O^7+ data instead of proton data by Eq. (3). Similarly, we can see significant deformation of the magnetopause, and the dynamical process and qualitative conclusions are almost the same as those estimated from solar wind protons. This implies that the estimation of X-ray intensity using solar wind proton data is a fine approximation in previous works. The bottom panels of Figure 4 show the corresponding LOS integration values of I calculated by Eq. (1) from a dawn-side view. The dashed and solid curves indicate the variation of the magnetopause location without (Figure 4a) and with (Figures 4b-c) indents caused by magnetosheath HSJs. Furthermore, a localized enhancement of the LOS-integrated intensity I is visible in the magnetosheath ahead of the magnetopause indents. Such localized brightening events are expected to be captured by soft X-ray instruments from a dawnside or duskside view on the lunar orbit (e.g., the NASA LEXI mission).
For a wide field-of-view, the Soft X-ray Imager (SXI) onboard SMILE uses lightweight micropore optics that provide high angular resolution (i.e., ∼0.1^∘) for the 0.15-2.5 keV energy band <cit.>. To obtain good X-ray counts, SMILE/SXI is expected to achieve at least about 1.5^∘ angular resolution near the dayside magnetopause <cit.>. SXI has a field of view (FOV) of approximately 16^∘×27^∘, and its line of sight forms a fixed angle with the UVI payload pointing towards the polar region and points towards the subsolar magnetopause. We have preliminarily calculated the profile of the LOS X-ray intensity integral value I within the FOV on SMILE's possible orbit, to study the possibility of SMILE imaging the deformation of the magnetopause caused by dynamic structures such as magnetosheath HSJs.
In Figure 5, the left panel shows a 3-D volume rendering sketch of the X-ray intensity. Key regions such as the cusp, magnetopause (MP), magnetosheath (MS), bow shock (BS), and the foreshock region are marked in the sketch. The field of view (FOV) of the SMILE spacecraft (SC) is also roughly denoted in yellow for reference. The motivation of this study is to find the best/potential location on the SMILE orbit for the imaging of magnetosheath transients (e.g., local dynamic pressure enhancements represented by HSJs) and magnetopause deformations. First, we use hybrid simulation to obtain 3-D intensity profiles at two fixed times (A: at 160s and B: at 235s); then, we calculate the LOS-integrated X-ray intensity I for these two fixed profiles observed by visual SC at different locations on the SMILE orbit. The selected locations of SC are shown in Figures 5c and 5f (an animation of this Figure is available for a full one-day orbit). When the spacecraft is located at [2.0,8.6,9.3]R_E (Figure 5c), the calculated LOS I images for the two fixed profiles A and B are shown in Figure 5a and 5b, respectively. The black rectangular area in Figure 5 encircles the field of view of SC in the θ and ϕ space. The black dashed curve describes the envelope from the cusp all the way to the magnetopause. By comparing the region where HSJs impact the magnetopause (the area marked by white circles), it is clearly evidenced that the shape of the magnetopause impacted by HSJs has undergone significant indentation and is accompanied by local X-ray brightening. This conclusion is very interesting and can at least indicate that under the current height and spacecraft attitude orientation conditions shown in Figure 5c, it is very likely to capture the magnetosheath solar wind-magnetopause coupling process. Furthermore, in Figures 5d and 5e, we also calculated the LOS X-ray for imaging the magnetopause from a bird's-eye view under spacecraft apogee conditions on orbit. At the apogee, it is good for the SC to capture the entire geometry of the magnetopause, but LOS-integration effects may make it difficult to identify magnetosheath HSJs or local concavity and convexity of magnetopause in a smaller dynamic or kinetic scales. In summary, it means that at different locations on the SC orbit, there are advantages and disadvantages in imaging dynamic structures, magnetopause deformation, and overall geometries of the whole magnetopause.
§ CONCLUSIONS AND DISCUSSIONS
This article mainly uses 3D global hybrid simulation to study the dynamics of the Earth's bow shock and magnetosphere under radial IMF conditions and conducts soft X-ray imaging tests. The main conclusions are:
(1) Under radial solar wind conditions, the subsolar magnetosheath falls downstream of the quasi-parallel shock. Here, a large number of HSJs have been observed, which is consistent with previous Cluster and MMS statistical observations <cit.>. In addition, HSJs have a good correspondence with ULF steepening transients of the foreshock.
(2) The simulation in this article reproduces the finding that the spatial size of HSJs can reach the order of the Earth's radius, consistent with previous global simulations <cit.>. Moreover, it is found that HSJs can last for seconds to minutes at the subsolar point and impact the magnetopause to form a depression.
(3) We analyzed the ability of different spacecraft positions to discriminate local deformations of the magnetopause, for an approximate lunar orbit of 60R_E and SMILE's possible orbit. The main conclusion is that LOS X-ray imaging from the lunar orbit (such as LEXI) has a good ability to identify the magnetopause deformation within the meridian plane; polar-orbit spacecraft (such as SMILE) have advantages in imaging the overall geometry of the magnetopause within the equatorial plane at apogee. One striking point is that SMILE may have the potential to capture small-scale transient structures (e.g., HSJs) at low altitudes around the magnetopause.
In the near future, we need to consider the effects of background noise, different IMF conditions, the asymmetric exospheric neutral hydrogen profile, and solar wind structures (e.g., CME, TD, RD, CS <cit.>) on soft X-ray imaging. The main goal is to provide as many pre-studies as we can to support the data analysis of future soft X-ray space missions around 2025 during the solar maximum.
Authors are grateful to Daniel Weimer from Virginia Tech, Urs Ganse and Yann Kempf from University of Helsinki, San Lu from USTC, and Chuanfei Dong from Boston University for helpful discussions. This work was supported by the National Key R&D program of China No.2021YFA0718600, NNFSC grants 42188101, and 42274210, and the Specialized Research Fund for State Key Laboratories of China. The computations are performed by Numerical Forecast Modeling R&D and VR System of State Key Laboratory of Space Weather, and HPC of Chinese Meridian Project using the RHybrid code distributed under the open source GPL v3 license by the Finnish Meteorological Institute (github.com/fmihpc/rhybrid).
Table 1: Global hybrid model setups and solar wind conditions
Parameters Value
Number of grid cells (n_x× n_y× n_z) 500×600×600
Grid cell size (Δ x) (100 km)^3=(R_E/12)^3
Time step (Δ t) 10 ms
SW bulk velocity vector [V_x, V_y, V_z] [-450, 0, 0]km/s
SW H^+ density 7 cm^-3
SW O^7+ density 10^-4 cm^-3
SW H^+ temperature 15×10^4 K
SW O^7+ temperature 15×10^4 K
SW e^- temperature 15×10^4 K
IMF vector [B_x, B_y, B_z] [-9.96, 0.6, 0.6]nT
IMF spiral angle 4^∘ (away sector)
IMF magnitude 10nT
Alfvén Mach number 5.46
Magnetosonic Mach number 4.79
Dipole strength B_0 at the equator on the surface 4.5μ T
|
http://arxiv.org/abs/2307.04630v1 | 20230710151517 | The NPU-MSXF Speech-to-Speech Translation System for IWSLT 2023 Speech-to-Speech Translation Task | [
"Kun Song",
"Yi lei",
"Peikun Chen",
"Yiqing Cao",
"Kun Wei",
"Yongmao Zhang",
"Lei Xie",
"Ning Jiang",
"Guoqing Zhao"
] | cs.SD | [
"cs.SD",
"eess.AS"
] |
The NPU-MSXF Speech-to-Speech Translation System for IWSLT 2023 Speech-to-Speech Translation Task
Kun Song, Yi Lei, Peikun Chen, Yiqing Cao, Kun Wei, Yongmao Zhang, Lei Xie, Ning Jiang, Guoqing Zhao
==========================================================================================
^*Lei Xie is the corresponding author.
This paper describes the NPU-MSXF system for the IWSLT 2023 speech-to-speech translation (S2ST) task, which aims to translate multi-source English speech into Chinese speech. The system is built in a cascaded manner consisting of automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS). We make tremendous efforts to handle the challenging multi-source input. Specifically, to improve the robustness to multi-source speech input, we adopt various data augmentation strategies and a ROVER-based score fusion on multiple ASR model outputs. To better handle the noisy ASR transcripts, we introduce a three-stage fine-tuning strategy to improve translation accuracy. Finally, we build a TTS model with high naturalness and sound quality, which leverages a two-stage framework, using network bottleneck features as a robust intermediate representation for speaker timbre and linguistic content disentanglement. Based on the two-stage framework, pre-trained speaker embedding is leveraged as a condition to transfer the speaker timbre in the source English speech to the translated Chinese speech. Experimental results show that our system has high translation accuracy, speech naturalness, sound quality, and speaker similarity. Moreover, it shows good robustness to multi-source data. [Our submitted system ranks 1st in the S2ST task.]
§ INTRODUCTION
In this paper, we describe NPU-MSXF team's cascaded speech-to-speech translation (S2ST) system submitted to the speech-to-speech (S2S) track[<https://iwslt.org/2023/s2s>] of the IWSLT 2023 evaluation campaign. The S2S track aims to build an offline system that realizes speech-to-speech translation from English to Chinese. Particularly, the track allows the use of large-scale data, including the data provided in this track as well as all training data from the offline track[<https://iwslt.org/2023/offline>] on speech-to-text translation task. Challengingly, the test set contains multi-source speech data, covering a variety of acoustic conditions and speaking styles, designed to examine the robustness of the S2ST system. Moreover, speaker identities conveyed in the diverse multi-source speech test data are unseen during training, which is called zero-shot S2ST and better meets the demands of real-world applications.
Current mainstream S2ST models usually include cascaded and end-to-end systems. Cascaded S2ST systems, widely used in the speech-to-speech translation task <cit.>, usually contain three modules, i.e. automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS). Meanwhile, end-to-end (E2E) S2ST systems <cit.> have recently come to the stage by integrating the above modules into a unified model for directly synthesizing target language speech translated from the source language. E2E S2ST systems can effectively simplify the overall pipeline and alleviate possible error propagation. Cascaded S2ST systems may also alleviate the error propagation problem by leveraging the ASR outputs for MT model fine-tuning. Meanwhile, thanks to the individual training process of sub-modules, cascaded systems can make better use of large-scale text and speech data, which can significantly promote the performance of each module.
In this paper, we build a cascaded S2ST system aiming at English-to-Chinese speech translation with preserving the speaker timbre of the source English speech. The proposed system consists of Conformer-based <cit.> ASR models, a pretrain-finetune schema-based MT model <cit.>, and a VITS-based TTS model <cit.>. For ASR, model fusion and data augmentation strategies are adopted to improve the recognition accuracy and generalization ability of ASR with multi-source input. For MT, we use a three-stage fine-tuning process to adapt the translation model to better facilitate the output of ASR. Meanwhile, back translation and multi-fold verification strategies are adopted. Our TTS module is composed of a text-to-BN stage and a BN-to-speech stage, where speaker-independent neural bottleneck (BN) features are utilized as an intermediate representation bridging the two stages. Specifically, the BN-to-speech module, conditioned on speaker embedding extracted from the source speech, is to synthesize target language speech with preserving the speaker timbre. Combined with a pre-trained speaker encoder to provide speaker embeddings, the TTS model can be generalized to unseen speakers, who are not involved in the training process. Experimental results demonstrate the proposed S2ST system achieves good speech intelligibility, naturalness, sound quality, and speaker similarity.
§ AUTOMATIC SPEECH RECOGNITION
Our ASR module employs multiple models for score fusion in the inference. Moreover, data augmentation is adopted during training to handle noisy multi-source speech.
§.§ Model Structure
Our system employs both Conformer <cit.> and E-Branchformer models <cit.> in our ASR module to address the diversity of the test set. Conformer sequentially combines convolution, self-attention, and feed-forward layers. The self-attention module serves to capture global contextual information from the input speech, while the convolution layer focuses on extracting local correlations. This model has demonstrated remarkable performance in ASR tasks with the ability to capture local and global information from input speech signals. E-Branchformer uses dedicated branches of convolution and self-attention based on the Conformer and applies efficient merging methods, in addition to stacking point-wise modules. E-Branchformer achieves state-of-the-art results in ASR.
§.§ Data Augmentation
Considering the diversity of the testing data, we leverage a variety of data augmentation strategies to expand the training data of our ASR system, including the following aspects.
* Speed Perturbation: We notice that the test set contains spontaneous speech, such as conversations with various speaking speeds, so speed perturbation is adopted to improve the generalization ability of the proposed model. Speed perturbation changes the speed of an audio signal while preserving other information (including pitch) in the audio. We apply speed factors of 0.9, 1.0, and 1.1 to all the training data, where the speed factor refers to the ratio relative to the original speaking speed (a code sketch combining this with the noise augmentation below is given after this list).
* Pitch Shifting: Pitch shifting can effectively vary the speaker identities to increase data diversity. Specifically, we use SoX[<https://sox.sourceforge.net/>] audio manipulation tool to perturb the pitch in the range [-40, 40].
* Noise Augmentation: There are many cases with heavy background noise in the test set, including interfering speakers and music. However, the data set provided by the organizer is much cleaner than the test set, which makes it necessary to augment the training data by adding noises to improve the recognition performance. Since there is no noise set available, we create a noise set from the data provided. A statistical VAD <cit.> is used to cut the non-vocal and vocal segments from the data and the non-vocal segments with energy beyond a threshold comprise our noise set. We add the noise segments to the speech utterances with a signal-to-noise ratio ranging from 0 to 15 dB.
* Audio Codec: Considering the test data come from multiple sources, we further adopt audio codec augmentation to the training data. Specifically, we use the FFmpeg[<https://ffmpeg.org/>] tool to convert the original audio to Opus format at [48, 96, 256] Kbps.
* Spectrum Augmentation:
To prevent the ASR model from over-fitting, we apply the SpecAugment method <cit.> to the input features during every mini-batch training. SpecAugment includes time warping, frequency channel masking, and time step masking, and we utilize all of these techniques during training.
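To make the waveform-level augmentations concrete, the following sketch (our illustration, not the authors' released pipeline; the file names and the 10 dB SNR are placeholder assumptions) shows speed perturbation and noise mixing at a target signal-to-noise ratio with torchaudio:

import torch
import torchaudio

def speed_perturb(waveform, sample_rate, factor):
    # apply the sox "speed" effect, then "rate" to return to the original sample rate
    effects = [["speed", str(factor)], ["rate", str(sample_rate)]]
    perturbed, _ = torchaudio.sox_effects.apply_effects_tensor(waveform, sample_rate, effects)
    return perturbed

def add_noise(speech, noise, snr_db):
    # loop the noise segment if it is shorter than the speech, then trim to length
    if noise.shape[-1] < speech.shape[-1]:
        noise = noise.repeat(1, speech.shape[-1] // noise.shape[-1] + 1)
    noise = noise[..., : speech.shape[-1]]
    speech_power = speech.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp(min=1e-10)
    scale = torch.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

speech, sr = torchaudio.load("utt.wav")            # placeholder paths
noise, _ = torchaudio.load("noise_segment.wav")
augmented = add_noise(speed_perturb(speech, sr, 1.1), noise, snr_db=10.0)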
§.§ Model Fusion
Since a single ASR model may overfit to a specific optimization direction during training, it cannot guarantee good recognition accuracy for the speech of various data distributions. To let the ASR model generalize better to the multi-source input, we adopt a model fusion strategy.
Specifically, we train the Conformer and E-branchformer models introduced in Section 2.1 using the combination of the original and the augmented data. Each testing utterance is then transcribed by these different models, resulting in multiple outputs. Finally, ROVER <cit.> is adopted to align and vote with equal weights on the multiple outputs, resulting in the final ASR output.
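As a rough illustration of the equal-weight voting step (a simplification on our part, not the actual ROVER tool, which first aligns the hypotheses into a word transition network), the snippet below votes over hypotheses that are assumed to be already aligned word-by-word, with empty strings marking deletions:

from collections import Counter

def vote(aligned_hyps):
    result = []
    for position in zip(*aligned_hyps):
        word, _ = Counter(position).most_common(1)[0]
        if word:  # skip positions where the majority vote is a deletion
            result.append(word)
    return result

print(vote([["i", "like", "speech", ""],
            ["i", "like", "speech", "translation"],
            ["i", "liked", "speech", "translation"]]))
# -> ['i', 'like', 'speech', 'translation']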
§.§ ASR Output Post-processing
Given that the spontaneous speech in the test set contains frequent filler words such as "Uh" and "you know", it is necessary to address their impact on subsequent MT accuracy and TTS systems that rely on the ASR output. To mitigate this issue, we use a simple rule-based post-processing step to detect and eliminate these expressions from the ASR output. By doing so, we improve the accuracy of the downstream modules.
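A minimal, hypothetical version of this rule-based step is sketched below; the filler list is illustrative rather than the exact list used in the system:

import re

FILLERS = ["uh", "um", "you know", "i mean"]
_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, FILLERS)) + r")\b,?\s*", re.IGNORECASE)

def remove_fillers(text):
    cleaned = _PATTERN.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(remove_fillers("Uh, I think, you know, the model works"))
# -> "I think, the model works"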
§ MACHINE TRANSLATION
For the MT module, we first use a pre-trained language model as a basis for initialization and then employ various methods to further enhance translation accuracy.
§.§ Pre-trained Language Model
As pre-trained language models are considered part of the training data in the offline track and can be used in the S2ST track, we use the pre-trained mBART50 model for initializing our MT module. mBART50 <cit.> is a multilingual BART <cit.> model with 12 layers of encoder and decoder, which we believe will provide a solid basis for improving translation accuracy.
§.§ Three-stage Fine-tuning Based on Curriculum Learning
We perform fine-tuning on the pre-trained model to match the English-to-Chinese (En2Zh) translation task. There are substantial differences between the ASR outputs and the texts of MT data. First, ASR prediction results inevitably contain errors. Second, ASR outputs are normalized text without punctuation.
Therefore, directly fine-tuning the pre-trained model with the MT data will cause a mismatch problem with the ASR output during inference. On the other hand, fine-tuning the model directly on the ASR outputs will make convergence difficult because of the difference between the ASR outputs and the MT data. Therefore, based on Curriculum Learning <cit.>, we adopt a three-stage fine-tuning strategy to mitigate such a mismatch (a minimal fine-tuning sketch is given after the list below).
* Fine-tuning using the MT data: First, we use all the MT data to fine-tune the pre-trained model to improve the accuracy of the model in the En2Zh translation task.
* Fine-tuning using the MT data in ASR transcription format: Second, we convert the English text in the MT data into the ASR transcription format. Then, we fine-tune the MT model using the converted data, which is closer to the actual text than the ASR recognition output. This approach can enhance the stability of the fine-tuning process, minimize the impact of ASR recognition issues on the translation model, and improve the model's ability to learn punctuation, thereby enhancing its robustness.
* Fine-tuning using the ASR outputs: Third, we leverage GigaSpeech <cit.> to address the mismatch problem between the ASR outputs and the MT data. Specifically, we use the ASR module to transcribe the GigaSpeech training set and replace the corresponding transcriptions in GigaST <cit.> with the ASR transcriptions for translation model fine-tuning. This enables the MT model to adapt to ASR errors.
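The sketch below illustrates what a single fine-tuning stage could look like with HuggingFace Transformers, starting from the pre-trained mBART-50 checkpoint; the data file name, field names, and hyperparameter values are assumptions for illustration, not the authors' exact training script:

from datasets import load_dataset
from transformers import (DataCollatorForSeq2Seq, MBart50TokenizerFast,
                          MBartForConditionalGeneration, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX", tgt_lang="zh_CN")
model = MBartForConditionalGeneration.from_pretrained(model_name)

# assumed JSON lines with {"en": ..., "zh": ...} records
raw = load_dataset("json", data_files={"train": "en2zh_train.json"})

def preprocess(batch):
    return tokenizer(batch["en"], text_target=batch["zh"], max_length=128, truncation=True)

train_set = raw["train"].map(preprocess, batched=True, remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(output_dir="mbart50_en2zh_stage1",
                                per_device_train_batch_size=32,
                                learning_rate=3e-5,
                                warmup_ratio=0.1,
                                num_train_epochs=20)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_set,
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
                         tokenizer=tokenizer)
trainer.train()

The later stages would reuse the same script, swapping in the ASR-transcription-format text and the ASR-decoded GigaSpeech transcripts as the source side.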
§.§ Back Translation
Following <cit.>, we adopt the back translation method to enhance the data and improve the robustness and generalization of the model. First, we train a Zh2En MT model to translate Chinese to English, using the same method employed for the En2Zh MT module. Next, we generate the corresponding English translations for the Chinese text of the translation data. Finally, we combine the back translation parallel corpus pairs with the real parallel pairs and train the MT model.
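For example, synthetic English sources could be generated from monolingual Chinese text with the trained Zh2En model as follows (the checkpoint path and example sentences are placeholders):

import torch
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

tok = MBart50TokenizerFast.from_pretrained("zh2en_mbart50", src_lang="zh_CN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("zh2en_mbart50").eval()

zh_sentences = ["今天天气很好", "我们提出了一个级联系统"]
with torch.no_grad():
    inputs = tok(zh_sentences, return_tensors="pt", padding=True)
    out = model.generate(**inputs, forced_bos_token_id=tok.lang_code_to_id["en_XX"],
                         num_beams=5, max_length=128)
synthetic_en = tok.batch_decode(out, skip_special_tokens=True)
bt_pairs = list(zip(synthetic_en, zh_sentences))  # (synthetic EN source, real ZH target)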
§.§ Cross-validation
We use 5-fold cross-validation <cit.> to improve the robustness of translation and reduce over-fitting. Firstly, we randomly divide the data into five equal parts and train five models on different datasets by using one of them as the validation set each time and combining the remaining four as the training set. After that, we integrate the predicted probability distributions from these five models to obtain the final predicted probability distribution for the next word during token generation for predicting the translation results.
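A simplified sketch of the decoding-time ensemble (greedy decoding, glossing over mBART's target-language-token handling) averages the next-token distributions of the K=5 fine-tuned models at every step:

import torch

@torch.no_grad()
def ensemble_greedy_decode(models, tokenizer, src_text, max_len=128):
    enc = tokenizer(src_text, return_tensors="pt")
    enc_outs = [m.get_encoder()(**enc) for m in models]
    ys = torch.tensor([[models[0].config.decoder_start_token_id]])
    for _ in range(max_len):
        step_probs = []
        for m, eo in zip(models, enc_outs):
            logits = m(encoder_outputs=eo, attention_mask=enc["attention_mask"],
                       decoder_input_ids=ys).logits[:, -1, :]
            step_probs.append(logits.softmax(dim=-1))
        # average the probability distributions, then pick the most likely token
        next_id = torch.stack(step_probs).mean(dim=0).argmax(dim=-1, keepdim=True)
        ys = torch.cat([ys, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ys[0], skip_special_tokens=True)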
§ TEXT-TO-SPEECH
§.§ Overview
Figure <ref> (a) shows the pipeline of the text-to-speech module in the proposed S2ST system. The TTS module is built on a BN-based two-stage architecture, which consists of a text-to-BN and a BN-to-speech procedure. The text-to-BN stage tends to generate BN features from the Chinese text translated by the MT module. The BN-to-speech stage produces 16KHz Chinese speech from the BN feature, conditioning on the speaker embedding of source speech. Given the translated Chinese speech which preserves the speaker timbre in the source English speech, an audio super-resolution model is further leveraged to convert the synthesized speech from 16KHz to 24KHz for higher speech fidelity.
Building on the two-stage framework AdaVITS <cit.>, we employ bottleneck (BN) features as the intermediate representations in the two-stage TTS module. BN features, extracted from a multi-condition trained noise-robust ASR system, mainly represent the speaker-independent linguistic content. So BN can effectively disentangle the speaker timbre and the linguistic content information.
In the text-to-BN stage, high-quality TTS data is adopted in the training phase to model the speaker-independent BN features with prosody information. In the BN-to-speech stage, both high-quality TTS data and low-quality ASR data should be involved during training to sufficiently model the speech of various speaker identities. Extracted from speech, BN features contain the duration and prosody information, which eliminates the need for text transcripts and prosody modeling. Instead, the BN-to-speech stage focuses on time-invariant information modeling, such as speaker timbre.
As the goal of this work is to conduct zero-shot English-to-Chinese speech translation, we concentrate on the method to transfer the unseen speaker timbre of the source English speech to the synthesized Chinese speech through voice cloning <cit.>. To capture new speaker timbre during inference, the TTS module requires to model abundant various speakers during training, which relies on large-scale high-quality TTS data.
Unfortunately, we are limited in the high-quality TTS data we can use in this task and must rely on additional data such as ASR to model the speaker timbre. However, this data is not suitable for TTS model training because the labels are inconsistent with TTS, and the prosody of the speakers is not as good as high-quality TTS data.
Furthermore, we incorporate ASR data into the BN-to-speech training procedure by re-sampling all the training speech to 16 kHz, which limits the achievable audio quality. Therefore, we utilize audio super-resolution techniques to upsample the synthesized 16 kHz audio to a higher sampling rate.
§.§ Text-to-BN
Our text-to-BN stage network in TTS is based on DelightfulTTS <cit.>, which employs a Conformer-based encoder, decoder, and a variance adapter for modeling duration and prosody. The model extends phoneme-level linguistic features to frame-level to guarantee the clarity and naturalness of speech in our system.
§.§ BN-to-speech
We build the BN-to-speech model based on VITS <cit.>, which is a mainstream end-to-end TTS model. VITS generates speech waveforms directly from the input textual information, rather than a conventional pipeline of using the combination of an acoustic model and a neural vocoder.
The network of the BN-to-speech stage consists of a BN encoder, posterior encoder, decoder, flow, and speaker encoder. The monotonic alignment search (MAS) from the original VITS is removed since BN features contain the duration information. For achieving zero-shot voice cloning, an ECAPA-TDNN <cit.> speaker encoder is pre-trained to provide the speaker embedding as the condition of the synthesized speech. To avoid periodic signal prediction errors in the original HiFiGAN-based <cit.> decoder in VITS, which induces sound quality degradation, we follow VISinger2 <cit.> to adopt a decoder with the sine excitation signals. Since the VISinger2 decoder requires pitch information as input, we utilize a pitch predictor with a multi-layer Conv1D that predicts the speaker-dependent pitch from BN and speaker embedding. With the desired speaker embedding and corresponding BN features, the BN-to-speech module produces Chinese speech in the target timbre.
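As an illustration of the pitch predictor (the dimensions and layer sizes below are assumptions, not the exact configuration), a multi-layer Conv1D head mapping frame-level BN features plus a broadcast speaker embedding to a per-frame pitch value could look like:

import torch
import torch.nn as nn

class PitchPredictor(nn.Module):
    def __init__(self, bn_dim=256, spk_dim=192, hidden=256, n_layers=4):
        super().__init__()
        layers, in_dim = [], bn_dim + spk_dim
        for _ in range(n_layers):
            layers += [nn.Conv1d(in_dim, hidden, kernel_size=5, padding=2),
                       nn.ReLU(), nn.Dropout(0.1)]
            in_dim = hidden
        self.convs = nn.Sequential(*layers)
        self.proj = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, bn, spk_emb):
        # bn: (batch, frames, bn_dim); spk_emb: (batch, spk_dim)
        spk = spk_emb.unsqueeze(1).expand(-1, bn.size(1), -1)
        x = torch.cat([bn, spk], dim=-1).transpose(1, 2)  # (batch, channels, frames)
        return self.proj(self.convs(x)).squeeze(1)        # (batch, frames)

f0 = PitchPredictor()(torch.randn(2, 120, 256), torch.randn(2, 192))  # -> shape (2, 120)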
§.§ Audio Super-resolution
Following <cit.>, we use an upsampling network based vocoder to achieve audio super-resolution (16kHz→24kHz). During training, the 16KHz mel-spectrogram is used as the condition to predict the 24KHz audio in the audio super-resolution model. Specifically, we adopt the AISHELL-3 <cit.> dataset, composing the paired 16KHz and 24KHz speech data for model training. During inference, the high-quality 24kHz speech is produced for the mel-spectrogram of the 16KHz speech generated by the BN-to-speech model. Here DSPGAN <cit.> is adopted as our audio super-resolution model, which is a universal vocoder that ensures robustness and good sound quality without periodic signal errors.
§ DATA PREPARATION
§.§ Datasets
Following the constraint of data usage, the training dataset for the S2ST system is illustrated in Table <ref>.
<https://github.com/SpeechTranslation/GigaS2S>
§.§.§ ASR Data
For the English ASR module in our proposed system, we use GigaSpeech, LibriSpeech, TED-LIUM v2&v3 as training data. For the ASR system used to extract BN features in TTS, we use text-to-speech data in AISHELL-3 and Chinese speech in GigaS2S, along with the corresponding Chinese text in GigaST, as the training set. Since the test set's MT output text is a mix of Chinese and English, including names of people and places, the TTS module needs to support both languages. Therefore, we also add the aforementioned English data to the training set.
§.§.§ MT Data
We use the text-parallel data including News Commentary and OpenSubtitles2018 as MT training set. Moreover, we also add the Chinese texts in GigaST and the English texts in GigaSpeech corresponding to the Chinese texts in GigaST to the training set.
§.§.§ TTS Data
We use AISHELL-3 as training data in Text-to-BN and audio super-resolution. For the pre-trained speaker encoder, we adopt LibriSpeech, which contains 1166 speakers, as the training data.
For the BN-to-speech model, in addition to using AISHELL-3 which has 218 speakers, we also use LibriSpeech to meet the data amount and speaker number requirements of zero-shot TTS.
§.§ Data Pre-processing
§.§.§ ASR Data
To prepare the ASR data, we pre-process all transcripts to remove audio-related tags. Next, we map the text to the corresponding byte-pair encoding (BPE) unit and count the number of BPE units in the ASR dictionary, which totals 5,000 units. For audio processing, we use a frame shift of 10ms and a frame length of 25ms and normalize all audio to 16KHz.
§.§.§ MT Data
For the MT data, we use the same tokenizer as mBART50 to perform sub-word segmentation for English and Chinese texts and to organize them into a format for neural network training. By doing so, we can maximize the benefits of initializing our translation model with mBART50 pre-trained model parameters. The mBART tokenizer mentioned above is a Unigram tokenizer. A Unigram model is a type of language model that considers each token to be independent of the tokens before it. What’s more, the tokenizer has a total of 250,054 word segmentations, supports word segmentation processing for English, Chinese, and other languages, and uses special tokens like <s>, </s>, and <unk>.
§.§.§ TTS Data
For AISHELL-3, we downsample it to 16KHz and 24KHz respectively as the TTS modeling target and the audio super-resolution modeling target. All other data is down-sampled to 16KHz. All data in TTS adopts 12.5ms frame shift and 50ms frame length.
Speech Enhancement.
Given the presence of substantial background noise in the test set, the discriminative power of speaker embeddings is significantly reduced, thereby impeding the performance of the TTS module. Furthermore, the ASR data incorporated during the training of the BN-to-speech model is also subject to background noise. Therefore, we employ a single-channel Wiener filtering method <cit.> to remove such noise from these data. Please note that we do not perform speech enhancement on the test set in the ASR module, because there is a mismatch between the denoised audio and the audio used in ASR training, and denoising would therefore reduce the speech recognition accuracy.
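For illustration only, a generic spectral Wiener-gain denoiser (not necessarily the exact enhancement front-end used here; the noise estimate from the first 20 frames and the FFT sizes are assumptions) could be written as:

import numpy as np
import librosa
import soundfile as sf

def wiener_denoise(wav, n_fft=512, hop=128, noise_frames=20):
    spec = librosa.stft(wav, n_fft=n_fft, hop_length=hop)
    power = np.abs(spec) ** 2
    noise_power = power[:, :noise_frames].mean(axis=1, keepdims=True)  # crude noise estimate
    gain = np.maximum(power - noise_power, 1e-10) / np.maximum(power, 1e-10)
    return librosa.istft(gain * spec, hop_length=hop, length=len(wav))

wav, sr = librosa.load("noisy_utt.wav", sr=16000)   # placeholder path
sf.write("enhanced_utt.wav", wiener_denoise(wav), sr)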
§.§.§ Evaluation Data
For all evaluations, we use the English-Chinese (En-Zh) development data divided by the organizer from GigaSpeech, GigaST and GigaS2S, including 5,715 parallel En-Zh audio segments, and their corresponding En-Zh texts. It is worth noting that the development data for evaluations has been removed from the training dataset.
§ EXPERIMENTS
§.§ Experimental Setup
All the models in our system are trained on 8 A100 GPUs and optimized with Adam <cit.>.
ASR Module. All ASR models are implemented in ESPnet[<https://github.com/espnet/espnet>]. Both Conformer and E-Branchformer models employ an encoder with 17 layers and a feature dimension of 512, with 8 heads in the self-attention mechanism and an intermediate hidden dimension of 2048 for the FFN. In addition, we employ a 6-layer Transformer decoder with the same feature hidden dimension as the encoder. The E-Branchformer model uses a cgMLP with an intermediate hidden dimension of 3072. The total number of parameters for the Conformer and E-Branchformer model in Section 2.1 is 147.8M and 148.9M respectively. We train the models with batch size 32 sentences per GPU for 40 epochs, and set the learning rate to 0.0015, the warm-up step to 25K.
For data augmentation, we conduct speed perturbation, pitch shifting, and audio codec on the original recordings. Spectrum augmentation and noise augmentation are used for on-the-fly model training.
MT Module. All MT models are implemented in HuggingFace[<https://github.com/huggingface/transformers>]. Using MT data, we fine-tune the mBART-50 large model, which has 611M parameters, with a batch size of 32 sentences per GPU for 20 epochs. The learning rate is set to 3e-5 and warmed up for the first 10% of updates and linearly decayed for the following updates. For fine-tuning using the MT data in ASR transcription format and the ASR outputs, we also fine-tune the model with batch size 32 sentences per GPU for 5 epochs and set the learning rate to 3e-5, which is warmed up for the first 5% of updates and linearly decayed for the following updates.
TTS Module.
We complete our system based on VITS official code[<https://github.com/jaywalnut310/vits>]. The text-to-BN follows the configuration of DelightfulTTS and has about 64M parameters. To extract the duration required for text-to-BN, we train a Kaldi[<https://github.com/kaldi-asr/kaldi>] model using AISHELL-3. The ASR system used for extracting BN is the Chinese-English ASR model mentioned in Section 5.1.1. For BN-to-speech, we use a 6-layer FFT as the BN encoder and follow the other configuration in VIsinger2 with about 45M parameters in total. The pitch predictor has 4 layers of Conv1D with 256 channels. Pitch is extracted by Visinger2 decoder and DSPGAN from Harvest <cit.> with Stonemask. To predict pitch in DSPGAN, we use the method described in Section 4.3. Up-sampling factors in DSPGAN is set as [5, 5, 4, 3] and other configuration of DSPGAN-mm is preserved for audio super-resolution. The DSPGAN model has about 9M parameters in total. We train all the above models with a batch size of 64 sentences per GPU for 1M steps and set the learning rate to 2e-4. For the pre-trained speaker encoder, we follow the model configuration and training setup of ECAPA-TDNN (C=1024) with 14.7M parameters.
§.§ Evaluation Models
Baseline. To evaluate the effectiveness of the proposed cascaded S2ST system, we adopt the original cascaded S2ST system as a baseline, including an E-Branchformer ASR model, a mBART50 MT model fine-tuned using the MT data, and an end-to-end TTS model based on VITS trained with AISHELL-3.
Proposed system & Ablation study.
We further conduct ablation studies to evaluate each component in the proposed system. Specifically, the ablation studies are designed to verify the effectiveness of model fusion and data augmentation in ASR, three-stage fine-tuning, back translation, cross-verification in MT, two-stage training with BN, pre-trained speaker embedding, and audio super-resolution in TTS.
§.§ Results & Analysis
We conduct experiments on the effectiveness of each sub-module and the performance of our proposed cascaded S2ST system.
§.§.§ ASR Module
We calculate the word error rate (WER) of each ASR module to evaluate the English speech recognition accuracy. As shown in Table <ref>, the WER of the proposed system has a significant drop compared with the baseline, which indicates that the proposed system greatly improves the recognition accuracy. Moreover, the results of the ablation study demonstrate the effectiveness of both model fusion and data augmentation in improving speech recognition accuracy.
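For reference, the word error rate can be computed with a standard scorer such as jiwer (an illustrative snippet, not the official scoring script):

import jiwer

refs = ["the model fusion improves recognition accuracy"]
hyps = ["the model fusion improve recognition accuracy"]
print(f"WER = {jiwer.wer(refs, hyps):.3f}")  # 0.167 (one substitution over six reference words)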
§.§.§ MT Module
We evaluate our MT module in terms of the BLEU score, which measures the n-gram overlap between the predicted output and the reference sentence.
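As an illustration, BLEU for Chinese output can be computed with sacrebleu using its character-aware "zh" tokenization (the hypothesis and reference below are placeholders):

import sacrebleu

hyps = ["今天天气很好"]
refs = [["今天天气真好"]]  # one reference stream
print(sacrebleu.corpus_bleu(hyps, refs, tokenize="zh").score)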
As shown in Table <ref>, the proposed system with three-stage fine-tuning achieves a significantly better BLEU score than the baseline, demonstrating the effectiveness of curriculum learning in our scenario. Furthermore, by incorporating back translation and cross-validation, the translation performance can be further improved.
§.§.§ TTS Module
We calculate the character error rate (CER) to evaluate the clarity of speech for each TTS module. The ASR system used for calculating CER is the Chinese-English ASR model mentioned in Section 5.1.1. Additionally, we conduct mean opinion score (MOS) tests with ten listeners rating each sample on a scale of 1 (worst) to 5 (best) to evaluate naturalness, sound quality, and speaker similarity.
In the ablation study without pre-trained speaker embedding, speaker ID is to control the speaker timbre of the synthesized speech. To eliminate the influence of ASR and MT results on TTS evaluation, we use the Chinese text in the evaluation data and its corresponding English source speech as the reference of speaker timbre as the test set for TTS evaluation.
As shown in Table <ref>, our proposed system has achieved significant improvement in naturalness, sound quality, speaker similarity, and clarity of speech compared with the baseline. Interestingly, the system without pre-trained speaker embedding has better sound quality than both the proposed system and recording. We conjecture the reason is that the pre-trained speaker embedding greatly influences the sound quality in the zero-shot TTS setup. Therefore, the quality of the synthesized 24KHz audio is superior to the 16KHz recording, which can be demonstrated by the 3.64 MOS score of the system without audio super-resolution. Meanwhile, the speaker similarity MOS score is very low due to the lack of generalization ability to unseen speakers.
Without using the BN-based two-stage model, the system decreases performance on all indicators, which shows the effectiveness of BN as an intermediate representation in our experimental scenario.
§.§.§ System Evaluation
Finally, we calculate the ASR-BLEU score for the baseline and the proposed system to evaluate the speech-to-speech translation performance. Specifically, we use the ASR system to transcribe the Chinese speech generated by TTS, and then compute the BLEU scores of the ASR-decoded text with respect to the reference English translations. The ASR system for transcribing Chinese speech is the same as that in Section 6.2.3.
As shown in Table <ref>, our proposed system achieves a higher ASR-BLEU score than the baseline, which indicates that our proposed system has good speech-to-speech translation accuracy.
§ CONCLUSION
This paper describes the NPU-MSXF speech-to-speech translation system, which we develop for the IWSLT 2023 speech-to-speech translation task. Our system is built as a cascaded system that includes ASR, MT, and TTS modules. To ensure good performance with multi-source data, we improved each module using various techniques such as model fusion and data augmentation in the ASR, three-stage fine-tuning, back translation, and cross-validation in the MT, and two-stage training, pre-trained speaker embedding, and audio super-resolution in the TTS. Through extensive experiments, we demonstrate that our system achieves high translation accuracy, naturalness, sound quality, and speaker similarity with multi-source input.
§ APPENDIX
We present the official results, which include our submitted system and those of other teams. As shown in Table <ref>, our system ranks 1st in speech quality score and 2nd in translation quality score. By equally weighting translation quality and speech quality, our submitted system achieves the highest overall score in human evaluation. Although the organizers provide both automatic and human evaluation scores, the systems are ranked based on human evaluation. Consequently, our submitted system ranks 1st in the S2ST task of the IWSLT 2023 evaluation campaign. Additionally, as illustrated in Table <ref>, we rank 2nd and closely follow the 1st place in automatic evaluation, which evaluates translation accuracy. Our system employs zero-shot voice cloning, which may result in a slight loss of sound quality and speech clarity. We believe our automatic evaluation results could be better without using zero-shot voice cloning. However, this trade-off allows us to achieve a significant improvement in speaker timbre similarity and naturalness.
|
http://arxiv.org/abs/2307.04351v1 | 20230710052343 | MD-HIT: Machine learning for materials property prediction with dataset redundancy control | [
"Qin Li",
"Nihang Fu",
"Sadman Sadeed Omee",
"Jianjun Hu"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cs.LG"
] |
Materials datasets are usually characterized by the existence of many redundant (highly similar) materials due to the tinkering material design practice over the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO_3. This sample redundancy within the dataset makes random splitting for machine learning model evaluation fail, so that ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the field of bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-HIT <cit.>) is always applied to reduce the sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold. This paper surveys the overestimated ML performance in the literature for both composition based and structure based material property prediction. We then propose a material dataset redundancy reduction algorithm called MD-HIT and evaluate it with several composition and structure based distance thresholds for reducing dataset sample redundancy. We show that with this control, the predicted performance tends to better reflect the true prediction capability of the models. Our MD-HIT code can be freely accessed at <https://github.com/usccolumbia/MD-HIT>
§ INTRODUCTION
Density functional theory (DFT) level accuracy of material property prediction <cit.> and >0.95 R^2 for thermal conductivity prediction <cit.> with less than a hundred training samples have been routinely reported recently by an increasing list of machine learning algorithms in the material informatics community. In <cit.>, an AI model was shown to be able to predict formation energy of a hold-out test set containing 137 entries from their structure and composition with a mean absolute error (MAE) of 0.064 eV/atom which significantly outperform the performance of DFT computations for the same task (discrepancies of >0.076 eV/atom). In another related work in Nature Communication by the same group <cit.>, a mean absolute error (MAE) of 0.07 eV/atom was achieved for composition only based formation energy prediction using deep transfer learning, which is comparable to the MAE of DFT-computation. Pasini et al <cit.> reported that their multitasking neural networks can estimate the material properties (total energy, charge density and magnetic moment) for a specific configuration hundreds of times faster than first-principles DFT calculations while achieving comparable accuracy. In <cit.>, the authors claimed their graph neural network models can predict the formation energies, band gaps, and elastic moduli of crystals with better than DFT accuracy over a much larger data set. In <cit.>, Farb et al. showed numerical evidence that ML model predictions deviate from DFT less than DFT deviates from experiment for all nine properties that they evaluated over the QM9 molecule dataset. They also claimed the out-of-sample prediction errors with respect to hybrid DFT reference were on par with, or close to, chemical accuracy. In <cit.>, Tian et al reported that current ML models can achieve accurate property-prediction (formation energy, band gap, bulk and shear moduli) using composition alone without using structure information, especially for for compounds close to the thermodynamic convex hull. However, this good performance may be partially due to the over-represented redundancy in their test samples obtained with 6:2:2 random selection from matminer datasets without redundancy control. To illustrate this point, Figure <ref> shows the formation energy and band gap landscape over the MP composition space, which is generated by mapping the Magpie features of all MP unique compositions to the 2D space using t-SNE and then plot the surface. Both figures show that there exist a large number of local areas with smooth or similar property values. Random splitting of samples in those areas into training and test sets may lead to information leakage and over-estimation of the prediction performance.
Despite these encouraging successes, the DFT accuracy reports of these ML models for material property prediction should be cautiously interpreted as they are all average performance evaluated over mostly randomly held-out samples that come from unexpectedly highly redundant datasets. Materials databases such as Material Project and OQMD are characterized by the existence of many redundant (highly similar) materials due to the tinkering material design practice over the history of materials research. For example, the materials project database has many perovskite cubic structure materials similar to SrTiO_3. This sample redundancy within the dataset makes the random splitting of machine learning model evaluation to fail so that the ML models tend to achieve over-estimated predictive performance which is misleading for the materials science community. This issue is well known in the area of ecology <cit.> and bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-Hit <cit.>) is required to reduce the sample redundancy by ensuring no pair of samples has a sequence similarity greater than a given threshold e.g. 95% sequence identity. In a recent work in 2023, it was also shown that excellent benchmark score may not imply good generalization performance <cit.>.
The over-estimation of ML performance for materials has been investigated in a few studies. In <cit.>, Meredig et al. examined the extrapolation performance of ML methods for materials discovery. They found that traditional ML metrics (even with cross-validation (CV)) overestimate model performance for materials discovery and introduced leave-one-(material)-cluster-out cross-validation (LOCO CV) to objectively evaluate the extrapolation performance of ML models. They especially highlighted that materials scientists often intend to extrapolate with trained ML models, rather than interpolate, to find new functional materials, and that sampling in materials training data is typically highly non-uniform. So the high interpolation performance of ML models trained on datasets with high sample redundancy (e.g. due to doping) does not indicate a strong capability to discover new materials (out-of-domain (OOD) samples). They showed that current ML models have much higher difficulty generalizing from the training clusters to a distinct test cluster. They suggested the use of uncertainty quantification (UQ) on top of ML models to evaluate and explore candidates in new regions of design space. Stanev et al. <cit.> also discussed this generalization issue across different superconductor families. In <cit.>, Xiong et al. propose K-fold forward cross-validation (FCV) as a new way of evaluating exploration performance in materials property prediction by first sorting the samples by their property values before CV splitting. They showed that the prediction performance of current ML models was actually very low as measured by their proposed
FCV evaluation method and the proposed exploratory prediction accuracy. A similar study for thermal conductivity prediction <cit.> also showed that when ML models are trained with low property values, they are usually not good at predicting samples with high property values, indicating the weak extrapolation capability. These studies show the need for the material property model developers to focus more on extrapolative prediction performance rather than average interpolation performance over test samples with high similarity to training samples due to dataset redundancy.
The material datasets redundancy issue has also been studied recently from the point of view of training efficient ML models or achieving sample efficiency. In <cit.>, Magar and Farimani proposed an adaptive sampling strategy to generate/sample informative samples for training machine learning models in the lowest amounts of data. They assumed that informative samples for a model are those with the highest K(e.g. 250) MAE in the test set, which are added to the initial 1000 training set iteratively. Another selection approach is to add samples similar to data points of the train set having the maximum MAE during training. They showed that their sampling algorithms can create smaller training sets that obtain better performance than the baseline CGCNN model trained with all training samples. This approach can be used with active learning to build high performance ML models in a data efficient way. In a more recent work <cit.>, Li et al. studied the redundancy in large material datasets and found that a significant degree of redundancy across multiple large datasets is present for various material properties and that up to 95% of data can be removed from ML model training with little impact on prediction performance for test sets sampled randomly from the same distribution dataset. They further showed that the redundant data is due to over-represented material types and does not help improve the low performance on out-of-distribution samples. They proposed a pruning algorithm similar to <cit.> which first splits the training set into A and B and then train a ML model on A and evaluates the prediction errors on samples in B. After that the test samples with low MAEs are pruned and the remaining samples are merged and split into A and B again and so on. Both approaches rely on the iterative training of ML models and are specific to a given material property. The also proposed an uncertainty quantification based active learning method to generate sample efficient training set for model training. While these works recognize the possibility to build data-efficient training set, they did not mention the how redundancy can affect the over-estimated ML model performance commonly seen in literature. Moreover, all approaches for building informative training set are material property specific, making it difficult to generate a single non-redundant benchmark dataset for benchmarking material property prediction algorithms for all material properties. Another limitation of these methods is that they show different similarity thresholds when applied to different datasets, which makes the resulting non-redundant datasets to have different minimum distances among the samples.
Since material property prediction research is now pivoting toward developing ML models with high accuracy, that are generalizable and transferable between different materials (including materials of different families), healthy evaluation of ML algorithms is needed to recognize the limitation of existing ML models and to invent new models with essential process. Within this context, reducing the dataset redundancy of both training set and test sets can avoid the over-estimation of the ML model performance, ameliorate the training bias towards samples in crowded areas, and push the model developers to focus on improving extrapolation performance instead of only interpolation performance.
In this paper, we argue for the importance of redundancy control in training and test set selection to achieve objective performance evaluation. Neglecting this has led to many overestimated ML performances reported in the literature for both composition based and structure based material property prediction. We then conduct ML experiments to show that the over-estimated models usually fail for samples that are distant from the training samples (lack of extrapolation performance). We then develop two redundancy-reducing algorithms (MD-HIT-composition and MD-HIT-structure) with open-sourced code for reducing the dataset redundancy of both composition datasets and structure datasets. These two algorithms are based on composition and structure based distance metrics, which are used to add only samples that lie above a defined distance threshold from those already selected. After this data redundancy control, the dataset can then be split randomly into training, validation, and test sets to achieve objective performance evaluation. We show that with this dataset redundancy control, the predicted performance tends to reflect the true prediction capability of the models.
§ METHOD
§.§ MD-HIT-composition algorithm for redundancy reduction of composition datasets
The early version of CD-HIT algorithm <cit.> of bioinformatics was originally developed to handle large-scale sequence datasets efficiently. It employs a clustering approach to group similar sequences together based on a defined sequence identity threshold. Within each cluster, only one representative sequence, called the "centroid," is retained, while the rest of the highly similar sequences are considered duplicates and removed. However, the clustering approach is still inefficient to deal with datasets with hundreds of thousands of sequences. The next generation of CD-HIT further improved the efficiency by using a greedy algorithm <cit.>.
Both of our MD-HIT-composition and MD-HIT-structure redundancy reduction algorithms are designed based on this idea, which are greedy incremental algorithms. In our case, the MD-HIT starts the selection process with a seed material (default to be H2O). And then it sorts the remaining materials by the number of atoms instead of the formula lengths and then one-by-one classifies them as a redundant or representative material based on its similarities to the existing representatives already selected into the cluster. The composition similarities are estimated using the ElMD (Earth Movers' Distance) package, which provides the options to choose linear, chemically derived, and machine learned similarity measures. By default, we used the mendeleev similarity and the magpie similarity <cit.> for our non-redundant composition dataset generation. The magpie distance function is defined as the Euclidean distance of a given set of material composition magpie feature vectors such as the widely used magpie features <cit.>. In the matminer materials informatics package, there are several other material composition descriptors that can also be used as well. Here we focused on ElMD and the magpie feature based distance function for redundancy control of composition datasets for materials property prediction.
The complete composition similarity metrics can be found in Table <ref>.
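A schematic re-implementation of this greedy selection (our sketch, not the released MD-HIT code) is shown below; distance_fn can be any composition distance, e.g. the ElMD Earth Mover's Distance or a Euclidean distance over Magpie feature vectors:

def md_hit_select(formulas, distance_fn, threshold, seed="H2O", sort_key=None):
    candidates = [f for f in formulas if f != seed]
    if sort_key is not None:              # e.g. sort by the number of atoms in the formula
        candidates = sorted(candidates, key=sort_key)
    representatives = [seed]
    for formula in candidates:
        # keep a candidate only if it is farther than `threshold`
        # from every representative selected so far
        if all(distance_fn(formula, rep) > threshold for rep in representatives):
            representatives.append(formula)
    return representatives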
§.§ MD-HIT-Structure algorithm for redundancy reduction of structure datasets
The MD-HIT-structure algorithm uses the same greedy adding approach as MD-HIT-composition except that it uses a structure based distance metric. However, due to the varying number of atoms in different crystals, it is non-trivial to compare the similarity of two given structures, since most structure descriptors have different dimensions for structures with different numbers of atoms. Here we chose two structure distances for redundancy reduction. One is a distance metric based on XRD features calculated from crystal structures: we first smooth the XRD pattern calculated with the Pymatgen XRDCalculator module using a Gaussian smoothing operation and then sample 900 points evenly distributed between 0 and 90 degrees, which leads to XRD features of a fixed dimension of 900.
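An illustrative construction of this fixed-length XRD descriptor is sketched below (the Gaussian width sigma and the normalization are assumptions on our part):

import numpy as np
from pymatgen.core import Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator

def xrd_fingerprint(cif_path, n_points=900, sigma=0.3):
    structure = Structure.from_file(cif_path)
    pattern = XRDCalculator().get_pattern(structure, two_theta_range=(0, 90))
    grid = np.linspace(0, 90, n_points)
    profile = np.zeros(n_points)
    for two_theta, intensity in zip(pattern.x, pattern.y):
        profile += intensity * np.exp(-(grid - two_theta) ** 2 / (2 * sigma ** 2))
    return profile / max(profile.max(), 1e-8)

# Euclidean distance between two structures' fingerprints
dist = np.linalg.norm(xrd_fingerprint("mp-1.cif") - xrd_fingerprint("mp-2.cif"))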
We also selected the OrbitalFieldMatrix feature to calculate the distances of two structures. This feature has also been used in <cit.> to select informative samples for ML model training. It is a set of descriptors that encode the electronic structure of a material. These features provide information about the distribution of electrons in different atomic orbitals within a crystal structure. These features provide a comprehensive representation of the electronic structure and bonding characteristics of materials and is of fixed dimension (1024).
Similar to the MD-Hit-composition, MD-Hit-structure algorithm also starts the selection process with a seed material (default to be H2O) put in the non-redundant set. And then it sorts the remaining materials in the candidate set by the number of atoms instead of the formula lengths and then one-by-one classifies them as a redundant or representative material based on its similarities (we use Euclidean distance of XRD features or OFM features) to the existing representatives already selected into the non-redundant set. Redundant samples are discarded while non-redundant ones are added to the non-redundant set until the candidate set is empty.
§.§ Composition based materials property prediction algorithms
We evaluate two state-of-the-art composition based material property prediction algorithms including Roost <cit.> and Crabnet (the Compositionally Restricted Attention-Based network)<cit.> to study the impact of dataset redundancy on their performance. The Roost algorithm is a machine learning approach specifically designed for materials property prediction based on the material composition. It utilizes a graph neural network framework to learn relationships between material compositions and their corresponding properties. CrabNet is a transformer self-attention based model for composition only material property prediction. It matches or exceeds current best-practice methods on nearly all of 28 total benchmark datasets.
§.§ Structure based material property prediction algorithms
We evaluate two state-of-the-art structure based material property prediction algorithms including ALIGNN (Atomistic Line Graph Neural Network)<cit.> and DeeperGATGNN<cit.> to study the impact of dataset redundancy on their performance. The ALIGNN model addresses a major limitation of the majority of current Graph Neural Network (GNN) models used for atomistic predictions, which only rely on atomic distances while overlooking the bond angles. Actually bond angles play a crucial role in distinguishing various atomic structures and small deviations in bond angles can significantly impact several material properties. ALIGNN is a GNN architecture that conducts message passing on both the interatomic bond graph and its corresponding line graph specifically designed for bond angles. It has achieved state-of-art performances in most benchmark problems of the matbench <cit.>. The DeeperGATGNN algorithm is a global attention based graph neural network that uses differentiable group normalization and residual connection to achieve high performance deep graph neural networks without performance degradation. It has achieved superior results as shown in a set of material property predictions.
§.§ Evaluation criteria
We use the following performance metrics for evaluating the impact of dataset redundancy on model performance: Mean Absolute Error (MAE), R-squared (R^2), and Root Mean Squared Error (RMSE).
Mean Absolute Error (MAE):
MAE = 1/n∑_i=1^n| y_i - ŷ_i |
R-squared (R^2):
R^2 = 1 - ∑_i=1^n (y_i - ŷ_i)^2/∑_i=1^n (y_i - y̅)^2
Where y_i represents the observed or true values, ŷ_i represents the predicted values, and y̅ represents the mean of the observed values. The summation symbol ∑ is used to calculate the sum of values, and n represents the number of data points in the dataset.
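These metrics can be computed directly with scikit-learn, for example (toy values for illustration only):

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([-1.20, -0.85, 0.10, 0.45])   # e.g. formation energies (eV/atom)
y_pred = np.array([-1.05, -0.90, 0.25, 0.30])
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(mae, r2, rmse)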
§ RESULTS
§.§ Datasets generation
We downloaded 125,619 CIF structures from the Materials Project database, which contain 89,354 unique compositions. For compositions that correspond to multiple polymorphs, we choose the average material property value as the default property value for that composition, except for the formation energy, for which we use the minimum value. We also dropped mp-101974 (HeSiO2), for which the Magpie features could not be calculated. We then removed all formulas with more than 50 atoms and obtained a non-duplicate composition dataset with 86,741 samples. We then use different similarity (distance) thresholds to generate non-redundant datasets. For the Mendeleev similarity, we use distance thresholds of 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 to generate seven non-redundant datasets, whose sizes range from 86,740 to 3,177. Similarly, we generate eight Matscholar non-redundant datasets, whose sizes range from 50.82% to 2.33% of the total. We also applied the MD-HIT-structure algorithm to all the 125,619 CIF structures and used different thresholds to generate seven XRD non-redundant datasets and eight OFM non-redundant datasets.
After removal of redundancy based on varying degree of sample identity using MD-HIT algorithms, the details of all non-redundant datasets are shown in Table 2.
To visually understand the effect of redundancy removal, Figure <ref> shows t-SNE maps of the material distributions of the full dataset and two non-redundant datasets. For each dataset, we calculate the Magpie composition features for all its samples and then use the t-SNE dimension reduction algorithm to map the features to a two-dimensional space. Figure 2(a) shows the distribution of the whole dataset, which is densely filled with crowded samples, indicating high redundancy. Figure 2(b) shows the less redundant Matscholar-nr dataset generated with a threshold of 0.1; it contains only 50.82% of the samples while still covering the whole map. Figure 2(c) shows the Mendeleev-nr non-redundant dataset with only 4,930 samples, i.e., only 5.68% of the whole dataset, while still covering the whole map with much lower redundancy. The non-redundant datasets thus allow us to test the true generalization capability of models trained and tested on them.
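A sketch of how such maps can be produced (the formulas and t-SNE hyperparameters below are placeholders) uses matminer's Magpie featurizer followed by scikit-learn's t-SNE:

import numpy as np
from pymatgen.core import Composition
from matminer.featurizers.composition import ElementProperty
from sklearn.manifold import TSNE

featurizer = ElementProperty.from_preset("magpie")
formulas = ["SrTiO3", "BaTiO3", "NaCl", "GaAs"]   # placeholder formulas
X = np.array([featurizer.featurize(Composition(f)) for f in formulas])
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(X)  # (n, 2) map coordinates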
§.§ Composition based material property prediction with redundancy control
To examine the material properties prediction performance of ML models using datasets with Mendeleev distance and Matscholar distance based redundancy control, we conducted a series of experiments to explore how the degree of redundancy affects the ML performance for formation energy and band gap prediction. The non-redundant datasets derived from the whole MP composition dataset with 86,740 samples using different thresholds were divided into training, validation, and testing sets with a ratio of 8:1:1, respectively. Figure <ref> and <ref> show a comparison of the performances of Roost and CrabNet for formation energy and band gap prediction on datasets of different sizes, filtered by Mendeleev distance thresholds of 0, 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 and Matscholar distance thresholds of 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35 and 0.4.
Figure <ref>(a) shows the prediction performances (MAE and R^2) of Roost and CrabNet for formation energy prediction evaluated over the whole dataset and six non-redundant datasets. The performance of both models exhibits a deteriorating trend with increasing thresholds, corresponding to a lower degree of data redundancy, as evidenced by the diminishing R^2 and increasing MAE scores. For band gap prediction (Figure <ref>(b)), the R^2 scores of both models decrease gradually with increasing threshold. While the MAE scores exhibit a general uptrend, they do not increase monotonically with the threshold; instead, they exhibit abrupt jumps at certain points. This could be due to outliers in the band-gap datasets, which also reflects the greater challenge of band gap prediction.
Figure <ref> shows the ML performances over the matscholar-controlled non-redundant datasets. In Figure <ref> (a), we found that the correlations between prediction performances of both Roost and CrabNet and thresholds (or data redundancy) are much higher than those shown in Figure <ref>(a), indicating that the matscholar distance tends to generate more evenly distributed non-redundant datasets compared to Mendeleev distance. However, this consistent trends of MAE and R^2 do not hold in the bandgap prediction performance shown in Figure <ref>(b), in which the R^2 curves are similar to those found in Figure <ref>(b) while the band gap prediction performances have large variation across different thresholds. We have checked this phenomenon by running multiple experiments for each threshold and got similar results. One possible reason is that a large percentage of bandgap samples have zero values. Overall, we found that removing redundancy of the datasets allows us to obtain more objective performances of ML models.
Through experiments, we observe that without reducing redundancy, a significant portion of test samples are concentrated in crowded areas with low prediction errors. This occurs because the model may overly rely on the information from these redundant samples during the learning process, while disregarding other more diverse data features. Excessive sample redundancy can potentially lead to deceptive phenomena on the test set.
§.§ Structure based material property prediction with redundancy control
To investigate redundancy control for structure-based material datasets, we downloaded the whole Materials Project database of 123,108 crystal structures along with their formation energies per atom and band gaps. Then we use the XRD and OFM features of the crystal structures to define the similarity between pairs of structures, which is used to control the structure redundancy by thresholding the minimum XRD/OFM distance between any pair of samples. For the XRD based non-redundant datasets, we used thresholds of 0.5, 0.6, 0.8, and 0.9. We then evaluated the material property prediction performances of two state-of-the-art graph neural network algorithms, DeeperGATGNN and ALIGNN. The results are shown in Figure <ref> (a) for formation energy prediction and Figure <ref> (b) for band gap prediction.
First, we found that the XRD distance provides good control of data redundancy, as the MAEs of both algorithms gradually increase with increasing XRD thresholds, corresponding to lower dataset redundancy (Figure <ref> (a)). Simultaneously, the R^2 scores decrease as the thresholds go up. For the band gap prediction results in Figure <ref> (b), the degree of dataset redundancy also affects the performance of both algorithms, though with a more complex effect than for formation energy prediction. First, the R^2 scores of both algorithms drop with increasing thresholds. However, while the MAEs of DeeperGATGNN go up overall with increasing thresholds, the MAEs of ALIGNN over the non-redundant datasets with thresholds 0.8 and 0.9 are actually lower than the result over the dataset with a threshold of 0.6, even though the R^2 scores are lower. This discrepancy indicates that the band gap prediction problem has higher nonlinearity and that outlier band gap values may also play a role here. This phenomenon is also observed in the composition based results in Figure <ref> and Figure <ref>.
We further evaluated how OFM-controlled data redundancy affects the algorithms' performance. Figure <ref>(a) and (b) show how the performances in terms of MAE and R^2 change with decreasing redundancy (or increasing thresholds). First, we found that both algorithms show high consistency in formation energy prediction (Figure <ref>(a)): for both algorithms, the R^2 scores decrease in general with increasing thresholds while the MAE scores increase. This indicates that the OFM distance metric can be used as a good redundancy control method for crystal structure datasets. However, for band gap prediction, Figure <ref>(b) shows a surprising result: the R^2 scores go down with increasing threshold as expected for both algorithms, but the MAE scores also go down, which is unexpected since lower redundancy should pose a greater challenge for property prediction. To investigate the issue, we counted the percentages of near-zero band gap (<0.01 eV) samples in the test sets of all five datasets with thresholds 0, 0.15, 0.2, 0.45, and 0.7, and found that while the whole redundant dataset contains only 48.64% near-zero band gap samples, our MD-HIT algorithm accidentally tends to pick a higher percentage of near-zero band gap samples, with 64.09%, 67.81%, 84.52%, and 92.43% for thresholds 0.15, 0.2, 0.45, and 0.7 respectively, which makes the prediction much easier and explains why the MAEs drop. To further illustrate this data bias, we plotted the scatter plots of the band gaps predicted by DeeperGATGNN over the whole dataset and two non-redundant datasets. We can clearly see the dominance (92.43%) of near-zero samples in the non-redundant dataset with threshold 0.7, which makes the prediction much easier compared to the whole dataset. This data bias may be reduced by choosing a different seed structure rather than the SrTiO_3 used in this experiment. It also shows the importance of watching for data bias, which can easily lead to over-estimated ML model performance in material property prediction.
§ CONCLUSION
Large material databases such as the Materials Project usually contain a high degree of redundancy, which leads to biased ML models and over-estimated performance evaluations due to the redundancy between randomly selected test samples and the remaining training samples. The DFT-level accuracy claimed in the literature, averaged over all data samples, does not match the common needs of materials scientists, who usually want to discover new materials that differ from the known training samples; this makes it important to evaluate and report extrapolation rather than interpolation performance for material property prediction.
Here we propose and develop two material dataset redundancy reduction algorithms based on a greedy strategy inspired by the CD-HIT algorithm from bioinformatics. We use two composition distance metrics and two structure distance metrics, with thresholds on them, to control sample redundancy of our composition and structure datasets. Our benchmark results for two composition-based and two structure-based material property prediction algorithms over two material properties (formation energy and band gap) show that the prediction performance of current ML models tends to degrade when redundant samples are removed, leading to a more realistic measure of the prediction performance of current ML material property models. The availability of our easy-to-use open-source code for MD-HIT-composition and MD-HIT-structure makes it easy for researchers to conduct objective evaluations and report realistic performance of their ML models for material property prediction. It should also be noted that the current multi-threaded implementation of our MD-HIT algorithms is still slow, and further improvements are highly desirable.
§ DATA AND CODE AVAILABILITY
The source code and the non-redundant datasets can be freely accessed at https://github.com/usccolumbia/MD-HIT
§ CONTRIBUTION
Conceptualization, J.H.; methodology, J.H., Q.L., S.L., E.S., Y.Z.; software, J.H., S.S., Y.S., S.O.; resources, J.H.; writing–original draft preparation, J.H., S.S., Y.S., S.O., S.L., E.S., Y.Z.; writing–review and editing, J.H.; visualization, J.H. and S.S.; supervision, J.H.; funding acquisition, J.H.
§ ACKNOWLEDGEMENT
Qin Li would like to thank for the computing support of the State Key Laboratory of Public Big Data, Guizhou University.
|
http://arxiv.org/abs/2307.04487v1 | 20230710112041 | The abundance and excitation of molecular anions in interstellar clouds | [
"M. Agundez",
"N. Marcelino",
"B. Tercero",
"I. Jimenez-Serra",
"J. Cernicharo"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Molecular anions in the ISM
Agúndez et al.
Instituto de Física Fundamental, CSIC, Calle Serrano 123, E-28006 Madrid, Spain
[email protected]
Observatorio Astronómico Nacional, IGN, Calle Alfonso XII 3, E-28014 Madrid, Spain
Observatorio de Yebes, IGN, Cerro de la Palera s/n, E-19141 Yebes, Guadalajara, Spain
Centro de Astrobiología (CSIC/INTA), Ctra. de Torrejón a Ajalvir km 4, 28806, Torrejón de Ardoz, Spain
We report new observations of molecular anions with the Yebes 40m and IRAM 30m telescopes toward the cold dense clouds TMC-1 CP, Lupus-1A, L1527, L483, L1495B, and L1544. We detected for the first time C_3N^- and C_5N^- in Lupus-1A and C_4H^- and C_6H^- in L483. In addition, we report new lines of C_6H^- toward the six targeted sources, of C_4H^- toward TMC-1 CP, Lupus-1A, and L1527, and of C_8H^- and C_3N^- in TMC-1 CP. Excitation calculations using recently computed collision rate coefficients indicate that the lines of anions accessible to radiotelescopes run from subthermally excited to thermalized as the size of the anion increases, with the degree of departure from thermalization depending on the H_2 volume density and the line frequency. We noticed that the collision rate coefficients available for the radical C_6H cannot explain various observational facts, which calls for a revision of the collision data for this species. The observations presented here, together with observational data from the literature, are used to model the excitation of interstellar anions and to constrain their abundances. In general, the anion-to-neutral ratios derived here agree within 50 % (a factor of two at most) with literature values, when available, except for the C_4H^-/C_4H ratio, which shows higher differences due to a revision of the dipole moment of C_4H. From the set of anion-to-neutral abundance ratios derived, two conclusions can be drawn. First, the C_6H^-/C_6H ratio shows a tentative trend in which it increases with increasing H_2 density, as expected on theoretical grounds. And second, it is incontestable that the larger the molecular size, the higher the anion-to-neutral ratio, which supports a formation mechanism based on radiative electron attachment. Nonetheless, calculated rate coefficients for electron attachment to the medium-sized species C_4H and C_3N are probably too high and too low, respectively, by more than one order of magnitude.
The abundance and excitation of molecular anions
in interstellar cloudsBased on observations carried out with the Yebes 40m telescope (projects 19A003, 20A014, 20A016, 20B010, 20D023, 21A006, 21A011, 21D005, 22B023, and 23A024) and the IRAM 30m telescope. The 40m radio telescope at Yebes Observatory is operated by the Spanish Geographic Institute (IGN; Ministerio de Transportes, Movilidad y Agenda Urbana). IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain).
M. Agúndez1, N. Marcelino2,3, B. Tercero2,3, I. Jiménez-Serra4, J. Cernicharo1
Received; accepted
§ INTRODUCTION
The discovery of negatively charged molecular ions in space is relatively recent <cit.>. To date, the inventory of molecular anions detected in interstellar and circumstellar clouds consists of four hydrocarbon anions, C_4H^- <cit.>, C_6H^- <cit.>, C_8H^- <cit.>, and C_10H^- <cit.>, and four nitrile anions, CN^- <cit.>, C_3N^- <cit.>, C_5N^- <cit.>, and C_7N^- <cit.>. The astronomical detection of most of these species has been possible thanks to the laboratory characterization of their rotational spectra <cit.>. However, the astronomical detection of C_5N^-, C_7N^-, and C_10H^- is based on high-level ab initio calculations and astrochemical arguments <cit.>. In fact, in the case of C_10H^- it is not yet clear whether the identified species is C_10H^- or C_9N^- <cit.>.
At present there is only one astronomical source where all eight molecular anions have been observed, the carbon-rich circumstellar envelope IRC +10216 <cit.>, while the first negative ion discovered, C_6H^- <cit.>, continues to be the most widely observed in astronomical sources <cit.>.
Observations indicate that along each of the series C_2n+2H^- and C_2n-1N^- (with n = 1, 2, 3, 4) the anion-to-neutral abundance ratio increases with increasing molecular size <cit.>. This is expected according to the formation mechanism originally proposed by <cit.>, which involves the radiative electron attachment to the neutral counterpart of the anion <cit.>. However, the efficiency of this mechanism in interstellar space has been disputed <cit.>, and alternative formation mechanisms have been proposed <cit.>. Currently there is not yet consensus on the formation mechanism of molecular anions in space (see discussion in ). Moreover, detections of negative ions other than C_6H^- in interstellar clouds are scarce, and thus our view of the abundance of the different anions in interstellar space is statistically very limited.
Apart from the anion-to-anion behavior, it is also interesting to examine the source-to-source behavior, that is, how the abundance of anions varies from one source to another. Based on C_6H^- detections, the C_6H^-/C_6H abundance ratio seems to increase with increasing H_2 volume density <cit.>, which is expected from chemical considerations (e.g., ; see also Sect. <ref>). However, most anion detections in interstellar clouds have been based on one or two lines and their abundances have been estimated assuming that their rotational levels are populated according to local thermodynamic equilibrium (LTE), which may not be a good assumption given the large dipole moments, and thus high critical densities, of anions. Recently, rate coefficients for inelastic collisions with H_2 or He have been calculated for C_2H^- <cit.>, C_4H^- <cit.>, C_6H^- <cit.>, CN^- <cit.>, C_3N^- <cit.>, and C_5N^- <cit.>, which makes it possible to study the excitation of anions in the interstellar medium.
Here we report new detections of anions in interstellar sources. Concretely, we detected C_3N^- and C_5N^- in Lupus-1A and C_6H^- and C_4H^- in L483. We also present the detection of new lines of C_4H^-, C_6H^-, C_8H^-, C_3N^-, and C_5N^- in interstellar clouds where these anions have been already observed. We use the large observational dataset from this study, together with that available from the literature, to review the observational status of anions in interstellar clouds and to carry out a comprehensive analysis of the abundance and excitation of anions in the interstellar medium.
§ OBSERVATIONS
§.§ Yebes 40m and IRAM 30m observations from this study
The observations of cold dark clouds presented in this study were carried out with the Yebes 40m and IRAM 30m telescopes. We targeted the starless core TMC-1 at the cyanopolyyne peak position (hereafter TMC-1 CP)[TMC-1 CP: α_J2000=4^ h 41^ m 41.9^ s and δ_J2000=+25^∘ 41' 27.0”], the starless core Lupus-1A[Lupus-1A: α_J2000=15^ h 42^ m 52.4^ s and δ_J2000=-34^∘ 07' 53.5”], the prestellar cores L1495B[L1495B: α_J2000=4^ h 15^ m 41.8^ s and δ_J2000=+28^∘ 47' 46.0”] and L1544[L1544: α_J2000=5^ h 4^ m 18.0^ s and δ_J2000=+25^∘ 11' 10.0”], and the dense cores L1527[L1527: α_J2000=4^ h 39^ m 53.9^ s and δ_J2000=+26^∘ 03' 11.0”] and L483[L483: α_J2000=18^ h 17^ m 29.8^ s and δ_J2000=-4^∘ 39' 38.3”], which host a Class 0 protostar. All observations were done using the frequency switching technique to maximize the on-source telescope time and to improve the sensitivity of the spectra.
The Yebes 40m observations consisted in a full scan of the Q band (31-50 GHz) acquired in a single spectral setup with a 7 mm receiver, which was connected to a fast Fourier transform spectrometer that provides a spectral resolution of 38 kHz <cit.>. The data of TMC-1 CP are part of the on-going QUIJOTE line survey <cit.>. The spectra used here were obtained between November 2019 and November 2022 and contain a total of 758 h of on-source telescope time in each polarization (twice this value after averaging both polarizations). Two frequency throws of 8 and 10 MHz were used. The sensitivity ranges from 0.13 to 0.4 mK in antenna temperature. The data of L1544 were taken between October and December 2020 toward the position of the methanol peak of this core, where complex organic molecules have been detected <cit.>, and are part of a high-sensitivity Q-band survey (31 h on-source; Jiménez-Serra et al. in prep.). The data for the other sources were obtained from July 2020 to February 2023 for L483 (the total on-source telescope time is 103 h), from May to November 2021 for L1527 (40 h on-source), from July 2021 to January 2023 for Lupus-1A (120 h on-source), and from September to November 2021 for L1495B (45 h on-source). Different frequency throws were adopted depending on the observing period, which resulted from tests done at the Yebes 40m telescope to find the optimal frequency throw. We used frequency throws of 10 MHz and 10.52 MHz for L483, 10 MHz for L1544, 8 MHz for L1527, and 10.52 MHz for Lupus-1A and L1495B. The antenna temperature noise levels, after averaging horizontal and vertical polarizations, are in the range 0.4-1.0 mK for L483, 1.3-1.8 mK for L1544, 0.7-2.7 mK for L1527, 0.7-2.8 mK for Lupus-1A, and 0.8-2.6 mK for L1495B.
The observations carried out with the IRAM 30m telescope used the 3 mm EMIR receiver connected to a fast Fourier transform spectrometer that provides a spectral resolution of 49 kHz. Different spectral regions within the 3 mm band (72-116 GHz) were covered depending on the source. The data of TMC-1 CP consist of a 3 mm line survey <cit.> and spectra observed in 2021 <cit.>. The data of L483 consists of a line survey in the 80-116 GHz region (see ), together with data in the 72-80 GHz region, which are described in <cit.>. Data of Lupus-1A, L1495B, L1521F, L1251A, L1512, L1172, and L1389 were observed from September to November 2014 during a previous search for molecular anions at mm wavelengths (see ). Additional data of Lupus-1A were gathered during 2021 and 2022 during a project aimed to observe H_2NC <cit.>. In the case of L1527, the IRAM 30m data used were observed in July and August 2007 with the old ABCD receivers connected to an autocorrelator that provided spectral resolutions of 40 or 80 kHz <cit.>.
The half power beam width (HPBW) of the Yebes 40m telescope is in the range 35-57 ” in the Q band, while that of the IRAM 30m telescope ranges between 21 ” and 34 ” in the 3 mm band. The beam size can be fitted as a function of frequency as HPBW(”) = 1763/ν(GHz) for the Yebes 40m telescope and as HPBW(”) = 2460/ν(GHz) for the IRAM 30m telescope. Therefore, the beam size of the IRAM 30m telescope at 72 GHz is similar to that of the Yebes 40m at 50 GHz. The intensity scale in both the Yebes 40m and IRAM 30m telescopes is antenna temperature, T_A^*, for which we estimate a calibration error of 10 %. To convert antenna temperature into main beam brightness temperature see foot of Table <ref>. All data were analyzed using the program CLASS of the GILDAS software[https://www.iram.fr/IRAMFR/GILDAS/].
§.§ Observational dataset of anions in dark clouds
In Table <ref> we compile the line parameters of all the lines of negative molecular ions detected toward cold dark clouds, including lines from this study and from the literature. The line parameters of C_7N^- observed toward TMC-1 CP are given in <cit.> and are not repeated here. In the case of C_10H^- in TMC-1 CP we do not include line parameters here because the detection by <cit.> is not based on individual lines but on spectral stack of many lines. The lines of molecular anions presented in this study are shown in Fig. <ref> for C_6H^-, Fig. <ref> for C_4H^-, and Fig. <ref> for the remaining anions, i.e., C_8H^-, C_3N^-, and C_5N^-. Since we are interested in the determination of anion-to-neutral abundance ratios, we also need the lines of the corresponding neutral counterpart of each molecular anion, which are the radicals C_4H, C_6H, C_8H, C_3N, and C_5N. The velocity-integrated intensities of the lines of these species are given in Table <ref>.
According to the literature, the most prevalent molecular anion, C_6H^-, has been detected in 11 cold dark clouds: TMC-1 CP <cit.>, L1527 and Lupus-1A <cit.>, L1544 and L1521F <cit.>, and L1495B, L1251A, L1512, L1172, L1389, and TMC-1 C <cit.>. All these detections were based on two individual or stacked lines lying in the frequency range 11-31 GHz (see Table <ref>). Here we present additional lines of C_6H^- in the Q band for TMC-1 CP, Lupus-1A, L1527, L1495B, and L1544, together with the detection of C_6H^- in a new source, L483, through six lines lying in the Q band (see Fig. <ref>).
Molecular anions other than C_6H^- have turned out to be more difficult to detect, as they have only been seen in a few sources. For example, C_4H^- has only been detected in three dark clouds, L1527 <cit.>, Lupus-1A <cit.>, and TMC-1 CP <cit.>. These detections rely on one or two lines (see Table <ref>). Here we report the detection of two additional lines of C_4H^- in the Q band toward these three sources, together with the detection of C_4H^- in one new source, L483 (see Fig. <ref>).
The hydrocarbon anion C_8H^- has been observed in two interstellar sources. <cit.> reported the detection of four lines in the 12-19 GHz frequency range toward TMC-1 CP, while <cit.> reported the detection of this anion in Lupus-1A through two stacked lines at 18.7 and 21.0 GHz (see Table <ref>). Thanks to our Yebes 40m data, we present new lines of C_8H^- in the Q band toward TMC-1 CP (see Fig. <ref>).
Finally, the nitrile anions C_3N^- and C_5N^- have proven to be quite elusive, as they have only been seen in one cold dark cloud, TMC-1 CP <cit.>. Here we present the same lines of C_3N^- and C_5N^- reported in <cit.> in the Q band, but with improved signal-to-noise ratios, plus two additional lines of C_3N^- in the 3 mm band. We also present the detection of C_3N^- and C_5N^- in one additional source, Lupus-1A (see Fig. <ref>).
§ PHYSICAL PARAMETERS OF THE SOURCES
The interstellar clouds where molecular anions have been detected are 12 in total and comprise cold dense cores in different evolutionary stages, such as starless, prestellar, and protostellar (see Table <ref>). The classification as protostellar cores is evident in the cases of L1527 and L483 as the targeted positions are those of the infrared sources IRAS 04368+2557 and IRAS 18148-0440, respectively <cit.>. We also classified L1251A, L1172, and L1389 as protostellar sources based on the proximity of an infrared source (L1251A IRS3, CB17 MMS, and IRAS 21017+6742, respectively) to the positions targeted by <cit.>. The differentiation between starless and prestellar core is in some cases more ambiguous. In those cases we followed the criterion based on the N_2D^+/N_2H^+ column density ratio by <cit.>. In any case, for our purposes it is not very important whether a given core is starless or prestellar.
To study the abundance and excitation of molecular anions in these 12 interstellar sources through non-LTE calculations we need to know which are the physical parameters of the clouds, mainly the gas kinetic temperature and the H_2 volume density, but also the emission size of anions and the linewidth. The adopted parameters are summarized in Table <ref>.
Given that C_6H^- has not been mapped in any interstellar cloud to date, it is not known whether the emission of molecular anions in each of the 12 sources is extended compared to the telescope beam sizes, which are in the range 21-67 ” for the Yebes 40m, IRAM 30m, and GBT telescopes at the frequencies targeted for the observations of anions. Therefore one has to rely on maps of related species. In the case of TMC-1 CP we assume that anions are distributed in the sky as a circle with a diameter of 80 ” based on the emission distribution of C_6H mapped by <cit.>. Recent maps carried out with the Yebes 40m telescope <cit.> support the previous results of <cit.>. For the remaining 11 sources, the emission distribution of C_6H is not known and thus we assume that the emission of anions is extended with respect to the telescope beam. This assumption is supported by the extended nature of HC_3N emission in the cases of L1495B, L1251A, L1512, L1172, L1389, and TMC-1 C, according to the maps presented by <cit.>, and of multiple molecular species, including C_4H, in L1544, according to the maps reported by <cit.>.
The linewidth adopted for each source (see Table <ref>) was calculated as the arithmetic mean of the values derived for the lines of C_6H^- in the Q band for TMC-1 CP, Lupus-1A, L1527, L1495B, and L1544. In the case of L483 we adopted the value derived by <cit.> from the analysis of all the lines in the 3 mm band. For L1521F, L1251A, L1512, L1172, and L1389 the adopted linewidths come from IRAM 30m observations of CH_3CCH in the 3 mm band (see Sect. <ref>). Finally, for TMC-1 C we adopted as linewidth that derived for HC_3N by <cit.>.
The gas kinetic temperature was determined for some of the sources from the J = 5-4 and J = 6-5 rotational transitions of CH_3CCH, which lie around 85.4 and 102.5 GHz, respectively. We have IRAM 30m data of these lines for TMC-1 CP, Lupus-1A, L483, L1495B, and L1521F, while for L1527 we used the data obtained with the Nobeyama 45m telescope by <cit.>. Typically, the K = 0, 1, and 2 components are detected, which allow us to use the line intensity ratio between the K = 1 and K = 2 components, belonging to the E symmetry species, to derive the gas kinetic temperature. Since transitions with Δ K 0 are radiatively forbidden, the relative populations of the K = 1 and K = 2 levels are controlled by collisions with H_2 and thus are thermalized at the kinetic temperature of H_2. We do not use the K = 0 component because it belongs to a different symmetry species, A, and interconversion between A and E species is expected to be slow in cold dense clouds and thus their relative populations may not necessarily reflect the gas kinetic temperature.
For TMC-1 CP we derive kinetic temperatures of 8.8 ± 0.6 K and 9.0 ± 0.6 K from the J = 5-4 and J = 6-5 lines of CH_3CCH, respectively. Similarly, using the J = 8-7 through J = 12-11 lines of CH_3C_4H, which lie in the Q band, we derive temperatures of 9.1 ± 0.7 K, 8.7 ± 0.6 K, 9.0 ± 0.6 K, 8.1 ± 0.7 K, and 9.1 ± 0.8 K, respectively. We thus adopt a gas kinetic temperature of 9 K, which is slightly lower than values derived in previous studies, 11.0 ± 1.0 K and 10.1 ± 0.9 K at two positions close to the cyanopolyyne peak using NH_3 <cit.> and 9.9 ± 1.5 K from CH_2CCH <cit.>. In Lupus-1A we derive temperatures of 11.4 ± 1.7 K and 10.2 ± 1.1 K from the J = 5-4 and J = 6-5 lines of CH_3CCH, respectively. We thus adopt a gas kinetic temperature of 11 K, which is somewhat below the value of 14 ± 2 K derived in <cit.> using the K = 0, 1, and 2 components of the J = 5-4 transition of CH_3CCH. In L1527 we derive 13.6 ± 2.5 K and 15.1 ± 2.4 K from the line parameters of CH_3CCH J = 5-4 and J = 6-5 reported by <cit.>. We thus adopt a kinetic temperature of 14 K, which agrees perfectly with the value of 13.9 K derived by <cit.> using CH_3CCH as well. The gas kinetic temperature in L483 has been estimated to be 10 K by <cit.> using NH_3, while <cit.> derive values of 10 K and 15 ± 2 K using either ^13CO or CH_3CCH. A new analysis of the CH_3CCH data of <cit.> in which the weak K = 3 components are neglected and only the K = 1 and K = 2 components are used results in kinetic temperatures of 11.5 ± 1.1 K and 12.6 ± 1.5 K, depending on whether the J = 5-4 or J = 6-5 transition is used. We thus adopt a kinetic temperature of 12 K for L483. For L1495B we derive 9.1 ± 0.9 K and 9.2 ± 0.7 K from CH_3CCH J = 5-4 and J = 6-5, and we thus adopt a kinetic temperature of 9 K. In L1521F we also adopt a gas kinetic temperature of 9 K since the derived temperatures from CH_3CCH J = 5-4 and J = 6-5 are 9.0 ± 0.7 K and 8.9 ± 0.9 K. The value agrees well with the temperature of 9.1 ± 1.0 K derived by <cit.> using NH_3. For the remaining cores, the gas kinetic temperatures were taken from the literature, as summarized in Table <ref>.
To estimate the volume density of H_2 we used the ^13C isotopologues of HC_3N when these data were available. We have Yebes 40m data of the J = 4-3 and J = 5-4 lines of H^13CCCN, HC^13CCN, and HCC^13CN for TMC-1 CP, Lupus-1A, L1527, and L483. Data for one or more lines of these three isotopologues in the 3 mm band are also available from the IRAM 30m telescope (see Sect. <ref>) or from the Nobeyama 45m telescope (for L1527; see ). Using the ^13C isotopologues of HC_3N turned out to constrain the H_2 density much better than using the main isotopologue because one gets rid of optical depth effects. We carried out non-LTE calculations under the Large Velocity Gradient (LVG) formalism adopting the gas kinetic temperature and linewidth given in Table <ref> and varying the column density of the ^13C isotopologue of HC_3N and the H_2 volume density. As collision rate coefficients we used those calculated by <cit.> for HC_3N with ortho and para H_2, where we adopted a low ortho-to-para ratio of H_2 of 10^-3, which is theoretically expected for cold dark clouds (e.g., ). The exact value of the ortho-to-para ratio of H_2 is not very important as long as the para form is well in excess of the ortho form, so that collisions with para H_2 dominate. The best estimates for the column density of the ^13C isotopologue of HC_3N and the volume density of H_2 are found by minimizing χ^2, which is defined as
χ^2 = ∑_i=1^N_l[ (I_calc - I_obs)/σ]^2,
where the sum extends over the N_l lines available, I_calc and I_obs are the calculated and observed velocity-integrated brightness temperatures, and σ are the uncertainties in I_obs, which include the error given by the Gaussian fit and the calibration error of 10 %. To evaluate the goodness of the fit, we use the reduced χ^2, which is defined as χ^2_red = χ^2_min/(N_l-p), where χ^2_min is the minimum value of χ^2 and p is the number of free parameters. Typically, a value of χ^2_red ≲ 1 indicates a good quality of the fit. In this case we have p = 2 because there are two free parameters, the column density of the ^13C isotopologue of HC_3N and the H_2 volume density. Errors in these two parameters are given as 1 σ, where for p = 2, the 1 σ level (68 % confidence) corresponds to χ^2_min+2.3. The same statistical analysis is adopted in Sect. <ref> when studying molecular anions and their neutral counterparts through the LVG method. In some cases in which the number of lines is small or the H_2 density is poorly constrained, the H_2 volume density is kept fixed. In those cases p = 1 and the 1 σ error (68 % confidence) in the column density is given by χ^2_min+1.0.
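As an illustration of this procedure, the following Python sketch performs the χ^2 minimization on a grid of column density and H_2 density. The LVG radiative-transfer step is replaced by a toy stand-in model, so the function names and numbers are purely illustrative.

import numpy as np

def chi2(I_calc, I_obs, sigma):
    # chi^2 as defined above, summed over the observed lines
    return float(np.sum(((I_calc - I_obs) / sigma) ** 2))

def grid_search(I_obs, sigma, N_grid, n_grid, model):
    # model(N, n) must return the modelled integrated intensities of the lines;
    # in practice this would be a full LVG calculation.
    chi2_map = np.array([[chi2(model(N, n), I_obs, sigma) for n in n_grid]
                         for N in N_grid])
    i, j = np.unravel_index(np.argmin(chi2_map), chi2_map.shape)
    one_sigma = chi2_map <= chi2_map.min() + 2.3  # 68% region for p = 2
    return N_grid[i], n_grid[j], one_sigma

# Toy stand-in: intensities grow linearly with N and saturate with n(H2)
def toy_model(N, n):
    return np.array([1.0, 0.7, 0.4]) * (N / 1e12) * n / (n + 3e4)

I_obs = toy_model(2e12, 5e4)
sigma = 0.1 * I_obs
N_best, n_best, region = grid_search(I_obs, sigma,
                                     np.logspace(11.5, 12.8, 60),
                                     np.logspace(3.5, 5.5, 60), toy_model)
print(f"best-fit N = {N_best:.2e} cm-2, n(H2) = {n_best:.2e} cm-3")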
In Fig. <ref> we show the results for TMC-1 CP. In this starless core the H_2 volume density is well constrained by the four available lines of the three ^13C isotopologues of HC_3N to a narrow range of (0.9-1.1) × 10^4 cm^-3 with very low values of χ^2_red. We adopt as H_2 density in TMC-1 CP the arithmetic mean of the values derived for the three isotopologues, i.e., 1.0 × 10^4 cm^-3 (see Table <ref>). Similar calculations allow us to derive H_2 volume densities of 1.8 × 10^4 cm^-3 for Lupus-1A, 5.6 × 10^4 cm^-3 for L483, and a lower limit of 10^5 cm^-3 for L1527 (see Table <ref>). The value for L483 is of the same order as those derived in the literature, 3.4 × 10^4 cm^-3 from the model of <cit.> and 3 × 10^4 cm^-3, from either NH_3 <cit.> or CH_3OH <cit.>. For L1495B we could only retrieve data for one of the ^13C isotopologues of HC_3N, HCC^13CN, from which we derive an H_2 density of 1.6 × 10^4 cm^-3 (see Table <ref>). In the case of L1521F, ^13C isotopologues of HC_3N were not available and thus we used lines of HCCNC, adopting the collision rate coefficients calculated by <cit.>, to derive a rough estimate of the H_2 volume density of 1 × 10^4 cm^-3 (see Table <ref>). Higher H_2 densities, in the range (1-5) × 10^5 cm^-3, are derived for L1521F from N_2H^+ and N_2D^+ <cit.>, probably because these molecules trace the innermost dense regions depleted in CO.
For the remaining sources we adopted H_2 volume densities from the literature (see Table <ref>). For L1544 we adopted a value of 2 × 10^4 cm^-3 from the analysis of SO and SO_2 lines by <cit.>. This H_2 density is in agreement with the range of values, (1.5-4.0) × 10^4 cm^-3, found by <cit.> in their excitation analysis of HCCNC and HNC_3. Note that H_2 volume densities toward the dust peak are larger than 10^6 cm^-3. However, as shown by <cit.>, the emission of C_4H probes the outer shells and thus a density of a few 10^4 cm^-3 is appropriate for our calculations toward the CH_3OH peak. In the cases of L1251A, L1512, L1172, L1389, and TMC-1 C, we adopted the H_2 densities from the analysis of HC_3N lines by <cit.>. The reliability of the H_2 volume densities derived by these authors is supported by the fact that the densities they derive for TMC-1 CP and L1495B, 1.0 × 10^4 cm^-3 and 1.1 × 10^4 cm^-3, respectively, are close to the values determined in this study from ^13C isotopologues of HC_3N (see Table <ref>).
In spite of the different evolutionary status of the 12 anion-containing clouds, the gas kinetic temperatures and H_2 volume densities at the scales probed by the Yebes 40m, IRAM 30m, and GBT telescopes are not that different. Gas temperatures are restricted to the very narrow range 9-14 K, while H_2 densities are in the range (1.0-7.5) × 10^4 cm^-3, with the exception of L1527, which has an estimated density in excess of 10^5 cm^-3 (see Table <ref>).
§ EXCITATION OF ANIONS: GENERAL CONSIDERATIONS
One may expect that given the large dipole moments of molecular anions, as high as 10.4 D in the case of C_8H^- <cit.>, the rotational levels should be populated out of thermodynamic equilibrium in cold dark clouds. This is not always the case, as will be shown here. To get insight into the excitation of negative molecular ions in interstellar clouds we ran non-LTE calculations under the LVG formalism adopting typical parameters of cold dark clouds, i.e., a gas kinetic temperature of 10 K, a column density of 10^11 cm^-2 (of the order of the values typically derived for anions in cold dark clouds; see references in Sect. <ref>), and a linewidth of 0.5 km s^-1 (see Table <ref>), and we varied the volume density of H_2 between 10^3 and 10^6 cm^-3. The sets of rate coefficients for inelastic collisions with H_2 adopted are summarized in Table <ref>. In those cases in which only collisions with He are available we scaled the rate coefficients by multiplying them by the square root of the ratio of the reduced masses of the H_2 and He colliding systems. When inelastic collisions for ortho and para H_2 are available, we adopted an ortho-to-para ratio of H_2 of 10^-3.
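The reduced-mass scaling mentioned above is straightforward to evaluate. The short Python sketch below (integer atomic masses assumed) reproduces the factors of about 1.39 and 1.40 quoted later in this section for C_3N and C_6H.

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def he_to_h2_scaling(mol_mass, m_he=4.0, m_h2=2.0):
    # sqrt(mu_He / mu_H2): factor applied to He rate coefficients to
    # approximate collisions with H2
    return (reduced_mass(mol_mass, m_he) / reduced_mass(mol_mass, m_h2)) ** 0.5

for name, mass in (("C3N", 50.0), ("C6H", 73.0)):
    print(f"{name}: x{he_to_h2_scaling(mass):.2f}")
# prints C3N: x1.39 and C6H: x1.40, the factors quoted below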
In Fig. <ref> we show the calculated excitation temperatures (T_ex) of lines of molecular anions as a function of the quantum number J of the upper level and the H_2 volume density. The different panels correspond to different anions and show the regimes in which lines are either thermalized (T_ex ∼ 10 K) or subthermally excited (T_ex < 10 K). To interpret these results it is useful to think in terms of the critical density, which for a given rotational level can be evaluated as the ratio of the de-excitation rates due to spontaneous emission and due to inelastic collisions (e.g., ). Collision rate coefficients for transitions with Δ J = -1 or -2, which are usually the most efficient, are of the order of 10^-10 cm^3 s^-1 at a temperature of 10 K for the anions for which calculations have been carried out (see Table <ref>). The Einstein coefficient for spontaneous emission is proportional to the square of the dipole moment and the cube of the frequency. Therefore, the critical density (and thus the degree of departure from LTE) is very different depending on the dipole moment of the anion and on the frequency of the transition. Regarding the dependence of the critical density on the dipole moment, C_2H^- and CN^- have a similar weight, and thus their low-J lines, which are the ones observable for cold clouds, have similar frequencies. However, these two anions have quite different dipole moments, 3.1 and 0.65 Debye, respectively <cit.>, which makes them show different excitation patterns. As seen in Fig. <ref>, the low-J lines of CN^- are in LTE at densities above 10^5 cm^-3 while those of C_2H^- require much higher H_2 densities to be in LTE. With respect to the dependence of the critical density on frequency, as one moves along the series of increasing weight C_2H^- → C_4H^- → C_6H^- or CN^- → C_3N^- → C_5N^- (see Fig. <ref>), the most favorable lines for detection in cold clouds (those with upper level energies around 10 K) shift to lower frequencies, which makes the Einstein coefficients, and thus the critical densities, decrease. That is, the lines of anions targeted by radiotelescopes are more likely to be thermalized for heavy anions than for light ones (see the higher degree of thermalization when moving from lighter to heavier anions in Fig. <ref>).
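The argument above can be made quantitative with a rough sketch of the critical density, n_crit = A_ul/k_coll. The Python snippet below uses the standard Einstein A expression for a J+1 → J transition of a linear rotor and the representative collision rate of 10^-10 cm^3 s^-1 mentioned above; the frequencies are approximate values adopted only for illustration, while the dipole moments are those quoted in the text.

import numpy as np

H_PLANCK = 6.62607e-27   # erg s
C_LIGHT = 2.99792458e10  # cm s-1
DEBYE = 1e-18            # esu cm

def einstein_A(nu_hz, mu_debye, J_low):
    # A coefficient (s-1) of the J+1 -> J transition of a linear rotor
    Ju = J_low + 1
    return (64 * np.pi**4 * nu_hz**3 * (mu_debye * DEBYE)**2
            / (3 * H_PLANCK * C_LIGHT**3)) * Ju / (2 * Ju + 1)

def n_crit(nu_hz, mu_debye, J_low, k_coll=1e-10):
    # critical density = A / k, with k ~ 1e-10 cm3 s-1 typical of anion-H2 collisions
    return einstein_A(nu_hz, mu_debye, J_low) / k_coll

# CN- and C2H- have similar masses but very different dipole moments
print(f"CN-  J=1-0 (~112 GHz, 0.65 D): n_crit ~ {n_crit(112.3e9, 0.65, 0):.1e} cm-3")
print(f"C2H- J=1-0 (~83 GHz, 3.1 D):   n_crit ~ {n_crit(83.3e9, 3.1, 0):.1e} cm-3")

The order-of-magnitude contrast between the two estimates (a few 10^4 versus a few 10^5 cm^-3) illustrates why the low-J lines of CN^- thermalize at much lower densities than those of C_2H^-.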
The volume densities of H_2 in cold dark clouds are typically in the range 10^4-10^5 cm^-3 (see Table <ref>). Therefore, if C_2H^- is detected in a cold dark cloud at some point in the future, the most favorable line for detection, the J = 1-0, would be most likely subthermally excited, making necessary to use the collision rate coefficients to derive a precise abundance. In the case of a potential future detection of CN^- in a cold interstellar cloud, the J = 1-0 line would be in LTE only if the H_2 density of the cloud is ≥ 10^5 cm^-3 and out of LTE for lower densities (see Fig. <ref>). The medium-sized anions C_4H^- and C_3N^- are predicted to have their Q band lines more or less close to LTE depending on whether the H_2 density is closer to 10^5 or to 10^4 cm^-3, while the lines in the 3 mm band are likely to be subthermally excited unless the H_2 density is above 10^5 cm^-3 (see Fig. <ref>). For the heavier anions C_6H^- and C_5N^-, the lines in the K band are predicted to be thermalized at the gas kinetic temperature, while those in the Q band may or may not be thermalized depending on the H_2 density (see Fig. <ref>). Comparatively, the Q band lines of C_5N^- are more easily thermalized than those of C_6H^- because C_5N^- has a smaller dipole moment than C_6H^-. We note that the results concerning C_5N^- have to be taken with caution because we used the collision rate coefficients calculated for C_6H^- in the absence of specific collision data for C_5N^- (see Table <ref>). We did similar calculations for C_8H^-, C_10H^-, and C_7N^- (not shown) using the collision rate coefficients of C_6H^-. We find that the lines in a given spectral range deviate more from thermalization as the size of the anion increases. In the K band, the lines of C_6H^- and C_5N^- are thermalized, while those of C_10H^- become subthermally excited at low densities, around 10^4 cm^-3. In the Q band the deviation from thermalization is even more marked for these large anions.
In summary, non-LTE calculations are particularly important to derive accurate abundances for anions when just one or two lines are detected and these lie in a regime of subthermal excitation, as indicated in Fig. <ref>. This becomes critical, in order of decreasing importance, for C_2H^-, CN^-, C_4H^-, C_3N^-, C_6H^-, C_8H^-, and C_5N^- (for the three latter only if observed at frequencies above 30 GHz). The drawback is that the H_2 volume density must be known with a good precision if one aims at determining the anion column density accurately with only one or two lines.
In the case of the neutral counterparts of molecular anions, collision rate coefficients have been calculated for C_6H and C_3N with He as collider <cit.>. We thus carried out LVG calculations similar to those presented before for anions. In this case we adopt a higher column density of 10^12 cm^-2, in line with typical values in cold dark clouds (see references in Sect. <ref>). The results are shown in Fig. <ref>. It is seen that in the case of C_3N, the excitation pattern is similar to that of the corresponding anion, C_3N^-, shown in Fig. <ref>. The thermalization of C_3N occurs at densities somewhat higher compared to C_3N^-, mainly because the collision rate coefficients calculated for C_3N with He <cit.> are smaller than those computed for C_3N^- with para H_2 <cit.>. We note that this conclusion may change if the collision rate coefficients of C_3N with H_2 are significantly larger than the factor of 1.39 due to the change in the reduced mass when changing He by H_2. In the case of C_6H however the excitation behavior is very different to that of C_6H^- (compare C_6H^- in Fig. <ref> with C_6H in Fig. <ref>). The rotational levels of the radical are much more subthermally excited than those of the corresponding anion, with a difference in the critical density of about a factor of 30. This is a consequence of the much smaller collision rate coefficients calculated for C_6H with He <cit.> compared to those calculated for C_6H^- with para H_2 <cit.>, a difference that is well beyond the factor of 1.40 due to the change in the reduced mass when changing He by H_2.
§ ANION ABUNDANCES
We evaluated the column densities of molecular anions and their corresponding neutral counterparts in the 12 studied sources by carrying out LVG calculations similar to those described in Sect. <ref> for the ^13C isotopologues of HC_3N. We used the collision rate coefficients given in Table <ref>. Gas kinetic temperatures and linewidths were fixed to the values given in Table <ref>, the ortho-to-para ratio of H_2, when needed, was fixed to 10^-3, and both the column density of the species under study and the H_2 volume density were varied. The best estimates for these two parameters were found by minimization of χ^2 (see Sect. <ref>). In addition, to evaluate the rotational temperature, and thus the level of departure from LTE, and to have an independent estimate of the column density, we constructed rotation diagrams.
The LVG method should provide a more accurate determination of the column density than the rotation diagram, as long as the collision rate coefficients with para H_2 and the gas kinetic temperature are accurately known. If an independent determination of the H_2 volume density is available from some density tracer (in our case the ^13C isotopologues of HC_3N are used in several sources), a good agreement between the values of n(H_2) obtained from the species under study and from the density tracer supports the reliability of the LVG analysis. We note that densities do not need to be similar if the species studied and the density tracer are distributed over different regions, although in our case we expect similar distributions for HC_3N, molecular anions, and their neutral counterparts, given that all of them are carbon chains. A low value of χ^2_red, typically ≲ 1, is also indicative of the goodness of the LVG analysis. If the quality of the LVG analysis is not satisfactory or the collision rate coefficients are not accurate, a rotation diagram may still provide a good estimate of the column density if the number of detected lines is high enough and they span a wide range of upper level energies. Therefore, a high number of detected lines makes it likely to end up with a correct determination of the column density. On the other hand, if only one or two lines are detected, the accuracy with which the column density can be determined relies heavily on whether the H_2 volume density, in the case of an LVG calculation, or the rotational temperature, in the case of the rotation diagram, is known with some confidence.
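For reference, a minimal rotation-diagram sketch is given below. It assumes optically thin emission in the Rayleigh-Jeans limit, with N_u = 8 π k ν^2 W / (h c^3 A_ul), and a straight-line fit of ln(N_u/g_u) versus E_u whose slope gives -1/T_rot; the line parameters in the example are made up for illustration only.

import numpy as np

K_B = 1.380649e-16       # erg K-1
H_PLANCK = 6.62607e-27   # erg s
C_LIGHT = 2.99792458e10  # cm s-1

def upper_level_column(W_K_kms, nu_hz, A_ul):
    # N_u (cm-2) from the integrated intensity W (K km s-1), optically thin, RJ limit
    return 8 * np.pi * K_B * nu_hz**2 * (W_K_kms * 1e5) / (H_PLANCK * C_LIGHT**3 * A_ul)

def rotation_diagram(W, nu, A_ul, g_u, E_u_K):
    # Straight-line fit of ln(N_u/g_u) vs E_u: slope = -1/T_rot, intercept = ln(N/Q)
    y = np.log(upper_level_column(np.asarray(W), np.asarray(nu), np.asarray(A_ul))
               / np.asarray(g_u))
    slope, intercept = np.polyfit(np.asarray(E_u_K, dtype=float), y, 1)
    return -1.0 / slope, np.exp(intercept)

T_rot, N_over_Q = rotation_diagram(W=[0.30, 0.26, 0.23], nu=[33.0e9, 35.8e9, 38.5e9],
                                   A_ul=[1.2e-6, 1.5e-6, 1.9e-6],
                                   g_u=[25, 27, 29], E_u_K=[9.8, 11.5, 13.3])
print(f"T_rot = {T_rot:.1f} K, N/Q = {N_over_Q:.2e} cm-2")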
In Table <ref> we present the results from the LVG analysis and the rotation diagram for all molecular anions detected in cold dark clouds and for the corresponding neutral counterparts, and compare the column densities derived with values from the literature, when available. In general, the column densities derived through the rotation diagram agree within 50 %, with those derived by the LVG analysis. The sole exceptions are C_8H in TMC-1 CP and C_6H in TMC-1 C. In the former case, the lack of specific collision rate coefficients for C_8H probably introduces an uncertainty in the determination of the column density. In the case of C_6H in TMC-1 C, the suspected problem in the collision rate coefficients used for C_6H (see below) is probably behind the too large column density derived by the LVG method.
We first discuss the excitation and abundance analyses carried out for negative ions. For the anions detected in TMC-1 CP through more than two lines, i.e., C_6H^-, C_8H^-, C_3N^-, and C_5N^-, the quality of the LVG analysis is good (in Fig. <ref> we show the case of C_3N^-). First, the number of lines available is sufficiently high and they cover a wide range of upper level energies. Second, the values of χ^2_ red are ≲ 1. And third, the H_2 densities derived are on the same order (within a factor of two) of that obtained through ^13C isotopologues of HC_3N. The rotational temperatures derived by the rotation diagram indicate subthermal excitation, which is consistent with the H_2 densities derived and the excitation analysis presented in Sect. <ref>. We note that the column densities derived by the rotation diagram are systematically higher, by ∼ 50 %, compared to those derived through the LVG analysis. These differences are due to the breakdown of various assumptions made in the frame of the rotation diagram method, mainly the assumption of a uniform excitation temperature across all transitions and the validity of the Rayleigh-Jeans limit. Only the assumption that exp(hν/kT_ex) - 1 = hν/kT_ex, implicitly made by the rotation diagram method in the Rayleigh-Jeans limit, already implies errors of 10-20 % in the determination of the column density for these anions. We therefore adopt as preferred values for the column densities those derived through the LVG method and assign an uncertainty of 15 %, which is the typical statistical error in the determination of the column density by the LVG analysis. The recommended values are given in Table <ref>. Based on the same arguments, we conclude that the LVG analysis is satisfactory for C_6H^- and C_5N^- in Lupus-1A , C_6H^- and C_4H^- in L1527, and C_6H^- in L483, and thus adopt the column densities derived by the LVG method with the same estimated uncertainty of 15 % (see Table <ref>). In other cases the LVG analysis is less reliable due to a variety of reasons: only one or two lines are available (C_4H^- in TMC-1 CP, C_8H^- and C_3N^- in Lupus-1A, C_4H^- in L483, and C_6H^- in the clouds L1521F, L1251A, L1512, L1172, L1389, and TMC-1 C), the parameter χ^2_ red is well above unity (C_4H^- in Lupus-1A), or the column density has a sizable error (C_6H^- in L1495B and L1544). In those cases we adopt the column densities derived by the LVG method but assign a higher uncertainty of 30 % (values are given in Table <ref>).
In order to derive anion-to-neutral abundance ratios, we applied the same analysis carried out for the anions to the corresponding neutral counterparts. We first focus on the radical C_6H. There is one striking issue in the LVG analysis carried out for this species: the H_2 volume densities derived through C_6H are systematically higher, by 1-2 orders of magnitude, than those derived through the ^13C isotopologues of HC_3N (see Fig. <ref>). This fact, together with the marked difference in the excitation pattern compared to that of C_6H^- discussed in Sect. <ref>, suggests that the collision coefficients adopted for C_6H, which are based on the C_6H – He system studied by <cit.>, are too small. A further problem when using the collision coefficients of <cit.> is that the line intensities from the ^2Π_1/2 state, which in TMC-1 CP are around 100 times smaller than those of the ^2Π_3/2 state, are overestimated by a factor of ∼ 10. All these issues indicate that it is worth undertaking calculations of the collision rate coefficients of C_6H with H_2. The suspected problem in the collision rate coefficients of C_6H makes us adopt a conservative uncertainty of 30 % in the column densities derived. Moreover, in those sources in which C_6H is observed through just a few lines (L1521F, L1251A, L1512, L1172, L1389, and TMC-1 C) we need to fix the H_2 density to the values derived through another density tracer (see Table <ref>), and given the marked difference between the H_2 densities derived through C_6H and other density tracers, it is likely that the C_6H column densities derived by the LVG method are unreliable. In these cases we therefore adopted as preferred C_6H column densities those obtained from the rotation diagram (see Table <ref>). For the other neutral radicals, we adopted the column densities derived by the LVG method with an estimated uncertainty of 15 % when the LVG analysis was satisfactory (C_3N and C_5N in TMC-1 CP, C_4H, C_3N, and C_5N in Lupus-1A, and C_4H in L1527) and a higher uncertainty of 30 % otherwise (C_4H and C_8H in TMC-1 CP, C_8H in Lupus-1A, and C_4H in L483).
The recommended column densities for molecular anions and their neutral counterparts, and the corresponding anion-to-neutral ratios, are given in Table <ref>. Since the lines of a given anion and its corresponding neutral counterpart were in most cases observed simultaneously, we expect the error due to calibration to cancel when computing anion-to-neutral ratios. We therefore subtracted the 10 % error due to calibration in the column densities when computing errors in the anion-to-neutral ratios. In general, the recommended anion-to-neutral abundance ratios agree within 50 % with the values reported in the literature, when available. Higher differences, of up to a factor of two, are found for C_6H^- in L1527 and L1495B and for C_5N^- in TMC-1 CP. The most drastic differences are found for the C_4H^-/C_4H abundance ratio, for which we derive values much higher than those reported in the literature. The differences are largely due to the fact that here we adopt a revised value of the dipole moment of C_4H (2.10 D; ), which is significantly higher than the value of 0.87 D calculated by <cit.> and adopted in previous studies. As a result, the column densities of C_4H are revised downward by a factor of ∼ 6, and consequently the C_4H^-/C_4H ratios are revised upward by the same factor.
§ DISCUSSION
Having at hand a quite complete observational picture of negative ions in the interstellar medium, as summarized in Table <ref>, it is interesting to examine which lessons can be learnt from this. There are at least two interesting aspects to discuss. First, how does the anion-to-neutral abundance ratio behave from one source to another, and whether the observed variations can be related to some property of the cloud. And second, within a given source, how does the anion-to-neutral abundance ratio vary for the different anions, and whether this can be related to the formation mechanism of anions.
Regarding the first point, since C_6H^- is the most widely observed anion, it is very convenient to focus on it to investigate the source-to-source behavior of negative ions. The detection of C_6H^- in L1527 and the higher C_6H^-/C_6H ratio derived in that source compared to that in TMC-1 CP led <cit.> to suggest that this was a consequence of the higher H_2 density in L1527 compared to TMC-1 CP. This point was later on revisited by <cit.> with a larger number of sources detected in C_6H^-. These authors found a trend in which the C_6H^-/C_6H ratio increases with increasing H_2 density and further argued that this ratio increases as the cloud evolves from quiescent to star-forming, with ratios below 3 % in quiescent sources and above that value in star-forming ones.
There are theoretical grounds that support a relationship between the C_6H^-/C_6H ratio and the H_2 density. Assuming that the formation of anions is dominated by radiative electron attachment to the neutral counterpart and that they are mostly destroyed through reaction with H atoms, as expected for the conditions of cold dense clouds <cit.>, it can be easily shown that at steady state the anion-to-neutral abundance ratio is proportional to the abundance ratio between electrons and H atoms; since the electron fraction scales roughly as n(H_2)^-1/2 in cosmic-ray ionized gas while the density of H atoms is nearly independent of n(H_2), this ratio is in turn proportional to the square root of the H_2 volume density (e.g., ). That is,
C_6H^-/C_6H ∝ e^-/H ∝ n(H_2)^1/2.
In Fig. <ref> we plot the observed C_6H^-/C_6H ratio as a function of the H_2 density for the 12 clouds where this anion has been detected. This is an extended and updated version of Figure 5 of <cit.>, where we superimpose the theoretical trend expected according to Eq. (<ref>). In general terms, the situation depicted by Fig. <ref> is not that different from that found by <cit.>. The main difference concerns L1495B, for which we derive a higher C_6H^-/C_6H ratio, 3.0 % instead of 1.4 %. Our value should be more accurate, given the larger number of lines used here. Apart from that, the C_6H^-/C_6H ratio tends to be higher in those sources with higher H_2 densities, which tend to be more evolved. This behavior is similar to that found by <cit.>. The data points in Fig. <ref> seem to be consistent with the theoretical expectation. We however caution that there is substantial dispersion in the data points. Moreover, the uncertainties in the anion-to-neutral ratios, together with those affecting the H_2 densities (not shown), make it difficult to end up with a solid conclusion on whether or not observations follow the theoretical expectations. If we restrict ourselves to the five best-characterized sources (TMC-1 CP, Lupus-1A, L1527, L483, and L1495B), all of them observed in C_6H^- through four or more lines and with H_2 densities determined in a coherent way, then the picture is such that all sources, regardless of their H_2 density, have similar C_6H^-/C_6H ratios, with the exception of L1527, which remains the only data point supporting the theoretical relation between anion-to-neutral ratio and H_2 density. It is also worth noting that when looking at C_4H^-, L1527 also shows an enhanced anion-to-neutral ratio compared to TMC-1 CP, Lupus-1A, and L483. Further detections of C_6H^- in sources with high H_2 densities, preferably above 10^5 cm^-3, should help to shed light on the suspected relation between anion-to-neutral ratio and H_2 density. This however may not be easy because chemical models predict that, although the C_6H^-/C_6H ratio increases with increasing H_2 density, an increase in the density also brings a decrease in the column density of both C_6H and C_6H^- <cit.>.
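As a worked example of the expected scaling, if the theoretical relation held exactly, a cloud ten times denser than a reference cloud should show a C_6H^-/C_6H ratio about three times higher. The snippet below evaluates this with purely illustrative reference values (2 % at 10^4 cm^-3), which are assumptions and not fitted to the data.

def expected_ratio(n_h2, ratio_ref=0.02, n_ref=1e4):
    # C6H-/C6H scaling with sqrt(n(H2)), normalized to an illustrative
    # reference ratio of 2% at n(H2) = 1e4 cm-3
    return ratio_ref * (n_h2 / n_ref) ** 0.5

for n in (1e4, 1e5, 1e6):
    print(f"n(H2) = {n:.0e} cm-3 -> C6H-/C6H ~ {100 * expected_ratio(n):.1f}%")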
The second aspect worth discussing is the variation of the anion-to-neutral ratio for different anions within a given source. Unlike the former source-to-source case, where variations were small (a factor of two at most), here anion-to-neutral ratios vary by orders of magnitude, i.e., well above uncertainties. Figure <ref> summarizes the observational situation of interstellar anions in terms of abundances relative to their neutral counterpart. The variation of the anion-to-neutral ratios across different anions is best appreciated in TMC-1 CP and Lupus-1A, which stand out as the two most prolific sources of interstellar anions. The lowest anion-to-neutral ratio is reached by far for C_4H^-, while the highest values are found for C_5N^- and C_8H^-. We caution that the C_5N^-/C_5N ratio could have been overestimated if the true dipole moment of C_5N is a mixture between those of the ^2Σ and ^2Π states, as discussed by <cit.>, in a case similar to that studied for C_4H by <cit.>. For the large anion C_7N^-, the anion-to-neutral ratio is not known in TMC-1 CP but it is probably large, as suggested by the detection of the lines of the anion and the non-detection of the lines of the neutral <cit.>. In the case of the even larger anion C_10H^-, the anion is found to be even more abundant than the neutral in TMC-1 CP by a factor of two, although this result probably has a significant uncertainty since the detection was done by line stacking <cit.>. Moreover, it is yet to be confirmed that the species identified is C_10H^- and not C_9N^- <cit.>. In any case, a solid conclusion from the TMC-1 CP and Lupus-1A data shown in Fig. <ref> is that when looking at either the hydrocarbon series of anions or at the nitrile series, the anion-to-neutral ratio clearly increases with increasing size. The most straightforward interpretation of this behavior is related to the formation mechanism originally proposed by <cit.>, which relies on radiative electron attachment (REA) to the neutral counterpart and for which the rate coefficient is predicted to increase markedly with increasing molecular size.
If electron attachment is the dominant formation mechanism of anions and destruction rates are similar for all anions, we expect the anion-to-neutral abundance ratio to be proportional to the rate coefficient of radiative electron attachment. That is,
A^-/A ∝ k_REA,
where A^- and A are the anion and its corresponding neutral counterpart, respectively, and k_ REA is the rate coefficient for radiative electron attachment to A.
To get insight into this relation we plot in Fig. <ref> the rate coefficients calculated for the reactions of electron attachment forming the different anions on a scale designed on purpose to visualize if observed anion-to-neutral ratios scale with calculated electron attachment rates. We arbitrarily choose C_6H^- as the reference for the discussion. If we first focus on the largest anion C_8H^-, we see that the C_8H^-/C_8H ratios are systematically higher, by a factor of 2-3, than the C_6H^-/C_6H ones, while <cit.> calculate identical electron attachment rates for C_6H and C_8H. Similarly, the C_5N^-/C_5N ratios are higher, by a factor of 6-8 than the C_6H^-/C_6H ratios, while the electron attachment rate calculated for C_5N is twice of that computed for C_6H in the theoretical scenario of <cit.>. That is, for the large anions C_8H^- and C_5N^- there is a deviation of a factor of 2-4 from the theoretical expectation given by Eq. (<ref>). This deviation is small given the various sources of uncertainties in both the observed anion-to-neutral ratio (mainly due to uncertainties in the dipole moments) and the calculated electron attachment rate coefficient. The situation is different for the medium size anions C_4H^- and C_3N^-. In the case of C_4H^-, anion-to-neutral ratios are ∼ 100 times lower than for C_6H^-, while the electron attachment rate calculated for C_4H is just ∼ 6 times lower than that computed for C_6H. The deviation from Eq. (<ref>) of a factor ∼ 20, which is significant, is most likely due to the electron attachment rate calculated for C_4H by <cit.> being too large. In the case of C_3N^-, the observed anion-to-neutral ratios are 4-6 times lower than those derived for C_6H^-, while the electron attachment rate calculated by <cit.> for C_3N is 300 times lower than that computed for C_6H by <cit.>. Here the deviation is as large as two orders of magnitude and it is probably caused by the too low electron attachment rate calculated for C_3N. In summary, calculated electron attachment rates are consistent with observed anion-to-neutral ratios for the large species but not for the medium-sized species C_4H and C_3N, in which cases calculated rates are too large by a factor of ∼ 20 and too small by a factor of ∼ 100, respectively.
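The comparison just described can be tabulated. The sketch below uses the approximate factors quoted in this section (all relative to C_6H^-/C_6H and to the electron attachment rate of C_6H), so the resulting deviation factors are rough; a deviation near unity means the observed ratio scales with the calculated attachment rate, while values well below or above unity point to attachment rates that are respectively too high or too low.

# (ratio relative to C6H-/C6H, k_REA relative to C6H), approximate values from the text
anions = {
    "C8H-": (2.5, 1.0),            # ratios 2-3x higher, same attachment rate as C6H
    "C5N-": (7.0, 2.0),            # ratios 6-8x higher, rate twice that of C6H
    "C4H-": (1.0 / 100, 1.0 / 6),  # ratios ~100x lower, rate ~6x lower
    "C3N-": (1.0 / 5, 1.0 / 300),  # ratios 4-6x lower, rate 300x lower
}
for name, (ratio_rel, k_rel) in anions.items():
    print(f"{name}: deviation = {ratio_rel / k_rel:.2g}")
# C8H- ~2.5 and C5N- ~3.5 (mild deviations), C4H- ~0.06 (attachment rate likely
# too high), C3N- ~60 (attachment rate likely too low)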
Of course, the above conclusion holds in the scenario of anion formation dominated by electron attachment and similar destruction rates for all anions, which may not be strictly valid. For example, it has been argued <cit.> that the process of radiative electron attachment is much less efficient than calculated by <cit.>, with rate coefficients that are too small to sustain the formation of anions in interstellar space. <cit.> discuss this point making the difference between direct and indirect radiative electron attachment, where for long carbon chains the direct process would be slow, corresponding to the rates calculated by <cit.>, while the indirect process could be fast if a long-lived superexcited anion is formed, something that has some experimental support. <cit.> conclude that there are enough grounds to support rapid electron attachment to large carbon chains, as calculated by <cit.>. The formation mechanism of anions through electron attachment is very selective for large species and thus has the advantage of naturally explaining the marked dependence of anion-to-neutral ratios with molecular size illustrated in Fig. <ref>, something that would be difficult to explain through other formation mechanism. Indeed, mechanisms such as dissociative electron attachment to metastable isomers such as HNC_3 and H_2C_6 <cit.> or reactions of H^- with polyynes and cyanopolyynes <cit.> could contribute to some extent but are unlikely to control the formation of anions since they can hardly explain why large anions are far more abundant than small ones.
§ CONCLUSIONS
We reported new detections of molecular anions in cold dense clouds and considerably expanded the number of lines through which negative ions are detected in interstellar clouds. The most prevalent anion remains to be C_6H^-, which to date has been seen in 12 interstellar clouds, while the rest of interstellar anions are observed in just 1-4 sources.
We carried out excitation calculations, which indicate that subthermal excitation is common for the lines of interstellar anions observed with radiotelescopes, with the low frequency lines of heavy anions being the easiest to thermalize. Important discrepancies between calculations and observations are found for the radical C_6H, which suggest that the collision rate coefficients currently available for this species need to be revisited.
We analyzed all the observational data acquired here and in previous studies through non-LTE LVG calculations and rotation diagrams to constrain the column density of each anion in each source. Differences in the anion-to-neutral abundance ratios with respect to literature values are small, less than 50 % in general and up to a factor of two in a few cases. The highest difference is found for the C_4H^-/C_4H ratio, which is shifted upward with respect to previous values due to the adoption of a higher dipole moment for the radical C_4H.
The observational picture of interstellar anions brought by this study shows two interesting results. On the one hand, the C_6H^-/C_6H ratio seems to be higher in clouds with a higher H_2 density, which is usually associated with a later evolutionary status of the cloud, although error bars make it difficult to clearly distinguish this trend. On the other hand, there is a very marked dependence of the anion-to-neutral ratio on the size of the anion, which is in line with the formation scenario involving radiative electron attachment, the theory of which must still be revised for medium-sized species such as C_4H and C_3N.
We acknowledge funding support from Spanish Ministerio de Ciencia e Innovación through grants PID2019-106110GB-I00, PID2019-107115GB-C21, and PID2019-106235GB-I00.
[Agúndez et al.(2008)]Agundez2008 Agúndez, M., Cernicharo, J., Guélin, M., et al. 2008, , 478, L19
[Agúndez et al.(2010)]Agundez2010 Agúndez, M., Cernicharo, J., Guélin, M., et al. 2010, , 517, L2
[Agúndez et al.(2015)]Agundez2015 Agúndez, M., Cernicharo, J., & Guélin, M. 2015, , 577, L5
[Agúndez et al.(2019)]Agundez2019 Agúndez, M., Marcelino, N., Cernicharo, J., et al. 2019, , 625, A147
[Agúndez et al.(2022)]Agundez2022 Agúndez, M., Marcelino, N., Cabezas, C., et al. 2022, , 657, A96
[Agúndez et al.(2023)]Agundez2023 Agúndez, M., Roncero, O., Marcelino, N., et al. 2023, , in press
[Alexander(1982)]Alexander1982 Alexander, M. H. 1982, , 76, 5974
[Alexander et al.(1986)]Alexander1986 Alexander, M. H., Smedley, J. E., & Corey, G. C. 1986, , 84, 3049
[Anglada et al.(1997)]Anglada1997 Anglada, G., Sepúlveda, I., & Gómez, J. F. 1997, , 121, 255
[Bacmann et al.(2002)]Bacmann2002 Bacmann, A., Lefloch, B., Ceccarelli, C., et al. 2002, , 389, L6
[Balança et al.(2021)]Balanca2021 Balança, C., Quintas-Sánchez, E., Dawes, R., et al. 2021, , 508, 1148
[Biswas et al.(2023)]Biswas2023 Biswas, R., Giri, K., González-Sánchez, L. et al. 2023, , 522, 5775
[Blanksby et al.(2001)]Blanksby2001 Blanksby, S. J., McAnoy, A. M., Dua, S., & Bowie, J. H. 2001, , 328, 89
[Bop et al.(2021)]Bop2021 Bop, C. T., Lique, F., Faure, A., et al. 2021, , 501, 1911
[Bop et al.(2022)]Bop2022 Bop, C. T., Desrousseaux, B., & Lique, F. 2022, , 662, A102
[Botschwina et al.(1995)]Botschwina1995 Botschwina, P., Seeger, S., Mladenovic, M., et al. 1995, , 14, 169
[Botschwina(2000)]Botschwina2000 Botschwina, P. 2000, 55th Ohio Symposium on Molecular Spectroscopy, TC06
[Botschwina & Oswald(2008)]Botschwina2008 Botschwina, P. & Oswald, R. 2008, , 129, 044305
[Brünken et al.(2007a)]Brunken2007a Brünken, S., Gupta, H., Gottlieb, C. A., et al. 2007a, , 664, L43
[Brünken et al.(2007b)]Brunken2007b Brünken, S., Gottlieb, C. A., Gupta, H., et al. 2007b, , 464, L33
[Cabezas et al.(2021)]Cabezas2021 Cabezas, C., Agúndez, M., Marcelino, N., et al. 2021, , 654, A45
[Cabezas et al.(2022)]Cabezas2022 Cabezas, C., Agúndez, M., Marcelino, N., et al. 2022, , 657, L4
[Carelli et al.(2013)]Carelli2013 Carelli, F., Satta, M., Grassi, T., & Gianturco, F. A. 2013, , 774, 97
[Cernicharo et al.(2007)]Cernicharo2007 Cernicharo, J., Guélin, M., Agúndez, M., et al. 2007, , 467, L37
[Cernicharo et al.(2008)]Cernicharo2008 Cernicharo, J., Guélin, M., Agúndez, M., et al. 2008, , 688, L83
[Cernicharo et al.(2012)]Cernicharo2012 Cernicharo, J., Marcelino, N., Roueff, E., et al. 2012, , 759, L43
[Cernicharo et al.(2020)]Cernicharo2020 Cernicharo, J., Marcelino, N., Pardo, J. R., et al. 2020, , 641, L9
[Cernicharo et al.(2021)]Cernicharo2021 Cernicharo, J., Agúndez, M., Kaiser, R. I., et al. 2021, , 652, L9
[Cernicharo et al.(2023a)]Cernicharo2023a Cernicharo, J., Pardo, J. R., Cabezas, C., et al. 2023a, , 670, L19
[Cernicharo et al.(2023b)]Cernicharo2023b Cernicharo, J., Tercero, B., Marcelino, N., et al. 2023b, , submitted
[Codella et al.(1997)]Codella1997 Codella, C., Welser, R., Henkel, C., et al. 1997, , 324, 203
[Cordiner et al.(2011)]Cordiner2011 Cordiner, M. A., Charnley, S. B., Buckle, J. V., et al. 2011, , 730, L18
[Cordiner & Charnley(2012)]Cordiner2012 Cordiner, M. A. & Charnley, S. B. 2012, , 749, 120
[Cordiner et al.(2013)]Cordiner2013 Cordiner, M. A., Buckle, J. V., Wirström, E. S., et al. 2013, , 770, 48
[Crapsi et al.(2005)]Crapsi2005 Crapsi, A., Caselli, P., Walmsley, C. M., et al. 2005, , 619, 379
[Douguet et al.(2015)]Douguet2015 Douguet, N., Fonseca dos Santos, S., Raoult, M., et al. 2015, , 142, 234309
[Dumouchel et al.(2012)]Dumouchel2012 Dumouchel, F., Spielfiedel, A., Senent, M. L., & Feautrier, N. 2012, , 533, 6
[Dumouchel et al.(2023)]Dumouchel2023 Dumouchel, F., Quintas-Sánchez, E., Balança, C., et al. 2023, , 158, 164307
[Faure et al.(2016)]Faure2016 Faure, A., Lique, A., & Wiesenfeld, L. 2016, , 460, 2103
[Fehér et al.(2016)]Feher2016 Fehér, O., Tóth, L. V., Ward-Thompson, D., et al. 2016, , 590, A75
[Flower et al.(2006)]Flower2006 Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2006, , 449, 621
[Flower et al.(2007)]Flower2007 Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2007, , 474, 923
[Forer et al.(2023)]Forer2023 Forer, J., Kokoouline, V., & Stoecklin, T. 2023, , 107, 043117
[Fossé et al.(2001)]Fosse2001 Fossé, D., Cernicharo, J., Gerin, M., & Cox, P. 2001, , 552, 168
[Franz et al.(2020)]Franz2020 Franz, J., Mant, B. P., González-Sánchez, L., et al. 2020, , 152, 234303
[Frayer et al.(2018)]Frayer2018 Frayer, D. T., Ghigo, F., & Maddalena, R. J. 2018, GBT Memo #301
[Gianturco et al.(2016)]Gianturco2016 Gianturco, F. A., Satta, M., Mendolicchio, M., et al. 2016, , 830, 2
[Gianturco et al.(2019)]Gianturco2019 Gianturco, F. A., González-Sánchez, L., Mant, B. P., & Wester, R. 2019, , 151, 144304
[González-Sánchez et al.(2020)]Gonzalez-Sanchez2020 González-Sánchez, L., Mant, B. P., Wester, R., & Gianturco, F. A. 2020, , 897, 75
[Gottlieb et al.(2007)]Gottlieb2007 Gottlieb, C. A., Brünken, S., McCarthy, M. C., & Thaddeus, P. 2007, , 126, 191101
[Gupta et al.(2007)]Gupta2007 Gupta, H., Brünken, S., Tamassia, F., et al. 2007, , 655, L57
[Gupta et al.(2009)]Gupta2009 Gupta, H., Gottlieb, C. A., McCarthy, M. C., & Thaddeus, P. 2009, , 691, 1494
[Harada & Herbst(2008)]Harada2008 Harada, N. & Herbst, E. 2008, , 685, 272
[Herbst(1981)]Herbst1981 Herbst, E. 1981, , 289, 656
[Herbst & Osamura(2008)]Herbst2008 Herbst, E. & Osamura, Y. 2008, , 679, 1670
[Jiménez-Serra et al.(2016)]Jimenez-Serra2016 Jiménez-Serra, I., Vasyunin, A. I., Caselli, P., et al. 2016, , 830, L6
[Jørgensen et al.(2002)]Jorgensen2002 Jørgensen, J. K., Schöier, F. L., & van Dishoeck, E. F. 2002, , 389, 908
[Khamesian et al.(2016)]Khamesian2016 Khamesian, M., Douguet, N., Fonseca dos Santos, S., et al. 2016, , 117, 123001
[Kłos & Lique(2011)]Klos2011 Kłos, J. & Lique, F. 2011, , 418, 271
[Kołos et al.(2008)]Kolos2008 Kołos, R., Gronowski, M., & Botschwina, P. 2008, , 128, 154305
[Lara-Moreno et al.(2017)]Lara-Moreno2017 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2017, , 467, 4174
[Lara-Moreno et al.(2019)]Lara-Moreno2019 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2019, , 486, 414
[Lara-Moreno et al.(2021)]Lara-Moreno2021 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2021, , 507, 4086
[McCarthy et al.(1995)]McCarthy1995 McCarthy, M. C., Gottlieb, C. A., Thaddeus, P., et al. 1995, , 103, 7820
[McCarthy et al.(2006)]McCarthy2006 McCarthy, M. C., Gottlieb, C. A., Gupta, H., & Thaddeus, P. 2006, , 652, L141
[Marcelino et al.(2007)]Marcelino2007 Marcelino, N., Cernicharo, J., Agúndez, M., et al. 2007, , 665, L127
[Martínez et al.(2010)]Martinez2010 Martínez Jr., O., Yang, Z., Demarais, N. J., et al. 2010, , 720, 173
[Millar et al.(2017)]Millar2017 Millar, T. J., Walsh, C., & Field, T. A. 2017, , 117, 1765
[Murakami et al.(2022)]Murakami2022 Murakami, T., Iida, R., Hashimoto, Y., et al. 2022, , 126, 9244
[Oyama et al.(2020)]Oyama2020 Oyama, T., Ozaki, H., Sumiyoshi, Y., et al. 2020, , 890, 39
[Pardo et al.(2023)]Pardo2023 Pardo, J. R., Cabezas, C., Agúndez, M., et al. 2023, , submitted
[Petrie & Herbst(1997)]Petrie1997 Petrie, S. & Herbst, E. 1997, , 491, 210
[Punanova et al.(2018)]Punanova2018 Punanova, A., Caselli, P., Feng, S., et al. 2018, , 855, 112
[Remijan et al.(2007)]Remijan2007 Remijan, A. J., Hollis, J. M., Lovas, F. J., et al. 2007, , 664, L47
[Remijan et al.(2023)]Remijan2023 Remijan, A., Scolati, H. N., Burkhardt, A. M., et al. 2023, , 944, L45
[Sakai et al.(2007)]Sakai2007 Sakai, N., Sakai, T., Osamura, Y., & Yamamoto, S. 2007, , 667, L65
[Sakai et al.(2008)]Sakai2008 Sakai, N., Sakai, T., Hirota, T., & Yamamoto, S. 2008, , 672, 371
[Sakai et al.(2010)]Sakai2010 Sakai, N., Shiino, T., Hirota, T., et al. 2010, , 718, L49
[Senent et al.(2019)]Senent2019 Senent, M. L., Dayou, F., Dumouchel, F., et al. 2019, , 486, 422
[Spezzano et al.(2017)]Spezzano2017 Spezzano, S., Caselli, P., Bizzocchi, L., et al. 2017, , 606, A82
[Suzuki et al.(1992)]Suzuki1992 Suzuki, H., Yamamoto, S., Ohishi, M., et al. 1992, , 392, 551
[Tafalla et al.(2002)]Tafalla2002 Tafalla, M., Myers, P. C., Caselli, P., et al. 2002, , 569, 815
[Tchakoua et al.(2018)]Tchakoua2018 Tchakoua, T., Motapon, O., & Nsangou, M. 2018, , 51, 045202
[Tercero et al.(2021)]Tercero2021 Tercero, F., López-Pérez, J. A., Gallego, J. D., et al. 2021, , 645, A37
[Thaddeus et al.(2008)]Thaddeus2008 Thaddeus, P., Gottlieb, C. A., Gupta, H., et al. 2008, , 677, 1132
[Toumi et al.(2021)]Toumi2021 Toumi, I., Yazidi, O., & Najar, F. 2021, , 11, 13579
[Vastel et al.(2018)]Vastel2018 Vastel, C., Quénard, D., Le Gal, R., et al. 2018, , 478, 5514
[Visser et al.(2002)]Visser2002 Visser, A. E., Richer, J. S., & Chandler, C. J. 2002, , 124, 2756
[Vuitton et al.(2009)]Vuitton2009 Vuitton, V., Lavvas, P., Yelle, R. V., et al. 2009, , 57, 1558
[Walker et al.(2016)]Walker2016 Walker, K. M., Dumouchel, F., Lique, F., & Dawes, R. 2016, , 145, 024314
[Walker et al.(2017)]Walker2017 Walker, K. M., Lique, F., Dumouchel, F., & Dawes, R. 2017, , 466, 831
[Walker et al.(2018)]Walker2018 Walker, K. M., Lique, F., & Dawes, R. 2018, , 473, 1407
[Walsh et al.(2009)]Walsh2009 Walsh, C., Harada, N., Herbst, E., Millar, T. J. 2009, , 700, 752
[Woon(1995)]Woon1995 Woon, D. E. 1995, , 244, 45
[Yoshida et al.(2019)]Yoshida2019 Yoshida, K., Sakai, N., Nishimura, Y., et al. 2019, , 71, S18
§ SUPPLEMENTARY TABLE
Observed line parameters of molecular anions in interstellar clouds.
Species   Transition   Frequency (MHz)   V_LSR (km s^-1)   Δ v (km s^-1)   T_A^* peak (mK) ^a   ∫ T_A^* dv (mK km s^-1) ^a   Telescope   Reference
TMC-1 CP
C_6H^- 4-3 11014.896 +5.80(2) 0.38(4) 25(3) 10.1(33) GBT <cit.>
5-4 13768.614 +5.80(11) 0.44(7) 24(3) 11.2(43) GBT <cit.>
10-9 / 11-10 27537.130 / 30290.813 41.6(90) ^b, c GBT <cit.>
12-11 33044.488 +5.78(1) 0.73(1) 22.3(23) 17.4(18) Yebes 40m This work
13-12 35798.153 +5.78(1) 0.70(1) 20.9(22) 15.5(17) Yebes 40m This work
14-13 38551.808 +5.78(1) 0.64(2) 18.9(20) 12.8(14) Yebes 40m This work
15-14 41305.453 +5.79(2) 0.56(3) 17.2(19) 10.3(12) Yebes 40m This work
16-15 44059.085 +5.79(2) 0.57(3) 12.8(15) 7.7(10) Yebes 40m This work
17-16 46812.706 +5.81(2) 0.59(4) 9.6(13) 6.0(8) Yebes 40m This work
18-17 49566.313 +5.84(3) 0.56(5) 5.4(10) 3.2(5) Yebes 40m This work
C_4H^- 2-1 18619.761 +5.70(5) 0.43(13) 1.0(3) ^b, d GBT <cit.>
4-3 37239.410 +5.81(2) 0.71(2) 6.0(7) 4.5(6) Yebes 40m This work
5-4 46549.156 +5.81(2) 0.55(3) 5.8(8) 3.4(4) Yebes 40m This work
C_8H^- 11-10 12833.460 +5.71(5) 0.36(4) 8(1) 3.1(10) GBT <cit.>
12-11 14000.134 +5.86(5) 0.37(4) 7(1) 2.8(10) GBT <cit.>
13-12 15166.806 +5.84(6) 0.45(4) 6(1) 2.9(10) GBT <cit.>
16-15 18666.814 +5.80(7) 0.34(5) 10(2) 3.6(16) GBT <cit.>
27-26 31500.029 +5.82(4) 0.63(10) 1.28(28) 0.86(20) Yebes 40m This work
28-27 32666.670 +5.76(3) 0.76(6) 1.08(26) 0.87(15) Yebes 40m This work
29-28 33833.309 +5.90(12) 0.68(17) 0.78(19) 0.56(18) Yebes 40m This work
30-29 34999.944 +5.86(6) 0.60(10) 0.87(20) 0.56(14) Yebes 40m This work
31-30 36166.576 +5.83(8) 0.32(20) 1.01(24) 0.34(10) Yebes 40m This work
32-31 37333.205 +5.73(5) 0.66(11) 0.87(23) 0.61(16) Yebes 40m This work
33-32 38499.831 +5.81(9) 0.82(17) 0.68(20) 0.60(18) Yebes 40m This work
34-33 39666.453 +5.93(10) 0.40(12) 0.44(21) 0.19(7) ^e Yebes 40m This work
C_3N^- 4-3 38812.797 +5.78(1) 0.88(2) 4.2(2) 3.9(5) Yebes 40m This work
5-4 48515.872 +5.86(2) 0.61(4) 6.3(9) 4.1(6) Yebes 40m This work
8-7 77624.540 +5.88(3) 0.52(8) 7.1(17) 3.9(9) IRAM 30m This work
10-9 97029.687 +5.77(4) 0.38(6) 2.7(8) 1.1(3) IRAM 30m This work
C_5N^- 12-11 33332.570 +5.83(1) 0.71(3) 6.5(7) 4.9(6) Yebes 40m This work
13-12 36110.238 +5.80(1) 0.64(2) 6.1(7) 4.1(5) Yebes 40m This work
14-13 38887.896 +5.81(1) 0.63(2) 6.5(8) 4.4(5) Yebes 40m This work
15-14 41665.541 +5.82(2) 0.58(2) 5.7(7) 3.5(5) Yebes 40m This work
16-15 44443.173 +5.79(2) 0.56(2) 4.7(6) 2.8(4) Yebes 40m This work
17-16 47220.793 +5.81(2) 0.50(4) 3.6(6) 1.9(3) Yebes 40m This work
Lupus-1A
C_6H^- 7-6 19276.037 +5.046(8) 0.16(2) 85(8) ^b 14(2) ^b GBT <cit.>
8-7 22029.741 +5.034(10) 0.17(2) 94(11) ^b 15(3) ^b GBT <cit.>
12-11 33044.488 +5.06(2) 0.59(3) 30.1(37) 18.9(24) Yebes 40m This work
13-12 35798.153 +5.08(2) 0.51(3) 32.9(40) 17.8(25) Yebes 40m This work
14-13 38551.808 +5.05(2) 0.48(4) 30.4(38) 15.7(20) Yebes 40m This work
15-14 41305.453 +5.09(3) 0.40(7) 32.7(42) 13.8(19) Yebes 40m This work
16-15 44059.085 +5.07(3) 0.55(6) 24.2(35) 14.2(22) Yebes 40m This work
17-16 46812.706 +5.10(6) 0.51(8) 17.1(33) 9.3(18) Yebes 40m This work
C_4H^- 4-3 37239.410 +5.078(13) 0.34(3) 59(5) ^b 19(5) ^b GBT <cit.>
4-3 37239.410 +5.04(4) 0.78(7) 7.4(14) 6.1(11) Yebes 40m This work
5-4 46549.156 +5.05(9) 0.45(12) 9.8(27) 4.7(13) Yebes 40m This work
9-8 83787.297 +5.23(6) 0.47(12) 10.4(31) 5.3(13) IRAM 30m This work
C_8H^- 16-15 / 18-17 18666.814 / 21000.145 +5.014(11) 0.09(3) 35(9) 4(1) ^b, c GBT <cit.>
C_3N^- 4-3 38812.797 +5.16(15) 0.96(15) 2.8(10) 2.8(9) Yebes 40m This work
C_5N^- 12-11 33332.570 +5.11(7) 0.50(9) 8.4(16) 4.4(10) Yebes 40m This work
13-12 36110.238 +5.11(7) 0.44(9) 6.5(13) 3.1(7) Yebes 40m This work
14-13 38887.896 +5.13(7) 0.64(8) 8.0(17) 5.4(11) Yebes 40m This work
15-14 41665.541 +5.14(9) 0.37(10) 9.2(19) 3.7(9) Yebes 40m This work
16-15 44443.173 +5.09(10) 0.58(15) 6.1(18) 3.8(11) Yebes 40m This work
L1527
C_6H^- 7-6 19276.037 +5.93(9) 0.45(11) 14(3) ^b 7(2) ^b GBT <cit.>
8-7 22029.741 +5.89(3) 0.49(10) 26(4) ^b 18(4) ^b GBT <cit.>
12-11 33044.488 +5.90(5) 0.85(10) 9.6(14) 8.6(16) Yebes 40m This work
13-12 35798.153 +5.85(4) 0.60(4) 11.4(20) 7.3(18) Yebes 40m This work
14-13 38551.808 +5.84(3) 0.61(5) 12.0(18) 7.8(12) Yebes 40m This work
15-14 41305.453 +5.90(3) 0.60(4) 16.4(25) 10.4(19) Yebes 40m This work
16-15 44059.085 +5.90(3) 0.52(4) 14.5(23) 8.0(16) Yebes 40m This work
17-16 46812.706 +5.83(5) 0.58(8) 11.1(23) 6.8(14) Yebes 40m This work
C_4H^- 4-3 37239.410 +5.92(12) 0.80(20) 3.2(10) 2.7(7) Yebes 40m This work
5-4 46549.156 +6.05(15) 0.73(15) 4.9(19) 3.8(13) Yebes 40m This work
9-8 83787.297 +5.80(3) 0.62(9) 13(2) 8(1) IRAM 30m <cit.>
10-9 93096.550 +5.90(4) 0.59(9) 11(2) 7(1) IRAM 30m <cit.>
L483
C_6H^- 12-11 33044.488 +5.38(6) 0.66(8) 4.9(11) 3.4(8) Yebes 40m This work
13-12 35798.153 +5.33(5) 0.70(7) 5.8(10) 4.3(8) Yebes 40m This work
14-13 38551.808 +5.33(5) 0.78(7) 5.2(9) 4.3(9) Yebes 40m This work
15-14 41305.453 +5.29(6) 0.46(9) 5.3(12) 2.6(6) Yebes 40m This work
16-15 44059.085 +5.24(10) 0.75(12) 4.8(12) 3.8(10) Yebes 40m This work
17-16 46812.706 +5.34(7) 0.63(9) 5.0(14) 3.4(9) Yebes 40m This work
C_4H^- 4-3 37239.410 +5.39(8) 0.73(12) 2.8(7) 2.2(5) Yebes 40m This work
5-4 46549.156 +5.37(10) 0.44(15) 2.7(12) 1.3(5) ^e Yebes 40m This work
L1495B
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 9.6(20) ^b, c GBT <cit.>
12-11 33044.488 +7.66(5) 0.80(7) 5.9(12) 5.0(9) Yebes 40m This work
13-12 35798.153 +7.65(5) 0.50(8) 5.8(12) 3.1(6) Yebes 40m This work
14-13 38551.808 +7.58(7) 0.39(10) 4.3(11) 1.8(4) Yebes 40m This work
15-14 41305.453 +7.66(10) 0.36(14) 6.6(16) 2.6(6) Yebes 40m This work
16-15 44059.085 +7.61(8) 0.49(12) 4.1(11) 2.1(6) Yebes 40m This work
L1544
C_6H^- 7-6 19276.037 ^e +7.08(3) / +7.30(3) 0.16(3) / 0.13(3) 16(2) / 26(2) 6.0(18) GBT <cit.>
12-11 33044.488 +7.11(13) 0.67(28) 4.5(16) 3.2(14) Yebes 40m This work
13-12 35798.153 +7.04(10) 0.48(16) 4.1(12) 2.1(9) Yebes 40m This work
14-13 38551.808 +6.98(8) 0.50(13) 6.0(16) 3.2(12) Yebes 40m This work
15-14 41305.453 +7.34(18) 0.76(36) 4.6(15) 3.7(16) Yebes 40m This work
L1521F
C_6H^- 7-6 19276.037 ^e +6.33(5) / +6.64(5) 0.18(3) / 0.35(9) 17(2) / 9(2) 7.0(17) GBT <cit.>
L1251A
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 6.5(17) ^b, c GBT <cit.>
L1512
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 4.3(8) ^b, c GBT <cit.>
L1172
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 6.7(15) ^b, c GBT <cit.>
L1389
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 5.9(14) ^b, c GBT <cit.>
TMC-1 C
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 13.6(25) ^b, c GBT <cit.>
^a Unless otherwise stated, the intensity scale is antenna temperature (T_A^*). It can be converted to main beam brightness temperature (T_ mb) by dividing by B_ eff/F_ eff, where B_ eff is the main beam efficiency and F_ eff is the telescope forward efficiency. For the Yebes 40m telescope in the Q band B_ eff = 0.797 exp[-(ν(GHz)/71.1)^2] and F_ eff = 0.97 (), for the IRAM 30m telescope B_ eff = 0.871 exp[-(ν(GHz)/359)^2] and F_ eff = 0.95 (), and for the GBT telescope we adopt F_ eff = 1.0 and B_ eff = 1.32 × 0.71 exp[-(ν(GHz)/103.7)^2] <cit.>. The error in ∫ T_A^* dv includes the contributions from the Gaussian fit and from calibration (assumed to be 10 %). ^b Intensity scale is T_ mb. ^c Average of two lines. ^d Line neglected in the analysis. Intensity should be ∼ 3 times larger to be consistent with the other lines.
^e Line detected marginally.
Observed velocity-integrated line intensities of neutral counterparts of molecular anions in interstellar clouds.
Species   Transition   Frequency (MHz)   ∫ T_A^* dv (mK km s^-1) ^a   Telescope   Reference
TMC-1 CP
C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 133(24) ^b GBT <cit.>
^2Π_3/2 J=15/2-13/2 b 20794.475 112(22) ^b GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 332.4(420) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 175.6(176) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 173.5(175) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 158.9(160) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 158.5(160) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 141.5(180) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 141.1(175) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 119.3(149) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 118.6(147) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 93.4(106) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 93.3(106) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 73.0(98) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 73.4(99) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 52.6(73) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 52.2(70) Yebes 40m This work
C_4H N=2-1 J=3/2-1/2 19054.476 411.3(418) ^b GBT <cit.>
N=4-3 J=9/2-7/2 38049.654 1369(138) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 1007(102) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 1094(111) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 864(87) Yebes 40m This work
N=9-8 J=19/2-17/2 85634.010 417(53) IRAM 30m <cit.>
N=9-8 J=17/2-15/2 85672.580 386(49) IRAM 30m <cit.>
N=10-9 J=21/2-19/2 95150.393 251(26) IRAM 30m This work
N=10-9 J=19/2-17/2 95188.947 243(26) IRAM 30m This work
N=11-10 J=23/2-21/2 104666.568 111(12) IRAM 30m This work
N=11-10 J=21/2-19/2 104705.108 105(13) IRAM 30m This work
N=12-11 J=25/2-23/2 114182.523 60(8) IRAM 30m This work
N=12-11 J=23/2-21/2 114221.023 47(6) IRAM 30m This work
C_8H ^2Π_3/2 J=53/2-51/2 a 31093.035 6.0(7) Yebes 40m This work
^2Π_3/2 J=53/2-51/2 b 31093.415 4.4(6) Yebes 40m This work
^2Π_3/2 J=55/2-53/2 a 32266.325 4.3(6) Yebes 40m This work
^2Π_3/2 J=55/2-53/2 b 32266.735 4.2(6) Yebes 40m This work
^2Π_3/2 J=57/2-55/2 a 33439.612 3.5(5) Yebes 40m This work
^2Π_3/2 J=57/2-55/2 b 33440.052 3.4(6) Yebes 40m This work
^2Π_3/2 J=59/2-57/2 b 34613.367 2.7(3) Yebes 40m This work
^2Π_3/2 J=61/2-59/2 a 35786.176 3.0(4) Yebes 40m This work
^2Π_3/2 J=61/2-59/2 b 35786.679 2.4(3) Yebes 40m This work
^2Π_3/2 J=63/2-61/2 a 36959.452 2.3(3) Yebes 40m This work
^2Π_3/2 J=63/2-61/2 b 36959.989 2.2(3) Yebes 40m This work
^2Π_3/2 J=65/2-63/2 a 38132.725 1.7(2) Yebes 40m This work
^2Π_3/2 J=65/2-63/2 b 38133.297 1.5(2) Yebes 40m This work
^2Π_3/2 J=67/2-65/2 a 39305.995 1.4(2) Yebes 40m This work
^2Π_3/2 J=67/2-65/2 b 39306.602 1.4(2) Yebes 40m This work
^2Π_3/2 J=69/2-67/2 a 40479.260 1.2(2) Yebes 40m This work
^2Π_3/2 J=69/2-67/2 b 40479.904 1.2(2) Yebes 40m This work
^2Π_3/2 J=71/2-69/2 a 41652.522 0.8(1) Yebes 40m This work
^2Π_3/2 J=71/2-69/2 b 41653.203 0.9(1) Yebes 40m This work
^2Π_3/2 J=73/2-71/2 a 42825.779 0.7(1) Yebes 40m This work
^2Π_3/2 J=73/2-71/2 b 42826.499 0.7(1) Yebes 40m This work
C_3N N=4-3 J=9/2-7/2 39571.347 332(34) Yebes 40m This work
N=4-3 J=7/2-5/2 39590.181 240(25) Yebes 40m This work
N=5-4 J=11/2-9/2 49466.421 244(25) Yebes 40m This work
N=5-4 J=9/2-7/2 49485.224 198(20) Yebes 40m This work
N=9-8 J=19/2-17/2 89045.583 64.2(73) IRAM 30m This work
N=9-8 J=17/2-15/2 89064.347 58.6(68) IRAM 30m This work
N=10-9 J=21/2-19/2 98940.087 28.1(36) IRAM 30m This work
N=10-9 J=19/2-17/2 98958.770 22.7(30) IRAM 30m This work
N=11-10 J=23/2-21/2 108834.254 11.6(24) IRAM 30m This work
N=11-10 J=21/2-19/2 108853.012 21.2(35) IRAM 30m This work
C_5N N=12-11 J=25/2-23/2 33668.234 5.6(7) Yebes 40m This work
N=12-11 J=23/2-21/2 33678.966 5.9(7) Yebes 40m This work
N=13-12 J=27/2-25/2 36474.308 5.8(7) Yebes 40m This work
N=13-12 J=25/2-23/2 36485.042 5.5(7) Yebes 40m This work
N=14-13 J=29/2-27/2 39280.369 5.1(7) Yebes 40m This work
N=14-13 J=27/2-25/2 39291.105 5.0(7) Yebes 40m This work
N=15-14 J=31/2-29/2 42086.415 4.7(6) Yebes 40m This work
N=15-14 J=29/2-27/2 42097.151 4.4(6) Yebes 40m This work
N=16-15 J=33/2-31/2 44892.444 4.6(6) Yebes 40m This work
N=16-15 J=31/2-29/2 44903.182 4.4(6) Yebes 40m This work
N=17-16 J=35/2-33/2 47698.457 3.7(5) Yebes 40m This work
N=17-16 J=33/2-31/2 47709.196 3.4(5) Yebes 40m This work
Lupus-1A
C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 114(14) ^b GBT <cit.>
^2Π_3/2 J=15/2-13/2 b 20794.475 131(16) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 150.3(166) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 153.1(163) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 151.6(161) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 150.0(159) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 140.3(143) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 141.0(148) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 126.2(134) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 124.8(130) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 115.5(123) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 114.9(123) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 90.7(125) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 91.3(128) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 73.6(109) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 66.9(103) Yebes 40m This work
C_4H N=4-3 J=9/2-7/2 38049.654 1219(123) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 921(94) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 1123(114) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 846(86) Yebes 40m This work
N=8-7 J=17/2-15/2 76117.439 1124(114) IRAM 30m This work
N=8-7 J=15/2-13/2 76156.028 1024(104) IRAM 30m This work
N=9-8 J=19/2-17/2 85634.010 779(83) IRAM 30m This work
N=9-8 J=17/2-15/2 85672.580 730(77) IRAM 30m This work
N=11-10 J=23/2-21/2 104666.568 349(39) IRAM 30m This work
N=11-10 J=21/2-19/2 104705.108 334(38) IRAM 30m This work
C_8H ^2Π_3/2 J=33/2-31/2 a 19359.975 10(2) ^b GBT <cit.>
^2Π_3/2 J=33/2-31/2 b 19360.123 9(2) ^b GBT <cit.>
C_3N N=4-3 J=9/2-7/2 39571.347 251(30) Yebes 40m This work
N=4-3 J=7/2-5/2 39590.181 175(19) Yebes 40m This work
N=5-4 J=11/2-9/2 49466.421 177(19) Yebes 40m This work
N=5-4 J=9/2-7/2 49485.224 138(15) Yebes 40m This work
N=9-8 J=19/2-17/2 89045.583 141.5(150) IRAM 30m This work
N=9-8 J=17/2-15/2 89064.347 126.7(136) IRAM 30m This work
N=10-9 J=21/2-19/2 98940.087 74.6(83) IRAM 30m This work
N=10-9 J=19/2-17/2 98958.770 66.0(74) IRAM 30m This work
C_5N N=12-11 J=25/2-23/2 33668.234 4.5(12) Yebes 40m This work
N=12-11 J=23/2-21/2 33678.966 7.0(14) Yebes 40m This work
N=13-12 J=27/2-25/2 36474.308 4.8(11) Yebes 40m This work
N=13-12 J=25/2-23/2 36485.042 5.7(11) Yebes 40m This work
N=14-13 J=29/2-27/2 39280.369 7.8(24) Yebes 40m This work
N=14-13 J=27/2-25/2 39291.105 5.7(15) Yebes 40m This work
N=15-14 J=31/2-29/2 42086.415 4.1(9) Yebes 40m This work
N=15-14 J=29/2-27/2 42097.151 4.8(11) Yebes 40m This work
N=16-15 J=33/2-31/2 44892.444 3.2(9) Yebes 40m This work
N=16-15 J=31/2-29/2 44903.182 1.8(8) ^d Yebes 40m This work
L1527
C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 24(5) ^b GBT <cit.>
^2Π_3/2 J=15/2-13/2 b 20794.475 21(5) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 34.8(75) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 26.0(59) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 29.3(34) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 31.8(37) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 31.7(46) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 32.2(51) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 32.7(50) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 32.3(48) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 30.2(47) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 31.1(49) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 30.5(48) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 31.3(49) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 27.3(48) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 26.9(47) Yebes 40m This work
C_4H N=4-3 J=9/2-7/2 38049.654 388(39) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 295(30) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 434(44) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 347(35) Yebes 40m This work
N=9-8 J=19/2-17/2 85634.010 747(86) IRAM 30m <cit.>
N=9-8 J=17/2-15/2 85672.580 712(82) IRAM 30m <cit.>
N=11-10 J=23/2-21/2 104666.568 542(64) IRAM 30m <cit.>
N=11-10 J=21/2-19/2 104705.108 487(59) IRAM 30m <cit.>
N=12-11 J=25/2-23/2 114182.523 462(59) IRAM 30m <cit.>
N=12-11 J=23/2-21/2 114221.023 406(53) IRAM 30m <cit.>
L483
C_6H ^2Π_3/2 J=23/2-21/2 a 31881.860 29.4(34) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 31.0(36) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 28.4(32) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 27.7(31) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 26.2(29) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 26.2(30) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 24.4(28) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 23.2(27) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 19.7(23) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 20.4(24) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 13.6(22) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 14.2(21) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 13.0(23) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 13.7(24) Yebes 40m This work
C_4H N=4-3 J=9/2-7/2 38049.654 470(48) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 356(36) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 439(50) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 352(36) Yebes 40m This work
N=8-7 J=17/2-15/2 76117.439 375(38) IRAM 30m This work
N=8-7 J=15/2-13/2 76156.028 337(35) IRAM 30m This work
N=9-8 J=19/2-17/2 85634.010 272(27) IRAM 30m <cit.>
N=9-8 J=17/2-15/2 85672.580 249(24) IRAM 30m <cit.>
N=10-9 J=21/2-19/2 95150.393 157(15) IRAM 30m <cit.>
N=10-9 J=19/2-17/2 95188.947 147(14) IRAM 30m <cit.>
N=11-10 J=23/2-21/2 104666.568 110(10) IRAM 30m <cit.>
N=11-10 J=21/2-19/2 104705.108 100(9) IRAM 30m <cit.>
N=12-11 J=25/2-23/2 114182.523 64(6) IRAM 30m <cit.>
N=12-11 J=23/2-21/2 114221.023 64(6) IRAM 30m <cit.>
L1495B
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 55(10) ^c GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 55(10) ^c GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 141.6(164) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 51.9(59) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 47.9(53) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 46.8(52) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 45.4(51) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 42.8(49) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 36.2(42) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 37.7(42) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 33.3(40) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 33.6(40) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 24.7(38) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 24.0(35) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 19.4(32) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 18.6(33) Yebes 40m This work
L1544
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 51(11) GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 50(11) GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 23.8(36) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 30.0(44) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 25.7(39) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 31.6(48) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 23.3(36) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 19.9(34) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 18.0(31) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 13.6(26) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 12.1(23) Yebes 40m This work
L1521F
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 36(10) GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 26(9) GBT <cit.>
L1251A
C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 36(8) GBT <cit.>
^2Π_3/2 J=21/2-19/2 b 29112.730 35(8) GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 43.6(65) ^b GBT <cit.>
L1512
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 20(7) ^c GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 20(7) ^c GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 27(5) GBT <cit.>
^2Π_3/2 J=21/2-19/2 b 29112.730 28(5) GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 26.3(35) ^b GBT <cit.>
L1172
C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 41.1(57) ^b GBT <cit.>
L1389
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 10(6) ^c GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 10(6) ^c GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 27.1(40) ^b GBT <cit.>
TMC-1 C
C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 88.1(105) ^b GBT <cit.>
^a Unless otherwise stated, the intensity scale is antenna temperature (T_A^*). It can be converted to main beam brightness temperature (T_ mb) by dividing by B_ eff/F_ eff (see caption of Table <ref>). The error in ∫ T_A^* dv includes the contributions from the Gaussian fit and from calibration (assumed to be 10 %). ^b Intensity scale is T_ mb. ^c Intensity distributed equally among the two fine components. ^d Marginal detection.
|
http://arxiv.org/abs/2307.06342v1 | 20230712114554 | ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image Compression | [
"Ahmed Ghorbel",
"Wassim Hamidouche",
"Luce Morin"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image Compression
Ahmed Ghorbel, Wassim Hamidouche, Luce Morin
================================================================================
Over the last few years, neural image compression has gained wide attention from research and industry, yielding promising end-to-end deep neural codecs outperforming their conventional counterparts in rate-distortion performance.
Despite significant advances, current methods, including attention-based transform coding, still struggle to reduce the coding rate while preserving reconstruction fidelity, especially in non-homogeneous textured image areas. These models also require many parameters and long decoding times.
To tackle the above challenges, we propose ConvNeXt-ChARM, an efficient ConvNeXt-based transform coding framework, paired with a compute-efficient channel-wise auto-regressive prior that captures both global and local contexts from the hyper and quantized latent representations. The proposed architecture can be optimized end-to-end to fully exploit the context information and extract compact latent representations while reconstructing higher-quality images.
Experimental results on four widely-used datasets showed that ConvNeXt-ChARM brings consistent and significant BD-rate (PSNR) reductions estimated on average to 5.24% and 1.22% over the VVC reference encoder (VTM-18.0) and the state-of-the-art learned image compression method SwinT-ChARM, respectively.
Moreover, we provide model scaling studies to verify the computational efficiency of our approach and conduct several objective and subjective analyses to bring to the fore the performance gap between the next-generation ConvNet, namely ConvNeXt, and Swin Transformer.
All materials, including the source code of SwinT-ChARM, will be made publicly accessible upon acceptance for reproducible research.
§ INTRODUCTION
Visual information is crucial in human development, communication, and engagement, and its compression is necessary for effective storage and transmission over constrained wireless/wireline channels.
Thus, designing new lossy image compression approaches remains a fertile area for research. The goal is to reduce the size of an image file by permanently removing less critical information, particularly redundant data and high frequencies, to obtain the most compact bit-stream representation while preserving a certain level of visual fidelity. Nevertheless, high compression ratio and low distortion are fundamentally opposing objectives, which requires optimizing the rate-distortion tradeoff.
Conventional image and video compression standards, including JPEG <cit.>, JPEG2000 <cit.>, H.265/HEVC <cit.>, and H.266/VVC <cit.>, rely on hand-crafted, module-based encoder/decoder designs. In addition, these codecs employ intra-prediction, fixed transform matrices, quantization, context-adaptive arithmetic coders, and various in-loop filters to reduce spatial and statistical redundancies and alleviate coding artifacts. However, it has taken several years to standardize a conventional codec. Moreover, existing image compression standards are not anticipated to be an ideal and universal solution for all types of image content due to the rapid development of new image formats and the growth of high-resolution mobile devices.
Lossy image compression consists of three modular parts: transform, quantization, and entropy coding. Each of these components can be represented as follows: i) autoencoders as flexible nonlinear transforms where the encoder (i.e., analysis transform) extracts latent representation from an input image and the decoder (i.e., synthesis transform) reconstructs the image from the decoded latent, ii) various differentiable quantization approaches which encode the latent into bitstream through arithmetic coding algorithms, iii) deep generative models as potent learnable entropy models estimating the conditional probability distribution of the latent to reduce the rate. Moreover, these three components can be optimized with end-to-end training by reducing the joint loss of the distortion between the original image and its reconstruction and the rate needed to transmit the bitstream of latent representation.
Thanks to recent advances in deep learning, we have seen many works exploring the potential of ANNs to form various learned image and video compression frameworks. Over the past two years, the performance of neural compression has steadily improved thanks to this line of work, reaching or outperforming state-of-the-art conventional codecs.
Some previous works use local context <cit.>, or additional side information <cit.> to capture short-range spatial dependencies, and others use non-local mechanism <cit.> as long-range spatial dependencies. Recently, Toderici <cit.> proposed a generative compression method achieving high-quality reconstructions, Minnen <cit.> introduced channel-conditioning and latent residual prediction taking advantage of an entropy-constrained model that uses both forward and backward adaptations, and Zhu <cit.> replaced all convolutions in the charm prior approach <cit.> with Swin Transformer <cit.> blocks, Zou <cit.> combined the local-aware attention mechanism with the global-related feature learning and proposed a window-based attention module, Koyuncu et al. <cit.> proposed a Transformer-based context model, which generalizes the standard attention mechanism to spatio-channel attention, Zhu <cit.> proposed a probabilistic vector quantization with cascaded estimation under a multi-codebooks structure, Kim <cit.> exploited the joint global and local hyperpriors information in a content-dependent manner using an attention mechanism, and He <cit.> adopted stacked residual blocks as nonlinear transform and multi-dimension entropy estimation model.
One of the main challenges of learned transform coding is the ability to identify the crucial information necessary for reconstruction, knowing that information overlooked during encoding is usually lost and unrecoverable for decoding. Another main challenge is the tradeoff between performance and decoding speed. While existing approaches improve transform and entropy coding accuracy, they remain limited by high decoding runtime and excessive model complexity, which hinder real-world use. Finally, we found that attention-based networks that rely on attention mechanisms to capture global dependencies, such as Swin Transformer <cit.>, produce over-smoothed reconstructions and contain undesirable artifacts at low bitrates. Furthermore, global semantic information is less effective in image compression than in other computer vision tasks <cit.>.
In this paper, we propose a nonlinear transform built on ConvNeXt blocks with additional down- and up-sampling layers, paired with a ChARM prior, namely ConvNeXt-ChARM. Recently proposed in <cit.>, ConvNeXt is defined as a modernized ResNet architecture moved toward the design of a vision Transformer, which competes favorably with Transformers in terms of efficiency, achieving state-of-the-art results on the ImageNet classification task <cit.> and outperforming Swin Transformer on the COCO detection <cit.> and ADE20K segmentation <cit.> challenges while maintaining the maturity and simplicity of ConvNets <cit.>. The contributions of this paper are summarized as follows:
* We propose a learned image compression model that leverages a stack of ConvNeXt blocks with down- and up-sampling layers to extract contextualized and nonlinear information for effective latent decorrelation. We retain the strengths of convolutions, such as the sliding-window strategy for computation sharing, translation equivariance as a built-in inductive bias, and the locality of features, which are intrinsic to providing a better spatial representation.
* We apply ConvNeXt-based transform coding layers for generating and decoding both latent and hyper-latent to consciously and subtly balance the importance of feature compression through the end-to-end learning framework.
* We conduct experiments on four widely-used evaluation datasets to explore possible coding gain sources and demonstrate the effectiveness of ConvNeXt-ChARM. In addition, we carried out a model scaling analysis to compare the complexity of ConvNeXt and Swin Transformer.
Extensive experiments validate that the proposed ConvNeXt-ChARM achieves state-of-the-art compression performance, as illustrated in Figure <ref>, outperforming conventional and learned image compression methods in the tradeoff between coding efficiency and decoder complexity.
The rest of this paper is organized as follows. Section <ref> presents our overall framework along with a detailed description of the proposed architecture. Next, we dedicate Section <ref> to describe and analyze the experimental results. Finally, Section <ref> concludes the paper.
§ BACKGROUND AND RELATED WORKS
Through this section, we review relevant conventional and learned compression techniques, including some works related to our research, and focus on the following three aspects: first, we introduce the traditional hand-crafted compression methods; then, we briefly describe the end-to-end learned compression methods that have recently emerged, including the attention-guided coding and the auto-regressive context.
§.§ Conventional Image Compression
Conventional image compression standards, such as JPEG <cit.>, JPEG2000 <cit.>, H.265/HEVC <cit.>, and the latest H.266/VVC <cit.>, generally adopt a block-based hybrid coding structure, i.e., intra prediction <cit.>, transform <cit.>, quantization <cit.>, context-based adaptive arithmetic coding <cit.>, and post-processing <cit.>, to reduce spatial redundancy in the input image for higher compression efficiency. Traditional coding algorithms have many advantages, including mature technology, easy SW/HW implementations, low decoding complexity, and strong adaptation ability. However, all of them mainly rely on hand-crafted coding techniques. As a result, it is not easy for conventional image compression technologies to optimize the rate-distortion cost directly for various image contents and resolutions. This issue restricts the ability of conventional coding algorithms to increase their compression efficiency, especially when adopting perceptual quality metrics.
§.§ Learned Image Compression
Overview: Learning-based image compression models are trained in an end-to-end fashion to minimize distortion between two images, the source, and its reconstruction while maximizing the likelihood of the quantized latent representation for low entropy coding cost (rate). A Lagrange multiplier λ controls the tradeoff between the rate R and the distortion D <cit.>.
Since the early RNN-based methods <cit.> for lossy image compression, significant advances have been made in integrating tailored modules into learned image compression. A nonlinear normalization method known as GDN, proposed by Ballé <cit.>, demonstrates impressive abilities in decorrelating data from natural images. To represent the global dependencies between latent variables, Zhang <cit.> propose a non-local attention block that takes advantage of the expressive potential of residual connections.
Most recently proposed learned methods <cit.> adopt ConvNets and can be interpreted as VAE architectures based on transform coding. Such an architecture creates a compact representation of the image by encoding it as a vector in a latent space. The compressive transform squeezes out the redundancy in the image with dimensional reduction and entropy constraints.
Attention-guided coding: Attention mechanism was popularized in nlp <cit.>. It can be described as a mapping strategy that queries a set of key-value pairs to an output. For example, Vaswani <cit.> have proposed mha methods in which machine translation is frequently used. For low-level vision tasks <cit.>, spatially adaptive feature activation is made possible by the attention mechanism, with a focus on more complex areas, like rich textures, saliency, etc.
In image compression, quantized attention masks are used for adaptive bit allocation, e.g., Li <cit.> used a trimmed convolutional network to predict the conditional probability of quantized codes, Mentzer <cit.> used a 3D-CNN based context model to learn a conditional probability model of the latent distribution, Cheng <cit.> inserted a simplified attention module (without the non-local block) into the analysis and synthesis transforms to pay more attention to complex regions. Later on, Zou <cit.> combined the local-aware attention mechanism with the global-related feature learning and proposed an effective window-based local attention block, which can be used as a straightforward component to enhance convnet and Transformer models.
Recently, Transformers have been increasingly used in neural image compression methods. They dispense with convolution operators entirely and rely solely on attention mechanisms to capture the interactions between inputs, regardless of their relative position to one another, thus allowing the network to focus more on the pertinent elements of the input data. Qian <cit.> replaced the auto-regressive hyperprior <cit.> with a self-attention stack and introduced a novel Transformer-based entropy model, where the Transformer's self-attention is used to relate different positions of a single latent in order to compute a representation of the latents. Zhu <cit.> replaced all convolutions in the standard approaches <cit.> with Swin Transformer <cit.> blocks, leading to a more flexible receptive field able to adapt to tasks requiring both short- and long-range information, and better progressive decoding of latents. Apart from their effective window-based local attention block, Zou <cit.> proposed a novel symmetrical Transformer (STF) framework with absolute Transformer blocks for transform coding, combined with a ChARM prior. Inspired by the adaptive characteristics of Transformers, Koyuncu <cit.> proposed a Transformer-based context model, which generalizes the de facto standard attention mechanism to spatio-channel attention.
Auto-regressive context: In the mean-scale hyperprior framework <cit.>, an additional module called context model is added to boost the rate-distortion performance. Considering the entire autoencoder structure (g_a, g_s), hyper autoencoder (h_a, h_s), context model g_cm, and a parameter inference network g_ep which estimates the location and scale parameters Φ=(μ, σ) of the entropy model for latent ŷ. Let h_s(ẑ) denotes the hyperprior feature and g_cm(ŷ_<i) denotes the context feature. The parameter prediction for i-th representation ŷ_i is expressed as follows:
Φ_i=g_ep(h_s(ẑ), g_cm(ŷ_<i)),
where Φ_i=(μ_i, σ_i) is used to jointly predict entropy parameters accompanied by the hyperprior feature, and ŷ_<i ={ŷ_1, …, ŷ_i-1} is the observable neighbors of each symbol vector ŷ_i at the i-th location.
Combined with the auto-regressive context model, Cheng <cit.> is the first work to achieve a peak signal-to-noise ratio (PSNR) comparable with VVC. They improved the entropy model by using a discretized K-component GMM:
p_ŷ|ẑ(ŷ_i|ẑ)=∑_k=1^K π_i^k[𝒩(μ_i^k, (σ_i^k)^2) * 𝒰(-1/2, 1/2)](ŷ_i),
where K groups of entropy parameters (π^k, μ^k, σ^k) are calculated by g_ep.
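As an illustration, the discretized mixture likelihood of this equation can be evaluated by integrating each Gaussian component over the quantization bin of width one centered on the quantized symbol. The sketch below, written with TensorFlow Probability purely for illustration (it is not the authors' implementation), shows this computation.

    import tensorflow as tf
    import tensorflow_probability as tfp

    def discretized_gmm_likelihood(y_hat, weights, means, scales):
        """Discretized K-component Gaussian mixture likelihood (illustrative).

        y_hat:   quantized latents, shape [...]
        weights: mixture weights pi_i^k (summing to 1 over k), shape [..., K]
        means:   component means mu_i^k, shape [..., K]
        scales:  component standard deviations sigma_i^k, shape [..., K]
        """
        y = tf.expand_dims(y_hat, axis=-1)  # broadcast over the K components
        dist = tfp.distributions.Normal(loc=means, scale=scales)
        # Convolving N(mu, sigma^2) with U(-1/2, 1/2) and evaluating at y_hat is
        # equivalent to integrating the Gaussian over the bin [y_hat-1/2, y_hat+1/2].
        per_component = dist.cdf(y + 0.5) - dist.cdf(y - 0.5)
        likelihood = tf.reduce_sum(weights * per_component, axis=-1)
        return tf.maximum(likelihood, 1e-9)  # clamp for numerical stability

    # The rate term is then -log2(likelihood) summed over all latent elements.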
Minnen <cit.> estimate both the mean and standard deviation of the latent distribution and introduce channel-wise conditioning, in which each latent slice is entropy-coded conditioned on the already-decoded slices and on the hyperprior, together with latent residual prediction, to further reduce the remaining redundancy (a simplified sketch is given below).
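A minimal sketch of this channel-conditioning idea follows; the slice count, layer widths, and helper networks (param_nets, lrp_nets) are illustrative placeholders rather than the exact configuration used in the papers cited above.

    import tensorflow as tf

    def charm_pass(y_hat, hyper_features, param_nets, lrp_nets, num_slices=10):
        """Simplified channel-wise auto-regressive prior (ChARM-style sketch).

        y_hat:          quantized latent, [B, H, W, C] with C divisible by num_slices
        hyper_features: output of the hyper-synthesis transform, [B, H, W, C_h]
        param_nets:     one small conv net per slice predicting (mu, log_sigma)
        lrp_nets:       one small conv net per slice predicting the latent residual
        """
        slices = tf.split(y_hat, num_slices, axis=-1)
        decoded, params = [], []
        for i, y_slice in enumerate(slices):
            # Condition on the hyperprior features and all previously decoded slices.
            support = tf.concat([hyper_features] + decoded, axis=-1)
            mu, log_sigma = tf.split(param_nets[i](support), 2, axis=-1)
            params.append((mu, tf.exp(log_sigma)))  # Gaussian params for entropy coding
            # Latent residual prediction refines the quantized slice before synthesis.
            lrp = lrp_nets[i](tf.concat([support, y_slice], axis=-1))
            decoded.append(y_slice + 0.5 * tf.tanh(lrp))
        return tf.concat(decoded, axis=-1), params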
Finally, He <cit.> proposed a parallelizable spatial context model based on a checkerboard-shaped convolution, which allows decoding to be implemented in a highly parallel manner, thus increasing the decoding speed.
§ PROPOSED CONVNEXT-CHARM MODEL
§.§ Problem Formulation
The objective of learned image compression is to jointly minimize the bitrate and the distortion between the original image and its reconstruction, with the tradeoff set by a rate-controlling hyper-parameter. Assuming an input image x, the analysis transform g_a, with parameters ϕ_g, removes the image spatial redundancies and generates the latent representation y. Then, this latent is quantized to the discrete code ŷ using the quantization operator ⌈.⌋, from which a synthesis transform g_s, with parameters θ_g, reconstructs the image denoted by x̂. The overall process can be formulated as follows:
y = g_a( x|ϕ_g),
ŷ = ⌈y⌋,
x̂ = g_s(ŷ|θ_g).
A hyperprior model composed of a hyper-analysis and hyper-synthesis transforms (h_a, h_s) with parameters (ϕ_h, θ_h) is usually used to reduce the statistical redundancy among latent variables. In particular, this hyperprior model assigns a few extra bits as side information to transmit some spatial structure information and helps to learn an accurate entropy model. The hyperprior generation can be summarized as follows:
z = h_a(y|ϕ_h),
ẑ = ⌈z⌋,
p_ŷ|ẑ(ŷ|ẑ) ← h_s(ẑ|θ_h).
Transform and quantization introduce a distortion D = MSE(x, x̂) for mean squared error (MSE) optimization, which measures the reconstruction quality, while the estimated bitrate R corresponds to the expected rate of the quantized latents and hyper-latents, as described below:
R = 𝔼 [-log _2(p_ŷ|ẑ(ŷ|ẑ)) -log _2(p_ẑ(ẑ)) ].
Representing (g_a,g_s), (h_a,h_s), and the entropy model by DNNs enables jointly optimizing the end-to-end model by minimizing the rate-distortion tradeoff ℒ, given a rate-controlling hyper-parameter λ. This optimization problem can be presented as follows:
ℒ = R+λ D,
= ℍ(ŷ) + ℍ(ẑ) + λ MSE(x, x̂),
where ℍ stands for the entropy, and the first two terms together constitute the rate R.
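Under the assumption of learned entropy models that return per-element likelihoods for ŷ (conditioned on the hyper-synthesis output) and for ẑ, the formulation above translates into the following training-time sketch; g_a, g_s, h_a, h_s and the entropy models are assumed to be Keras modules, and additive uniform noise stands in for rounding during training, as is standard practice.

    import tensorflow as tf

    def rd_loss(x, g_a, g_s, h_a, h_s, entropy_y, entropy_z, lmbda):
        """One rate-distortion training objective (simplified sketch)."""
        y = g_a(x)                                   # analysis transform
        z = h_a(y)                                   # hyper-analysis transform
        # Additive uniform noise approximates quantization during training.
        y_hat = y + tf.random.uniform(tf.shape(y), -0.5, 0.5)
        z_hat = z + tf.random.uniform(tf.shape(z), -0.5, 0.5)
        x_hat = g_s(y_hat)                           # synthesis transform
        # Bits estimated from the learned conditional/factorized entropy models.
        bits_y = tf.reduce_sum(-tf.math.log(entropy_y(y_hat, h_s(z_hat)))) / tf.math.log(2.0)
        bits_z = tf.reduce_sum(-tf.math.log(entropy_z(z_hat))) / tf.math.log(2.0)
        num_pixels = tf.cast(tf.shape(x)[0] * tf.shape(x)[1] * tf.shape(x)[2], tf.float32)
        rate = (bits_y + bits_z) / num_pixels        # bits per pixel
        distortion = tf.reduce_mean(tf.square(x - x_hat))  # MSE in RGB space
        return rate + lmbda * distortion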
§.§ ConvNeXt-ChARM network architecture
To better parameterize the distributions of the quantized latent features with a more accurate and flexible entropy model, we adopted the ChARM prior approach proposed in <cit.> to build an efficient ConvNeXt-based learned image compression model with strong compression performance. As shown in Figure <ref>, the analysis/synthesis transforms (g_a, g_s) of our design consist of a combination of down- and up-sampling blocks and ConvNeXt encoding/decoding blocks <cit.>, respectively. Down- and up-sampling blocks are performed using Conv2D and Normalization layers sequentially. The architectures of the hyper-transforms (h_a, h_s) are similar to (g_a, g_s) with different stages and configurations.
§.§ ConvNeXt design description
Globally, ConvNeXt incorporates a series of architectural choices from a Swin Transformer while maintaining the network's simplicity as a standard ConvNet without introducing any attention-based modules. These design decisions can be summarized as follows: macro design, ResNeXt's grouped convolution, inverted bottleneck, large kernel size, and various layer-wise micro designs. In Figure <ref>, we illustrate the ConvNeXt block, where DConv2D(.) refers to a depthwise 2D convolution, LayerNorm to layer normalization, Dense(.) to a densely-connected NN layer, and GELU to the activation function.
Macro design:
The stage compute ratio is adjusted from (3, 4, 6, 3) in ResNet-50 to (3, 3, 9, 3), which also aligns the FLOPs with Swin-T. In addition, the ResNet-style stem cell is replaced with a patchify layer implemented using a 2×2, stride-two non-overlapping convolutional layer with an additional normalization layer to help stabilize the training. In the ConvNeXt-ChARM design, we adopt (3, 3, 9, 3) and (5, 1) as stage compute ratios for the transforms and hyper-transforms, respectively.
Depthwise convolution:
The ConvNeXt block uses a depthwise convolution, a special case of grouped convolution used in ResNeXt <cit.>, where the number of groups is equal to the considered channels. This is similar to the weighted sum operation in self-attention, which operates by mixing information only in the spatial dimension.
Inverted bottleneck:
Similar to Transformers, ConvNeXt is designed with an inverted bottleneck block, where the hidden dimension of the residual block is four times wider than the input dimension. As illustrated in the ConvNeXt block Figure <ref>, the first dense layer is 4 times wider then the second one.
Large kernel:
One of the most distinguishing aspects of Swin Transformers is their local window in the self-attention block. The information is propagated across windows, which enables each layer to have a global receptive field. The local window is at least 7×7, which is still larger than the 3×3 ResNeXt kernel size. Therefore, ConvNeXt adopts large kernel-sized convolutions by using a 7×7 depthwise 2D convolution layer in each block. This allows our ConvNeXt-ChARM model to capture global contexts in both latents and hyper-latents, which are intrinsic to providing a better spatial representation.
Micro design:
In ConvNeXt's micro-design, several per-layer enhancements are applied in each block, by using: a single Gaussian error linear unit (GELU) activation function (instead of numerous ReLU), using a single LayerNorm as normalization choice (instead of numerous BatchNorm), and using separate down-sampling layers between stages.
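Putting these choices together, a single ConvNeXt block as described above (7×7 depthwise convolution, LayerNorm, a pointwise expansion to 4× the width, GELU, and a pointwise projection back, wrapped in a residual connection) can be sketched in Keras as follows. This is an illustrative re-implementation of the block structure, not the authors' released code, and omits details such as layer scale.

    import tensorflow as tf
    from tensorflow.keras import layers

    class ConvNeXtBlock(layers.Layer):
        """DConv2D(7x7) -> LayerNorm -> Dense(4C) -> GELU -> Dense(C), with a skip."""

        def __init__(self, dim, **kwargs):
            super().__init__(**kwargs)
            self.dwconv = layers.DepthwiseConv2D(kernel_size=7, padding="same")
            self.norm = layers.LayerNormalization(epsilon=1e-6)
            self.pwconv1 = layers.Dense(4 * dim)   # inverted-bottleneck expansion
            self.act = layers.Activation("gelu")
            self.pwconv2 = layers.Dense(dim)       # projection back to the input width

        def call(self, x):
            shortcut = x
            x = self.dwconv(x)   # large-kernel depthwise conv mixes spatial information
            x = self.norm(x)
            x = self.pwconv1(x)  # pointwise (Dense) layers mix channel information
            x = self.act(x)
            x = self.pwconv2(x)
            return x + shortcut  # residual connection

    # A stage of the analysis transform can then stack a strided Conv2D
    # down-sampling layer followed by several such blocks.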
§ RESULTS
First, we briefly describe used datasets with the implementation details. Then, we assess the compression efficiency of our method with a rate-distortion comparison and compute the average bitrate savings on four commonly-used evaluation datasets. We further elaborate a model scaling and complexity study to consistently examine the effectiveness of our proposed method against pioneering ones.
§.§ Experimental Setup
Datasets.
The training set of the CLIC2020 dataset is used to train the proposed ConvNeXt-ChARM model. This dataset contains a mix of professional and user-generated content images in RGB color and grayscale formats. We evaluate image compression models on four datasets, including Kodak <cit.>, Tecnick <cit.>, JPEG-AI <cit.>, and the testing set of CLIC21 <cit.>. For a fair comparison, all images are cropped to the highest possible multiples of 256 to avoid padding for neural codecs.
Implementation details.
We implemented all models in TensorFlow using the tfc library <cit.>, and the experimental study was carried out on an RTX 5000 Ti GPU. All models were trained on the same CLIC2020 training set for 3.5M steps using the ADAM optimizer with parameters β_1=0.9 and β_2=0.999. The initial learning rate is set to 10^-4 and dropped to 10^-5 for an additional 100k iterations, with L=R+λD as the loss function.
The MSE is used as the distortion metric in RGB color space. Each batch contains eight random 256 × 256 crops from training images. To cover a wide range of rate and distortion, for our proposed method, we trained five models with λ∈{0.006, 0.009, 0.020, 0.050, 0.150}. Regarding the evaluation on CPU, we used an Intel(R) Xeon(R) W-2145 @ 3.70GHz.
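For reference, this configuration corresponds to a standard training loop of the following shape; the model interface (rate_distortion_loss) is a placeholder for the objective sketched in Section 3, and one model is trained per λ value.

    import tensorflow as tf

    lambdas = [0.006, 0.009, 0.020, 0.050, 0.150]   # one model per rate point
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999)

    @tf.function
    def train_step(model, batch, lmbda):
        with tf.GradientTape() as tape:
            loss = model.rate_distortion_loss(batch, lmbda)   # L = R + lambda * D
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    # After the main schedule, the learning rate is dropped for the final
    # 100k iterations: optimizer.learning_rate.assign(1e-5)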
Baselines.[For a fair comparison, we only considered SwinT-ChARM <cit.> from the state-of-the-art models <cit.>, due to the technical feasibility of models training and evaluation under the same conditions and in an adequate time.]
We compare our approach with the state-of-the-art neural compression method SwinT-ChARM proposed by Zhu <cit.>, and with non-neural compression methods, including BPG (4:4:4) and the most up-to-date VVC official Test Model VTM-18.0 in the All-Intra profile configuration.
§.§ Rate-Distortion coding performance
To demonstrate the compression efficiency of our proposed approach, we visualize the rate-distortion curves of our model and the baselines on each of the considered datasets.
Considering the Kodak dataset, Figure <ref> shows that our ConvNeXt-ChARM outperforms the state-of-the-art learned approach SwinT-ChARM, as well as the BPG (4:4:4) and VTM-18.0 traditional codecs, in terms of PSNR. Regarding rate savings over VTM-18.0, SwinT-ChARM provides an advantage only at low PSNR values.
Our model generalizes to high-resolution image datasets (Tecnick, JPEG-AI, and CLIC21) and still outperforms existing traditional codecs and the learned image compression method SwinT-ChARM in terms of PSNR.
Besides the rate-distortion curves, we also evaluate different models using Bjontegaard's metric <cit.>, which computes the average bitrate savings (%) between two rate-distortion curves.
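For completeness, the Bjontegaard-Delta rate between two rate-distortion curves is commonly computed by fitting a cubic polynomial to log-rate as a function of PSNR for each codec and integrating over the overlapping quality range; a standard numpy sketch of this calculation is given below.

    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        """Average bitrate difference (%) of the test codec w.r.t. the anchor."""
        lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
        # Fit log-rate as a cubic polynomial of PSNR for each curve.
        p_a = np.polyfit(psnr_anchor, lr_a, 3)
        p_t = np.polyfit(psnr_test, lr_t, 3)
        # Integrate both fits over the overlapping PSNR interval.
        lo = max(min(psnr_anchor), min(psnr_test))
        hi = min(max(psnr_anchor), max(psnr_test))
        int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
        int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
        avg_diff = (int_t - int_a) / (hi - lo)
        return (np.exp(avg_diff) - 1.0) * 100.0   # negative values mean rate savings

    # Example with four (bitrate in bpp, PSNR in dB) points per codec:
    # bd_rate([0.2, 0.4, 0.8, 1.6], [30, 33, 36, 39],
    #         [0.19, 0.38, 0.76, 1.5], [30, 33, 36, 39])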
In Table <ref>, we summarize the BD-rate of image codecs across all four datasets compared to the VTM-18.0 as the anchor.
On average, ConvNeXt-ChARM achieves a 5.24% rate reduction compared to VTM-18.0 and a 1.22% relative gain over SwinT-ChARM.
Figure <ref> shows the BD-rate (with VTM-18.0 as an anchor) versus the decoding time of various approaches on the Kodak dataset. It can be seen from the figure that our ConvNeXt-ChARM achieves a good tradeoff between BD-rate performance and decoding time.
§.§ Models Scaling Study
We evaluated the decoding complexity of the three considered image codecs by averaging the decoding time across 7000 images at 256×256 resolution, encoded at 0.6 bpp. We present the image codec complexity in Table <ref>, including decoding time on GPU and CPU, the number of floating point operations (GFLOPs), the memory required by model weights, and the total number of model parameters. The models run with TensorFlow 2.8 on a workstation with one RTX 5000 Ti GPU. The Conv-ChARM model refers to the Minnen et al. <cit.> architecture with a latent depth of 320 and a hyperprior depth of 192, and can be considered an ablation of our model without ConvNeXt blocks. We maintained the same slice transform configuration of the ChARM for the three considered models. The total decoding time of the SwinT-ChARM decoder is lower than that of the convnet-based decoders on GPU but is the highest on CPU. Our ConvNeXt-ChARM is lighter than Conv-ChARM in terms of the number of parameters, which reflects the ConvNeXt block's well-engineered design. Compared with SwinT-ChARM, our ConvNeXt-ChARM shows lower complexity, requiring less training time and memory. In addition, Figure <ref> shows that our method lies in an interesting region, achieving a good tradeoff between BD-rate score on Kodak, total model parameters, and MFLOPs per pixel, highlighting an efficient and hardware-friendly compression model.
§.§ Comparison with SwinT-ChARM
ConvNeXt-ChARM achieves good rate-distortion performance while significantly reducing latency, which, with further optimization, could help enable high-quality real-time visual data transmission, as recently demonstrated by the first software-based neural video decoder running HD-resolution video in real time on a commercial smartphone <cit.>.
Since few works explicitly compare Swin Transformer-based and convnet-based blocks, here we compare our ConvNeXt-ChARM with SwinT-ChARM under the same conditions and configurations. We found that a well-designed convnet, without any additional attention modules, can outperform the highly coveted Swin Transformer in learned transform coding in terms of BD-rate, with more visually pleasing reconstructions and comparable decoding latency. In addition, ConvNeXt-ChARM maintains the efficiency and maturity of standard convnets and their fully-convolutional nature for both training and inference.
There is no doubt that Transformers are excellent architectures with enormous potential for the future of various computer vision applications. However, their vast hunger for data and computational resources <cit.> poses a big challenge for the computer vision community. Taking SwinT-ChARM as an example, it needs, on average, 1.33× more time than ConvNeXt-ChARM to train for the same number of epochs.
§ CONCLUSION
In this work, we reconcile compression efficiency with ConvNeXt-based transform coding paired with a ChARM prior and propose a promising learned image compression model, ConvNeXt-ChARM. Furthermore, the proposed method inherits the advantages of pure convnets to improve both efficiency and effectiveness. The experimental results, obtained on four datasets, show that our approach outperforms previous learned and conventional image compression methods, setting a new state-of-the-art rate-distortion performance with a significant decrease in decoding runtime.
Future work will investigate efficient low-complexity entropy coding approaches to further reduce decoding latency.
With the development of GPU chip technology and further engineering optimization, learning-based codecs will be the future of coding, achieving better compression efficiency than traditional codecs and aiming to bridge the gap to real-time operation.
We hope our study will challenge certain accepted notions and prompt people to reconsider the significance of convolutions in computer vision.
|
http://arxiv.org/abs/2307.05966v1 | 20230712072559 | Efficient Algorithm for Binary Quadratic Problem by Column Generation and Quantum Annealing | [
"Sota Hirama",
"Masayuki Ohzeki"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn"
] |
Efficient Algorithm for Binary Quadratic Problem by Column Generation and Quantum Annealing
August 12, 2023
=====================================================================
Introduction.
Quantum annealing (QA) is known to be a method for solving generic combinatorial optimization problems <cit.>.
In particular, its physical realization, a quantum annealer, is expected to be a quick solver for the quadratic unconstrained binary optimization (QUBO) problems, which can be implemented therein.
Various applications of QA have been proposed, as in traffic flow optimization <cit.>,
finance <cit.>, logistics <cit.>, manufacturing <cit.>, preprocessing in material experiments <cit.>, marketing <cit.>, steel manufacturing <cit.>, and decoding problems <cit.>.
Model-based Bayesian optimization has also been proposed in the literature <cit.>.
A comparative study of the quantum annealer was performed with benchmark tests for solving optimization problems <cit.>.
The quantum effect on the case with multiple optimal solutions has also been discussed <cit.>.
As the environmental effect cannot be avoided, the quantum annealer is sometimes regarded as a simulator for quantum many-body dynamics <cit.>.
Furthermore, applications of quantum annealing as an optimization algorithm in machine learning have also been reported <cit.>.
Unfortunately, due to its limitations, the current quantum annealer cannot efficiently solve combinatorial optimization problems with equality/inequality constraints at a satisfactory level, even with various techniques <cit.>.
Most combinatorial optimization problems in practice involve various types of constraints.
On classical computers, combinatorial optimization problems are efficiently solved by various algorithms.
Elaborate classical algorithms can also inspire quantum computation.
In the present study, we propose the combination of QA with a classical method for solving combinatorial optimization problems with a large number of constraints, namely column generation <cit.>.
Our results demonstrate that our method can efficiently solve constrained combinatorial optimization problems.
Problem setting.
We solve the following type of constrained binary optimization problem:
min_ x{∑_ijQ_ij x_i x_j } s.t. ∑_ij A_kij x_i x_j ≤ b_k ∀ k , x_i∈{0, 1} ∀ i,
where Q_ij is the continuous-valued element of the QUBO matrix defining the cost function to be solved, and A_kij and b_k are the continuous-valued elements of the matrices and vectors that define the equality/inequality constraints.
Each variable x_i is binary, taking the value 0 or 1.
The number of variables is denoted as n.
We find a solution that minimizes the cost function while adhering to the constraints.
When we input the constrained optimization problem into the quantum annealer, we generally modify the cost function to describe the constraints as in the penalty method.
Column generation.
In our study, we instead utilize column generation, a popular algorithm for constrained optimization problems <cit.>.
In particular, it efficiently solves large-size linear programming problems with continuous variables.
We thus convert the quadratic problem into a linear programming problem to apply column generation.
We use several techniques from convex optimization.
Considering the convex hull of a set, we define a convex combination as a linear combination of its extreme points.
x_a = ∑_p ∈𝒫 x^pλ^p, s.t.∑_p ∈𝒫λ^p = 1, λ^p ≥ 0, ∀ p
where x^p is an extreme point and 𝒫 is the index set of all possible extreme points.
We use a convex combination to transform the binary quadratic programming problem.
We iteratively solve the effective optimization problem later and then obtain several solution vectors.
We take a convex hull of the solution-vector set.
First, we define the quadratic problem converted by convex combination <cit.>.
min_λ∑_p∈𝒫∑_ijQ_ij x^p_i x^p_j λ^p,
s.t. ∑_p∈𝒫∑_ij A_k ij x^p_i x^p_j λ^p ≤ b_k, ∀ k
∑_p∈𝒫λ^p = 1,
λ^p ≥ 0, ∀ p ∈𝒫
where x^p∈{0, 1}^n are binary constant vectors and 𝒫 is the index set of all the possible extreme points of the solution-vector set.
Here, the λ^p are the new decision variables.
We utilize the column generation method on this quadratic problem.
The column generation method is an algorithm that efficiently finds the optimal solution by starting the search from the minimum necessary extreme points and generating extreme points until the optimal solution is reached.
The computational time is greatly reduced compared to considering all extreme points of the convex hull of the entire solution set.
We use the Dantzig-Wolfe Decomposition to modify the quadratic problem by changing 𝒫 into its subset 𝒫̅, the restricted set of extreme points.
We call this the restricted master problem (RMP).
We here consider a dual problem of RMP.
The dual of a linear programming problem can be obtained by interchanging the coefficients of the objective function and the right-hand sides of the constraints.
max_ρ, π_0∑_k b_kρ_k + π_0,
s.t. ∑_k=1^m∑_ij A_k ij x^p_i x^p_j ρ_k + π_0 ≤∑_ijQ_ij x^p_i x^p_j, ∀ p∈𝒫̅
ρ≤ 0,
where ρ∈ℝ^m and π_0 ∈ℝ are the dual variables.
There is one dual variable for each explicit constraint in the RMP.
A solution to the dual problem provides a lower bound on the RMP.
If there exist points x^p∈𝒫∖𝒫̅ that violate the dual constraints (i.e., columns with negative reduced cost), the current solution of the RMP is not optimal.
Central QUBO problem.
Therefore, the main task is to find a solution that lowers the cost function of the RMP.
We define a pricing problem that adds a new extreme point to 𝒫̅, lowering the cost function of the RMP.
By solving the dual problem, we obtain the dual variables ρ^* and π_0^*.
We use these dual variables and define the pricing problem by
min_ x∑_ij{( Q_ij-∑_k=1^mρ^*_k A_kij) x_i x_j} - π_0^*,
s.t. x∈{0,1}^n,
where ρ^* and π_0^* are the solutions of the dual problem.
The objective function expresses the reduced cost of a candidate extreme point, i.e., its cost corrected by the dual-weighted constraint terms.
One can see that this problem takes the form of a quadratic unconstrained binary optimization (QUBO) problem.
Note that the resulting solution is approximate, with a quality depending on the precision with which the QUBO problem is solved.
However, the main issue is reduced to increasing the lower bound of the original optimization problem (or decreasing the upper bound in the case of a maximization problem).
One can assess the quality of the tentative solution while proceeding with the computation.
The point is to transform the original constrained optimization problem into the unconstrained one.
The quantum annealer can efficiently solve the QUBO problem even at the current level.
We incorporated simulated annealing (SA) <cit.> and QA into solving the pricing problem.
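To make the overall procedure concrete, the following is a minimal, self-contained Python sketch of the column generation loop. The dual of the RMP is solved directly as a linear program with SciPy, and the pricing QUBO is handled by a toy single-spin-flip simulated-annealing routine standing in for the quantum annealer or OpenJij; the function names, the annealing schedule, and the stopping tolerance are illustrative choices and not our actual implementation. With b_k = 1, the all-zero vector can serve as the feasible initial column.

import numpy as np
from scipy.optimize import linprog

def qubo_value(M, x):
    return float(x @ M @ x)

def solve_pricing_sa(M, n_sweeps=200, rng=np.random.default_rng(0)):
    # Toy simulated annealing for min_x x' M x with x in {0,1}^n
    # (a stand-in for the quantum annealer / OpenJij pricing solver).
    n = M.shape[0]
    x = rng.integers(0, 2, n).astype(float)
    best, best_val = x.copy(), qubo_value(M, x)
    for t in range(n_sweeps):
        beta = 0.1 + 5.0 * t / n_sweeps          # simple linear schedule
        for i in rng.permutation(n):
            x_flip = x.copy()
            x_flip[i] = 1.0 - x_flip[i]
            delta = qubo_value(M, x_flip) - qubo_value(M, x)
            if delta < 0 or rng.random() < np.exp(-beta * delta):
                x = x_flip
        if qubo_value(M, x) < best_val:
            best, best_val = x.copy(), qubo_value(M, x)
    return best

def column_generation(Q, A, b, x_init, max_iter=50, tol=1e-6):
    # Column generation for min x'Qx s.t. x'A_k x <= b_k, x binary.
    # `x_init` must be a feasible column (with b_k = 1, the zero vector works).
    columns = [np.asarray(x_init, dtype=float)]
    m = len(b)
    for _ in range(max_iter):
        costs = np.array([qubo_value(Q, x) for x in columns])
        cons = np.array([[qubo_value(Ak, x) for Ak in A] for x in columns])
        # Dual RMP: max b'rho + pi0 s.t. cons @ rho + pi0 <= costs, rho <= 0.
        res = linprog(c=-np.append(b, 1.0),
                      A_ub=np.hstack([cons, np.ones((len(columns), 1))]),
                      b_ub=costs,
                      bounds=[(None, 0.0)] * m + [(None, None)],
                      method="highs")
        rho, pi0 = res.x[:m], res.x[m]
        # Pricing problem: an unconstrained QUBO built from the dual variables.
        Q_price = Q - np.tensordot(rho, A, axes=1)
        x_new = solve_pricing_sa(Q_price)
        if qubo_value(Q_price, x_new) - pi0 >= -tol:
            break                                  # no improving column found
        columns.append(x_new)
    return columns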
Below, we investigate the performance of our method.
Results.
We solved randomly generated problems using our proposed method and compared its performance to Gurobi, one of the fastest commercial solvers, as a benchmark.
The version of Gurobi used in this experiment was 10.0.1.
We generated our test problems under the following conditions: Q_ij∈{-1, 1} (i ≤ j), A_kij∈{-1, 1} (i≤ j), and b_k = 1.
We prepare 100 instances for each problem size.
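For illustration, one way to generate such a random instance in Python is sketched below; the number of constraints m is left as a free parameter here, since it is set per experiment:

import numpy as np

def random_instance(n, m, rng=np.random.default_rng()):
    # One random test instance: upper-triangular Q and A_k with entries
    # drawn uniformly from {-1, +1} for i <= j, and b_k = 1 for all k.
    def upper_pm1():
        return np.triu(rng.choice([-1.0, 1.0], size=(n, n)))
    Q = upper_pm1()
    A = np.stack([upper_pm1() for _ in range(m)])
    b = np.ones(m)
    return Q, A, b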
First, we use QA to solve the pricing problem.
We use the D-Wave quantum annealers 2000Q and Advantage.
The annealing time for QA was set to 500µs.
We include the communication time from Japan to Canada in the real computation time for QA.
Two types of solutions using Gurobi are used for comparison.
The exact solution is denoted by Gurobi, and the approximate solution, computed until it reaches the objective value obtained by QA, is denoted by R-Gurobi.
We set the calculation time limit as 1800 seconds (30 minutes) for R-Gurobi.
Next, we discuss the comparison between our algorithm and R-Gurobi. The QA (2000Q) and R-Gurobi results are shown in Fig. <ref>.
The comparison results in Fig. <ref> show that R-Gurobi is faster than QA (2000Q) and QA (Advantage) for problem sizes up to 50.
However, when the problem size reaches 60, both QAs reach an approximate solution faster.
It is often said that the quantum annealer can efficiently solve small-size problems.
However, our method outperforms Gurobi by exploiting the annealer's strength in solving the binary quadratic subproblem that finds the extreme points.
The bottleneck of QA is how to deal with the constraints.
One usually uses the penalty method, which demands a large coefficient value in the cost function, resulting in a degraded precision of the solution.
We avoid the penalty method and instead use column generation, so that only an unconstrained QUBO is passed to the annealer.
Unfortunately, the precision of the solution obtained by QA at the current level is generally worse than that of SA on a classical computer.
We test our method by SA hereafter.
We use OpenJij to perform SA.
The parameters of SA (OpenJij) were set to their default values.
We can test our method on larger problems of size up to 160 on the classical computer.
The result of SA (OpenJij) and Gurobi is shown in Fig. <ref>.
The results show that for all the problem sizes, our method using SA completes the computation faster than R-Gurobi.
We compute the ratio of the computation times of QA (2000Q, Advantage) and SA to that of R-Gurobi, as shown in Table <ref>.
The results show how much faster QA and SA can find solutions than R-Gurobi.
The results (Table <ref>) show that QA (2000Q) is 3.75 times faster in finding a solution when the problem size is 60, while Advantage is 2.914 times faster at the same problem size.
SA takes the maximum ratio value when the problem size is 70 and can find the solution 1001 times faster than R-Gurobi.
However, R-Gurobi tends to stop its computation because of the 30-minute time limit.
Thus, the potential of our method to speed up computation for larger problem sizes would have even more impact.
We investigate the precision of our method in comparison with the exact solver, Gurobi, as in Fig. <ref>.
The results show that the larger the problem size, the less the approximate solution's energy deviates from the exact solution's energy.
This is because a larger problem size means more constraints in our tests.
Thus column generation works well in large-size cases.
Conclusion.
In this letter, we have verified that the column generation algorithm incorporating QA and SA efficiently solves binary quadratic programming problems with several constraints.
The column generation with QA (2000Q) was up to 3.750 times faster than R-Gurobi, QA (Advantage) was up to 2.914 times faster than R-Gurobi, and SA was potentially up to 1001 times faster than R-Gurobi.
We consider why this algorithm can find an approximate solution faster than Gurobi: Gurobi is an algorithm that tries to find an exact solution by iterative calculations using branch-and-bound and other methods, while our algorithm tries to derive an approximate solution by using a column generation method.
The time-consuming part of our algorithm is the pricing problem.
Since we could reduce the computation time using QA or SA, we could derive an approximate solution faster than Gurobi.
The QUBO problem is hard to solve and, in the worst case, demands a computation time that grows exponentially with the problem size.
Thus most of the algorithms rely on various heuristics and approximations via semi-definite programming.
Our method, namely the use of SA and QA, is also along this direction for solving the QUBO problem.
Moreover, the reason why SA was able to solve the problem faster than QA is that the QA timing includes the communication time from our location to Canada.
In contrast, SA only needs the computation time on the local classical computer, without any communication overhead.
We here emphasize one of the advantages of our method.
Our method can solve the quadratic binary optimization problem without the penalty method.
Thus we do not need any parameter tuning to solve the optimization problem with constraints in QA.
It is important to test our method in QPLIB, a library of quadratic programming instances <cit.>.
Since our method is based on column generation, it is expected to perform better on optimization problems with more constraints.
Further investigations in this direction would be important.
Acknowledgement.
This work is supported by JSPS KAKENHI Grant No. 23H01432.
|
http://arxiv.org/abs/2307.07333v1 | 20230714132442 | SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes | [
"Zhili Ng",
"Haozhe Wang",
"Zhengshen Zhang",
"Francis Tay Eng Hock",
"Marcelo H. Ang Jr"
] | cs.CV | [
"cs.CV"
] |
SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes
Zhili Ng^* Haozhe Wang^*,† Zhengshen Zhang^* Francis Tay Eng Hock Marcelo H. Ang Jr.
National University of Singapore
{ng.zhili, wang_haozhe, zhengshen_zhang}@u.nus.edu, {mpetayeh, mpeangh}@nus.edu.sg
August 12, 2023
==========================================================================================================================================================================================================================
In this work, we present SynTable, a unified and flexible Python-based dataset generator built using NVIDIA's Isaac Sim Replicator Composer for generating high-quality synthetic datasets for unseen object amodal instance segmentation of cluttered tabletop scenes. Our dataset generation tool can render a complex 3D scene containing object meshes, materials, textures, lighting, and backgrounds. Metadata, such as modal and amodal instance segmentation masks, occlusion masks, depth maps, bounding boxes, and material properties, can be generated to automatically annotate the scene according to the users' requirements. Our tool eliminates the need for manual labeling in the dataset generation process while ensuring the quality and accuracy of the dataset. In this work, we discuss our design goals, framework architecture, and the performance of our tool. We demonstrate the use of a sample dataset generated using SynTable by ray tracing for training a state-of-the-art model, UOAIS-Net. The results show significantly improved performance in Sim-to-Real transfer when evaluated on the OSD-Amodal dataset. We offer this tool as an open-source, easy-to-use, photorealistic dataset generator for advancing research in deep learning and synthetic data generation.
*These authors contributed equally to this work
†Corresponding author.
§ INTRODUCTION
Amodal completion is a perceptual ability that enables the perception of whole objects, even when they are partially occluded <cit.>. Humans are capable of “filling in" the occluded appearance of invisible objects, owing to their vast experience in perceiving countless objects in various contexts and scenes. This ability to perceive an entire object based on its partial appearance is crucial in enabling reliable and accurate planning of subsequent actions. In modern autonomous robotic systems, comprehending objects in an environment is essential for numerous applications. In vision-based robotic grasping, deep learning perception algorithms are important in extracting critical information about objects in a scene, including their category, shape, appropriate grasping points, and occlusion information. In robotic grasping systems, the ability to infer an occluded object's complete structure from its visible components allows the robot to plan the grasp order and safely grasp novel occluded objects in cluttered scenes with greater precision and robustness.
There are three acute robotic perception challenges related to amodal instance segmentation: Firstly, the lack of large-scale, high-quality datasets for unseen object amodal instance segmentation (UOAIS) prevents some vision-based grasping systems from achieving their best performance <cit.>. There are several well-known grasp generation datasets <cit.>. Still, very few datasets focus on UOAIS <cit.>. This could be because such datasets are challenging to annotate manually and accurately. While it is possible to annotate grasp poses manually <cit.>, human-annotated amodal instance segmentation masks <cit.> could be highly subjective and prone to errors. They are therefore considered imperfect ground truths.
Secondly, although it is possible to generate large synthetic datasets in a simulation environment <cit.>, the issue of visual domain mismatch results in a poor Sim-to-Real transfer that will inevitably reduce the performance of algorithms in real-world applications. This could be because of a lack of domain randomization or because the software generates non-photorealistic scenes for synthetic data.
Thirdly, there is currently a lack of tools available to automatically generate amodal instance segmentation mask annotations to train the amodal completion capability of robotic grasping systems. There is also no effective metric to accurately evaluate a perception model's ability to determine the occlusion order relationship of objects in a scene. The capacity for amodal completion and adequate comprehension of occlusion relationships between objects within a given scene are crucial for robots working in highly cluttered environments. Amodal completion can aid the planning of pick sequences and avoid collisions with other objects in grasping tasks. However, creating amodal annotations is a task that demands significantly more effort and skill from human annotators than creating modal annotations. Consequently, obtaining amodal annotations through manual annotation is an extremely time-consuming and costly process. For this reason, generating amodal annotations using simulation tools is a much more desirable option.
In this work, we aim to address these three challenges by building a unified and flexible Python-based tool to generate customizable, high-quality, and photorealistic datasets for UOAIS of cluttered tabletop scenes. We focus our work on cluttered tabletop scenes as tabletops are a common place where grasping and manipulation tasks are performed. We develop a novel unified framework that combines the rendering and annotating of scenes and objects in a dataset into a single pipeline. Users can customize the type of scene rendered, the number and variety of objects in the scene, and the types of annotations required for their dataset. Our tool is built on NVIDIA's Isaac Sim Replicator Composer to leverage its high-fidelity graphics rendering capability.
Our key contributions are summarized as follows:
* We develop a pipeline to automatically render photorealistic cluttered tabletop scenes and generate ground truth amodal instance segmentation masks. This eliminates the need for any manual labeling in the dataset generation process while ensuring the quality and accuracy of the dataset. Based on our pipeline, we designed a dataset generation tool to create photorealistic and accurately-labeled custom datasets for UOAIS (refer to Figure <ref>(a)). Our tool can be easily added to NVIDIA Isaac Sim to leverage its ability to render complex and photorealistic 3D scenes with high fidelity.
* Our tool provides a rich set of annotations related to amodal instance segmentation (refer to Figure <ref>(b)): modal (visible) and amodal instance segmentation masks, occlusion masks, occlusion rates, and occlusion order adjacency matrix. Users can easily select which annotations to include in their dataset based on the requirements of their application.
* We propose a novel method to evaluate how accurately an amodal instance segmentation model can determine the object occlusion ordering in a scene by computing the scene's Occlusion Order Accuracy (ACC_OO).
* We generate a large-scale sample synthetic dataset using our tool consisting of amodal instance segmentation labels for users to train and evaluate their models on novel objects. Our dataset will be made publicly available.
§ RELATED WORK
§.§ Amodal Instance Segmentation in Robotics
In recent years, there has been growing interest in developing amodal instance segmentation methods to enable more robust object detection and tracking in complex scenes. However, several limitations have been encountered in research on amodal instance segmentation in robotic grasping, such as the dearth of large-scale, high-quality training data and visual domain mismatch resulting in poor Sim-to-Real transfer.
Lack of Large-scale High-quality Training Data. Recently, a number of amodal datasets <cit.> have been developed for indoor scenes. However, there is a lack of high-quality datasets designed for amodal instance segmentation related to robotic grasping and manipulation tasks in cluttered tabletop scenes. Gilles et al. <cit.> recently presented a new benchmark dataset for evaluating the performance of ambidextrous grasping systems in bin-picking scenarios. Nevertheless, their work is limited in terms of scene and object variety and may not be widely applicable beyond the bin-picking scenario. Richtsfeld et al. <cit.> proposed the Object Segmentation Database (OSD) to facilitate the development of object segmentation algorithms in robotics and computer vision systems. Suchi et al. <cit.> designed a semi-automatic labeling tool to produce the Object Cluttered Indoor Dataset (OCID). Despite these efforts, neither OSD nor OCID includes amodal instance segmentation masks in their annotations. Recently, Back et al. <cit.> introduced amodal annotations to OSD. Nevertheless, this was done manually, which could be a time-consuming and error-prone process.
Sim-to-Real Problem. Xie et al. <cit.> created the Tabletop Object Dataset (TOD), a dataset comprising scenes that lack photorealism. This characteristic, coupled with the lack of domain randomization, might result in a significant Sim-to-Real gap. The UOAIS-Sim dataset <cit.> shares this limitation, as it also lacks domain randomization.
§.§ Tools for Generating Synthetic Datasets
With the rapid development of deep learning, the demand of researchers for synthetic datasets has increased in recent years, which has led to the development of various tools for generating these datasets <cit.>. For robotics and computer vision applications, PyBullet and MuJoCo <cit.> are commonly used physics simulators for generating synthetic data. Xie et al. <cit.> pre-trained an RGB-D unseen object instance segmentation model using PyBullet. Tobin et al. <cit.> used MuJoCo to generate synthetic images with domain randomization, which can bridge the Sim-to-Real gap by realistically randomizing 3D content. Simulation tools such as PyBullet and MuJoCo typically come with renderers that are accessible and flexible, but they lack physically based light transport simulation, photorealism, material definitions, and camera effects.
To obtain better rendering capabilities, researchers have also explored the use of video game-based simulation tools, such as Unreal Engine (UE4) or Unity 3D. For example, Qiu and Yuille <cit.> exported specific metadata by adding a plugin to UE4. In addition, Unity 3D can generate metadata and produce scenes for computer vision applications using the official computer vision package. Although game engines provide the most advanced rendering technology, they prioritize frame rate over image quality and offer limited capabilities in light transport simulation.
Ray-tracing technology has gained significant traction in creating photorealistic synthetic datasets, as it enables the simulation of light behavior with high accuracy. Software applications such as Blender, NVIDIA OptiX, and NVIDIA Isaac Sim have all incorporated ray-tracing techniques into their functionality. The Replicator Composer, a component of NVIDIA Isaac Sim, constitutes an excellent tool for creating tailored synthetic datasets to meet various requirements in robotics. In this work, we leverage this platform to design a customized pipeline to generate a synthetic dataset tailored to the specific demands of UOAIS for cluttered tabletop scenes.
§ METHOD
It is important to note that at the time of our work (in Isaac Sim version 2022.1.0), NVIDIA Isaac Sim Replicator Composer does not natively support the functionality required to generate an amodal instance segmentation dataset. In light of this limitation, we have undertaken extensive modifications to the Replicator Composer's original source code, thereby introducing new features that enable the creation of a unified dataset generation pipeline.
Our dataset generation pipeline is illustrated using the diagram in Figure <ref>. We use a YAML file to store the parameters and configurations of the scenes to be rendered. The objects and settings required to render the scene are retrieved based on the instructions in the configuration file. We collectively term the objects, materials, and light sources used in our pipeline as assets. Thereafter, the tabletop scene with objects floating above the table is rendered in Isaac Sim. We then run a physical simulation to drop the rendered objects onto the table. For every view within a scene, camera viewpoints and lighting conditions are re-sampled. Subsequently, ground truth annotations are obtained and systematically recorded to create the dataset.
§.§ Preparing Each Scene
The method to prepare each scene is shown in Figure <ref>. A table is randomly sampled from the assets in Omniverse Nucleus and is rendered at the center of a room. The texture and materials of the table, ceiling, wall, and floor are randomized for every scene to ensure domain randomization. The objects are added to the scene with randomized x, y, and z coordinates and orientations. We randomly sample (with replacement) N_lower to N_upper objects to render for each scene. By default, N_lower=1, N_upper=40. Each object is initialized with real-life dimensions, randomized rotations and coordinates, allowing for diverse object arrangements across scenes. Each object also has mass and collision properties so that they can be dropped onto the tabletop in our physics simulation.
§.§ Physical Simulation of Each Scene
Upon completing the scene preparation, the rendered objects are dropped onto the table surface using a physics simulation. The simulation is paused after t seconds (t=5 by default), halting any further movement of the objects. During the simulation, any objects that rebound off the tabletop surface and fall outside the spatial coordinate region of the tabletop surface (i.e., either below the table or beyond the width and length of the table) are automatically removed. This is necessary to prevent the inclusion of extraneous and irrelevant objects outside the specified tabletop region during the annotation process from different viewpoints.
§.§ Sampling of Camera Viewpoints
To capture annotations for each scene from multiple viewpoints, we enhance the approach by Gilles et al. <cit.>, which only uses fixed viewpoint positions, by introducing a feature that captures V viewpoints at random positions within two concentric hemispheres, as illustrated in Figure <ref>. V can be set by the user. The radii of the two concentric hemispheres are uniformly sampled within the range r_view_lower m to r_view_upper m, where r_view_lower and r_view_upper are defined in Equations <ref> and <ref>. Users may also set fixed values for r_view_lower and r_view_upper should they wish to do so.
r_view_lower = max(w/2, l/2)
r_view_upper = 1.7 × r_view_lower
The hemisphere’s spherical coordinates are parameterized using three variables r_view, u, and v. To generate the camera coordinates in the world frame, we first obtain the radius of the hemisphere r_view by uniform sampling between r_view_lower and r_view_upper. Next, we uniformly sample u,v ∈ [0,1], then substitute all the sampled values into Equations <ref>, <ref> and <ref> to compute the Cartesian coordinates of the camera.
x = r_viewsin(arccos(1-v))cos(2 π u)
y = r_viewsin(arccos(1-v))sin(2 π u)
z = r_viewcos(arccos(1-v))
Once the camera coordinates are set, the orientation of each camera is set such that each viewpoint looks directly at the center of the tabletop surface (0, 0, h).
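A minimal NumPy sketch of this viewpoint sampling step (independent of Isaac Sim; the function and argument names are illustrative) reads:

import numpy as np

def sample_camera_position(table_w, table_l, rng=np.random.default_rng()):
    # Radius bounds derived from the table size, direction from uniform
    # u, v in [0, 1]; returns one point on the upper hemisphere.
    r_lower = max(table_w / 2.0, table_l / 2.0)
    r_upper = 1.7 * r_lower
    r = rng.uniform(r_lower, r_upper)
    u, v = rng.uniform(0.0, 1.0, size=2)
    theta = np.arccos(1.0 - v)                     # polar angle
    x = r * np.sin(theta) * np.cos(2.0 * np.pi * u)
    y = r * np.sin(theta) * np.sin(2.0 * np.pi * u)
    z = r * np.cos(theta)
    return np.array([x, y, z])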
§.§ Sampling of Lighting Conditions
To simulate different indoor lighting conditions, we resample L spherical light sources, with L between L_lower and L_upper, for each viewpoint (Figure <ref>). By default, we set L_lower and L_upper to be 0 and 2, respectively. To position L spherical light sources for a viewpoint, we adopt a similar approach to the camera viewpoint sampling method discussed in Section <ref>. In contrast to the approach by Back et al. in <cit.>, we use spherical light sources that emit light in all directions. Furthermore, we uniformly sample light source temperatures between 2,000 K and 6,500 K. The default light intensity of each light source is uniformly sampled between 100 lx and 20,000 lx, and the default light intensity of ceiling lights in the scene is also sampled uniformly between 100 lx and 2,000 lx. To achieve diverse indoor lighting conditions for tabletop scenes, users have the flexibility to adjust the number of spherical light sources, as well as their intensities and temperatures.
Similar to the sampling method for the camera viewpoint coordinates, we have designed a feature that samples the lower and upper radii bounds for the light sources based on the camera hemisphere's upper bound radius, r_view_upper. The sampled lower and upper bound radii constraints for the lighting hemisphere r_light_lower and r_light_upper are as follows:
r_light_lower = r_view_upper + 0.1 m
r_light_upper = r_light_lower + 1 m
§.§ Capturing of Ground Truth Annotations
The process of capturing the ground truth annotations for a scene is illustrated in Figure <ref>. At each view, the RGB and depth images of the tabletop scene are captured (Figure <ref>(a)). The built-in instance segmentation function in Isaac Sim Replicator Composer is employed to capture the instance segmentation mask of the entire scene from a viewpoint (Figure <ref>(b)). Subsequently, each object’s visible mask is cropped from the instance segmentation mask of the scene. To obtain the amodal mask of each object on the simulated tabletop scene, we have developed the subsequent steps.
Initially, all objects' visibility is disabled. For each object o within the scene, its visibility is enabled, and the instance segmentation function is utilized to capture its amodal mask (Figure <ref>(c)). Following this, we compute the object's occlusion mask and occlusion rate, as presented in Figure <ref>(d).
occlusionMask = amodalMask - visibleMask
The occlusion rate of the object o can be computed by dividing the number of pixels in the occlusion mask by the number of pixels in the amodal mask. If the occlusion rate of the object o is equal to 1, it implies that object o is completely obscured from the viewpoint, thus we do not save the object o’s annotation for this view. The visibility of object o is then disabled to capture the masks of the next object. Following the preservation of all objects' masks, we use Algorithm <ref> to generate the Occlusion Order Adjacency Matrix (OOAM) for this viewpoint (Figure <ref>(e)). For a scene with M objects, the OOAM contains M × M elements, where the element (i, j) is a binary value in the matrix which indicates whether object i occludes object j. Given the OOAM, we can easily construct the Occlusion Order Directed Acyclic Graph (OODAG) to visualize the occlusion order in the viewpoint (Figure <ref>(e)). We provide a detailed explanation of the OODAG in our supplementary materials. After that, the visibility of all objects is enabled to prepare for the capturing of annotations from the next viewpoint of the scene.
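A simplified NumPy sketch of this per-viewpoint annotation step is given below. It assumes the per-object amodal and visible masks are already available as boolean arrays, and the occlusion rule used (object i occludes object j if the visible region of i overlaps the occluded region of j) is one plausible reading of Algorithm <ref>, not a verbatim transcription of it:

import numpy as np

def occlusion_annotations(amodal_masks, visible_masks):
    # amodal_masks, visible_masks: boolean arrays of shape (M, H, W).
    M = amodal_masks.shape[0]
    occ_masks = amodal_masks & ~visible_masks          # amodal minus visible
    amodal_area = amodal_masks.reshape(M, -1).sum(axis=1)
    occ_rates = occ_masks.reshape(M, -1).sum(axis=1) / np.maximum(amodal_area, 1)
    ooam = np.zeros((M, M), dtype=np.uint8)
    for i in range(M):
        for j in range(M):
            # Object i occludes object j if the visible part of i overlaps
            # the occluded region of j (one plausible occlusion rule).
            if i != j and np.any(visible_masks[i] & occ_masks[j]):
                ooam[i, j] = 1
    return occ_masks, occ_rates, ooam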
§.§ Saving of Ground Truth Annotations
We saved the RGB and depth images as PNG images. The OOAM of the objects in each image is saved as a NumPy file. The amodal, visible, and occlusion masks are saved as Run-length Encoding (RLE) in COCO JSON format to optimize disk space used by the generated datasets. We also recorded each object’s visible bounding box, image ID, and object name in the generated COCO JSON file.
§ DATASET DETAILS
To demonstrate the capabilities of SynTable, we generated a sample synthetic dataset, SynTable-Sim, using our pipeline, for training and evaluating UOAIS models. Note that users can also generate other custom datasets that meet the specific requirements of their application using SynTable.
§.§ Object Models Used in Generating SynTable-Sim
We use 1075 object CAD models from the Google Scanned Objects dataset <cit.> and the Benchmark for 6D Object Pose Estimation (BOP) <cit.> for generating our train dataset. The Google Scanned Objects dataset features over 1030 photorealistic 3D scanned household objects with real-life dimensions, and BOP features 3D object models from household and industrial objects. As the context of our work is in robotic grasping, we only select objects that a parallel gripper can grasp to be part of our dataset. Upon inspection of the Google Scanned Objects dataset, we filter out invalid objects that contain more than two instances in each model and keep the remaining 891 valid objects for our training dataset. From the BOP, we exclude 21 objects from the YCB-Video dataset that we include in our validation dataset and use the remaining 184 objects for our training dataset. We also create a synthetic validation set using 78 novel objects from the YCB dataset <cit.>. We sample a table object from 10 Omniverse Nucleus table assets to provide randomization for each scene. To load the 3D object models into Isaac Sim, we convert the OBJ and texture files to the Universal Scene Description (USD) format.
§.§ Dataset Configuration
With 50 viewpoints for each scene, we generated 900 scenes to create 45,000 RGB-D images for the training dataset and 100 scenes to create 5,000 RGB-D images for the validation dataset. N_lower=1 to N_upper=40 objects are rendered on randomly textured tabletop planes in each scene. We used 130 materials from the Omniverse Nucleus material assets to be applied randomly on the walls, floor, and table for domain randomization purposes. L_lower=0 to L_upper=2 spherical lights are sampled for each scene. The viewpoint and lighting hemisphere parameters are automatically sampled based on the table dimensions. The camera parameters used are horizontal aperture: 2.63, vertical aperture: 1.96, and focal length: 1.88 to mimic the configuration of the RealSense LiDAR Camera L515. The rest of the parameters follow the default configurations of the pipeline.
§.§ Syntable-Sim Versus Other Cluttered Tabletop Datasets
We compare our SynTable-Sim dataset with several other cluttered tabletop datasets from previous works in Table <ref>. Our tabletop dataset is the only one providing all annotations related to amodal instance segmentation. Our dataset also has the most extensive variety of objects, the highest number of occlusion instances, and the second highest average occlusion rate. These characteristics make our dataset very challenging for amodal instance segmentation tasks.
§ EXPERIMENTS
In this section, we present the results of our experiments aimed at evaluating the effectiveness of our dataset generation pipeline in producing synthetic datasets with good Sim-to-Real transfer performance. We use our SynTable-Sim sample dataset to train a state-of-the-art (SOTA) UOAIS model, UOAIS-Net <cit.>. UOAIS-Net is then evaluated on the SynTable-Sim validation set and OSD-Amodal <cit.> test set. To the best of our knowledge, the OSD-Amodal dataset is the only publicly available real-world tabletop scene dataset that provides amodal ground truth masks.
§.§ Training Strategy
We train UOAIS-Net on the UOAIS-Sim tabletop and SynTable-Sim datasets using an NVIDIA Tesla V100 GPU with 16 GB of memory. For both datasets, we use 90% of the images for training and 10% for validation. To train UOAIS-Net using the UOAIS-Sim tabletop dataset, we use the same hyperparameters as Back et al. <cit.>. To train UOAIS-Net with SynTable-Sim, we modified the depth range hyperparameter, which is used to preprocess input depth images. Specifically, we changed the range from the 2500 mm to 40000 mm range set by Back et al. to a narrower range of 250 mm to 2500 mm. This adjustment is required because our dataset reflects real-world proportions and has a smaller depth range than the UOAIS-Sim dataset.
§.§ Evaluation Metrics
We measure the performance of UOAIS-Net on the following traditional metrics <cit.>: Overlap P/R/F, Boundary P/R/F, and [email protected] for the amodal, visible, and invisible masks. Overlap P/R/F and Boundary P/R/F evaluate the whole area and the sharpness of the prediction, respectively, where P, R, and F are the precision, recall, and F-measure of instance masks after the Hungarian matching, respectively. [email protected] is the percentage of segmented objects with an Overlap F-measure greater than 0.75. We also report the accuracy (ACC_𝒪) and F-measure (F_𝒪) of occlusion classification, where ACC_𝒪 = δ/α, F_𝒪 = 2P_oR_o/P_o+R_o, P_o = δ/β, R_o = δ/γ. α is the number of the matched instances after the Hungarian matching. β, γ, and δ are the number of occlusion predictions, ground truths, and correct predictions, respectively. We provide more details about the evaluation metrics in our supplementary materials.
Due to the subjectivity of the objects’ invisible masks, evaluating UOAIS model performance solely based on the overlap and boundary P/R/F of segmented objects may be inaccurate. The current UOAIS occlusion evaluation metrics measure how well the model can predict if individual objects are occluded. However, these metrics do not measure object occlusion ordering in a scene which would be helpful in scene understanding for robotic grasp planning. The OOAM can represent a scene's occlusion order, while the OODAG can be derived from the OOAM. By knowing the occlusion order in a scene, a robot can use topological sorting of the predicted OODAG to plan the order of grasping to reach the occluded object of interest. Thus, to evaluate the accuracy of object occlusion order in an image, we introduce a new metric called Occlusion Order Accuracy (ACC_OO) as stated in Equation <ref>.
ACC_OO = (sum(similarityMatrix) - gtOOAMDiagonalSize) / (gtOOAMSize - gtOOAMDiagonalSize)
In Equation <ref>, similarityMatrix is the element-wise equality comparison between the ground truth OOAM, gtOOAM, and the predicted OOAM, predOOAM. As an object cannot occlude itself, the diagonal of any OOAM is always 0. Thus, we subtract the number of elements along the diagonal of gtOOAM, gtOOAMDiagonalSize, from the calculation of ACC_OO. ACC_OO is used to evaluate the model's ability to accurately determine the order of occlusions in a clutter of objects by comparing the OOAM generated by the model to the ground truth OOAM using Algorithm <ref>.
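Assuming the predicted instances have already been matched to the ground truth ones so that both matrices share the same object ordering, the metric reduces to a few lines of NumPy (an illustrative sketch):

import numpy as np

def occlusion_order_accuracy(gt_ooam, pred_ooam):
    # Fraction of off-diagonal OOAM entries on which the prediction agrees
    # with the ground truth; the (always-zero) diagonal is excluded.
    M = gt_ooam.shape[0]
    similarity = (gt_ooam == pred_ooam)
    return (similarity.sum() - M) / (M * M - M)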
§.§ Results
Table <ref> compares the performance of UOAIS-Net on the OSD-Amodal dataset after training on the UOAIS-Sim tabletop dataset and on our SynTable-Sim sample dataset. We can see that UOAIS-Net trained on the SynTable-Sim dataset outperforms UOAIS-Net trained on the UOAIS-Sim tabletop dataset in all metrics except F_𝒪. Most importantly, the OOAM accuracy of UOAIS-Net improves by 6.6% after being trained on SynTable-Sim. This remarkable improvement shows that UOAIS-Net trained using a SynTable-generated synthetic dataset can determine the object occlusion order in a cluttered tabletop scene much more accurately than when it is trained on the UOAIS-Sim tabletop dataset. We provide images of the inference results on the OSD-Amodal dataset in the supplementary materials. A detailed breakdown of the precision P, recall R, F-measure F, and [email protected] scores for the amodal, invisible, and visible masks is shown in Table <ref>. We observe that except for the Boundary precision scores of the amodal and invisible masks, UOAIS-Net achieves substantial improvements in the precision, recall, and F-measure scores for the amodal, invisible, and visible masks. Similarly, from Table <ref>, the UOAIS-Net model trained on the SynTable-Sim dataset also outperforms the one trained on the UOAIS-Sim tabletop dataset in all metrics when both models are benchmarked on the SynTable-Sim validation dataset. Our experiments demonstrate the effectiveness of our proposed dataset generation pipeline, SynTable, in improving the Sim-to-Real transfer performance of SOTA deep learning computer vision models for UOAIS. These results also highlight the potential of SynTable for addressing the challenge of annotating amodal instance segmentation masks.
§ CONCLUSION
In conclusion, we present SynTable, a novel synthetic data generation pipeline for generating photorealistic datasets that facilitate amodal instance segmentation of cluttered tabletop scenes. SynTable enables the creation of complex 3D scenes with automatic annotation of diverse metadata, eliminating the need for manual labeling while ensuring dataset quality and accuracy. We demonstrated the effectiveness of SynTable by generating an amodal instance segmentation dataset and using it to train UOAIS-Net. As a result, UOAIS-Net achieved significantly improved Sim-to-Real transfer performance on the OSD-Amodal dataset, particularly in determining the occlusion order of objects in a cluttered tabletop scene. We offer SynTable as a contribution to the academic community, with the hope that it will prove useful for various research endeavors.
§ APPENDIX
This supplementary material offers dataset visualization, qualitative results, and additional technical details to support the main paper. Section <ref> provides a comprehensive elaboration of the evaluation metrics employed. Furthermore, Section <ref> delineates the process of generating an occlusion order directed acyclic graph from the occlusion order adjacency matrix to classify objects into three distinct order layers. Lastly, Section <ref> showcases some qualitative inference results of UOAIS-Net on the OSD-Amodal dataset.
§ DETAILS ABOUT EVALUATION METRICS
In this paper, we employ the precision/recall/F-measure (P/R/F) metrics, as defined in <cit.>. This metric favors methods that accurately segment the desired objects while penalizing those that produce false positives. Specifically, the precision, recall, and F-measure are calculated between all pairs of predicted and ground truth objects. The Hungarian method, employing pairwise F-measure, is utilized to establish a match between predicted objects and ground truth. Given this matching, the Overlap P/R/F is computed by:
P=∑_i|c_i∩ g(c_i)|/∑_i|c_i|, R=∑_i|c_i∩ g(c_i)|/∑_j|g_j|
F=2 P R/P+R
where c_i denotes the set of pixels belonging to predicted object i, g(c_i) is the set of pixels of the matched ground truth object of c_i after Hungarian matching, and g_j is the set of pixels for ground truth object j.
Although the aforementioned metric provides valuable information, it fails to consider the boundaries of the objects. Therefore, Xie et al. <cit.> proposed the Boundary P/R/F measure to supplement the Overlap P/R/F. The calculation of Boundary P/R/F involves the same Hungarian matching as used in the computation of Overlap P/R/F. Given these matchings, the Boundary P/R/F is computed by:
P=∑_i|c_i∩ D[g(c_i)]|/∑_i|c_i|, R=∑_i|D[c_i] ∩ g(c_i)|/∑_j|g_j|
F=2 P R/P+R
Here, overloaded notations are used to represent the sets of pixels belonging to the boundaries of the predicted object i and the ground truth object j as c_i and g_j, respectively. The dilation operation is denoted by D[·], which allows for some tolerance in the prediction. The metrics we use are a combination of the F-measure described in <cit.> and the Overlap P/R/F as defined in <cit.>.
In our work, we use the Overlap and Boundary P/R/F evaluation metrics to evaluate the accuracy of the predicted visible, invisible, and amodal masks. In the context of the Overlap P/R/F metrics, c_i denotes the set of pixels belonging to the predicted visible, invisible, and amodal masks, g(c_i) denotes the set of pixels belonging to the matched ground-truth visible, invisible and amodal masks annotations, and g_j is the ground-truth visible, invisible and amodal mask. The meaning of c_i, g(c_i), and g_j are similar in the context of the Boundary P/R/F metrics.
An additional vital evaluation metric used in our paper is the [email protected]. This metric represents the proportion of segmented objects with an Overlap F-measure greater than 0.75. It is important not to confuse this metric with the F-measure computed for the Overlap and Boundary P/R/F. The F-measure for Overlap and Boundary is a harmonic mean of a model's average precision and average recall, while [email protected] indicates the percentage of objects from a dataset that can be segmented with high accuracy. The F in [email protected] refers to the F-measure computed for a ground truth object after the Hungarian matching of the ground truth mask j with the predicted mask i as defined in <cit.> and stated in Equation (<ref>).
P_i j=|c_i ∩ g_j|/|c_i|, R_i j=|c_i ∩ g_j|/|g_j|
F_i j=2 P_i j R_i j/P_i j+R_i j
The notation c_i denotes the set of pixels that belong to a predicted region i, while g_j represents all the pixels that belong to a non-background ground truth region j. In addition, P_i j represents the precision score, R_i j represents the recall score, and F_i j represents the F-measure score that corresponds to this particular pair of predicted and ground truth regions.
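As an illustration, the Overlap P/R/F can be computed from predicted and ground-truth instance masks with SciPy's Hungarian solver as sketched below; this follows the formulas above but is not necessarily the exact evaluation code used in our experiments:

import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap_prf(pred_masks, gt_masks):
    # pred_masks, gt_masks: lists of boolean H x W arrays.
    n_p, n_g = len(pred_masks), len(gt_masks)
    pair_f = np.zeros((n_p, n_g))
    inter = np.zeros((n_p, n_g))
    for i, c in enumerate(pred_masks):
        for j, g in enumerate(gt_masks):
            ov = np.logical_and(c, g).sum()
            p = ov / max(c.sum(), 1)
            r = ov / max(g.sum(), 1)
            pair_f[i, j] = 2 * p * r / max(p + r, 1e-12)
            inter[i, j] = ov
    rows, cols = linear_sum_assignment(-pair_f)     # match on pairwise F
    matched_overlap = inter[rows, cols].sum()
    precision = matched_overlap / max(sum(c.sum() for c in pred_masks), 1)
    recall = matched_overlap / max(sum(g.sum() for g in gt_masks), 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_measure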
§ OCCLUSION ORDER DIRECTED ACYCLIC GRAPH (OODAG)
After obtaining the Occlusion Order Adjacency Matrix (OOAM), we can generate the occlusion order directed graph from it. For each non-zero entry (i, j) in the OOAM, we draw a directed edge from node i to node j. If the entry is zero, we do not draw an edge. A non-zero entry at (i,j) represents that object i is occluding object j.
For example, the OOAM generated in Figure <ref> shows that (i,j)=(1,12), where i and j are the object indices (the bounding box labels) in the image. This means that object 1 occludes object 12, and a directed edge will point from object 1 to object 12. From the generated Directed Occlusion Graph, we can also check whether the graph is cyclic or acyclic using graph cycle detection methods such as Depth First Search (DFS) and Breadth First Search (BFS). Only if the graph has no directed cycles (Directed Acyclic Occlusion Graph) can topological sorting be implemented to find the picking sequence to safely grasp all objects in the scene.
In the generated Occlusion Order graph, we further classify objects into three different order layers: Top, Intermediate, and Bottom. Objects in the top layer are not occluded by any other object and can be grasped directly. Objects in the intermediate layer are occluded but also occlude other objects. Objects in the bottom layer are occluded but do not occlude other objects.
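As a sketch, the graph construction, the three-layer classification, and the topological picking order can be written with NetworkX as follows (the OOAM is assumed to be a NumPy array whose node indices correspond to the object indices):

import numpy as np
import networkx as nx

def oodag_layers(ooam):
    # Edge i -> j whenever object i occludes object j.
    G = nx.from_numpy_array(np.asarray(ooam), create_using=nx.DiGraph)
    layers = {"top": [], "intermediate": [], "bottom": []}
    for node in G.nodes:
        occluded = G.in_degree(node) > 0       # some object occludes it
        occludes = G.out_degree(node) > 0      # it occludes some object
        if not occluded:
            layers["top"].append(node)
        elif occludes:
            layers["intermediate"].append(node)
        else:
            layers["bottom"].append(node)
    # A picking sequence exists only when the graph is acyclic.
    order = list(nx.topological_sort(G)) if nx.is_directed_acyclic_graph(G) else None
    return G, layers, order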
§ QUALITATIVE INFERENCE RESULTS OF UOAIS-NET ON THE OSD-AMODAL DATASET
After training the UOAIS-Net model <cit.> on both the SynTable-Sim and UOAIS-Sim (tabletop) datasets <cit.>, we present some of our qualitative results in Figure <ref>. As discussed in the main text of our paper, UOAIS-Net trained on the SynTable-Sim dataset exhibits superior performance compared to UOAIS-Net trained on the UOAIS-Sim tabletop dataset. This observation is further supported by the inference results presented in Figure <ref>. Furthermore, as the scene becomes more cluttered, the UOAIS-Net model trained on the SynTable-Sim dataset clearly outperforms the one trained on the UOAIS-Sim tabletop dataset. In the context of robotic grasping on cluttered tabletops, foreground masking algorithms can be utilized to filter out the background and out-of-the-table predicted object instances.
|
http://arxiv.org/abs/2307.05101v1 | 20230711082144 | Summary characteristics for multivariate function-valued spatial point process attributes | [
"Matthias Eckardt",
"Carles Comas",
"Jorge Mateu"
] | stat.ME | [
"stat.ME",
"stat.AP"
] |
Summary characteristics for multivariate function-valued spatial point process attributes
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Matthias Eckardt^a, Carles Comas^b and Jorge Mateu^c
^a Chair of Statistics, Humboldt-Universität zu Berlin, Spandauer Strasse 1 , Berlin, Germany
^b Department of Mathematics, Universitat de Lleida, Av. Alcalde Rovira Roure 191,
Lleida, Spain
^c Department of Mathematics, Universitat Jaume I, E-12071, Castellón, Spain
Prompted by modern technologies in data acquisition,
the statistical analysis of spatially distributed function-valued quantities has attracted a lot of attention in recent years. In particular, combinations of functional variables and spatial point processes yield a highly challenging instance of such modern spatial data applications. Indeed, the analysis of spatial random point configurations, where the point attributes themselves are functions rather than scalar-valued quantities, is just in its infancy, and extensions to function-valued quantities still remain limited. In this view, we extend current existing first- and second-order summary characteristics for real-valued point attributes to the case where in addition to every spatial point location a set of distinct function-valued quantities are available. Providing
a flexible treatment of more complex point process scenarios, we build a framework to consider points with multivariate function-valued marks, and develop sets of different cross-function (cross-type and also multi-function cross-type) versions of summary characteristics that allow for the analysis of highly demanding modern spatial point process scenarios. We consider estimators of the theoretical tools and analyse their behaviour through a simulation study and two real data applications.
Keywords: cross-function mark correlation, forest monitoring data, mark variogram, mark weighted second order summary characteristics, nearest neighbour mark indices, urban economics
§ INTRODUCTION
Introducing general ideas from functional data analysis <cit.> into the field of spatial statistics, the statistical analysis of functional spatial data has attracted a lot of attention in recent years <cit.>. Potential applications from the literature include the analysis of regional penetration
resistance profiles <cit.>, air pollution monitoring data <cit.> and regional gross domestic product dynamics <cit.>. Different from more classical spatial statistics <cit.>, the objects of interest in any such data are the realised trajectories, i.e. curves, of some underlying continuous mechanism which are collected over some spatial domain 𝒳⊂R^d, usually d=2. As such, the functional observations themselves are assumed to be positively (resp. negatively) dependent over space relative to the distance between the spatial entities which needs to be accounted for in any statistical analysis. To this end, various approaches from classical spatial statistics were extended to functional outcomes yielding an ever-increasing methodological toolbox of different functional spatial data analysis techniques. Predetermined by the exact nature of 𝒳, these techniques help to investigate the spatial interrelations between the individual functional objects in geostatistical, areal or point process data contexts. However, despite a relatively large body of contributions on the analysis of geostatistical functional data and corresponding functional kriging approaches <cit.>, where a functional outcome is collected over a set of fixed locations,
and a growing number of contributions to functional areal data <cit.>, the analysis of spatial random point configurations, where the point attributes themselves are functions rather than scalar-valued quantities, is just in its infancy.
Despite the notable progress in spatial point process methodology with extensions to more challenging non-Euclidean domains for the points including the sphere, linear networks and graphs with Euclidean edges, extensions to more complicated non-scalar marks have not been covered much so far.
While the mark covariance <cit.>, mark correlation <cit.>, mark weighted K <cit.>, mark variogram <cit.>, and mark differentiation <cit.> functions, or corresponding nearest-neighbour versions <cit.>, have become prominent tools for the characterisation of real-valued marks, extensions to function-valued quantities still remain limited. In contrast to (marked) spatio-temporal point processes <cit.>, for which different clustered point process <cit.>,
Gibbsian <cit.>, log-Gaussian <cit.> and shot-noise <cit.> Cox model specifications, or corresponding second-order summary characteristics <cit.>, have been used to characterise the temporal evolution of a set of (marked) point locations, contributions to function-valued marks remain elusive. In particular, advances to sets of distinct function-valued attributes, i.e. multivariate curves, do not exist.
Originating in the pioneering paper of <cit.> and subsequent works by <cit.> which introduced and extended the mark correlation function for function-valued point attributes, <cit.> were the first to provide a mathematically rigorous treatment on the subject. Instead of the set {(𝐱_i, m(𝐱_i))}^n_i=1
with points 𝐱_i∈𝒳 and scalar-valued marks m(𝐱_i) on some suitable mark space M, these authors considered the set {(𝐱_i, (f(𝐱_i), l(𝐱_i)))}^n_i=1 where each point 𝐱_i is augmented by a function-valued quantity f(𝐱_i)∈F and, potentially, an additional Euclidean auxiliary mark l(𝐱_i) living on a suitable latent mark space L. As such, apart from the trivial case when no auxiliary mark is available, this formulation allows for (i) function-valued marked multivariate point patterns, where different types of points with one function-valued point attribute are observed, and (ii) function-valued marked point patterns where at each point additional real-valued information is available. While providing a flexible treatment of more complex point process scenarios which include unmarked (resp. scalar-valued marked) point processes as special case when the (f(𝐱_i),l(𝐱_i)) (resp. f(𝐱_i)) argument is ignored, extensions to points with multiple distinct function-valued marks have not been covered to the very best of our knowledge. Considering at least two distinct function-valued point attributes for each point location, this paper aims to fill this gap. In particular, sets of different cross-function summary characteristics for points with two distinct function-valued marks are introduced and extended to cross-type and also multi-function cross-type versions that allow for the analysis of highly demanding modern spatial point process scenarios. All data and R code to reproduce the proposed auto- and cross-function mark characteristics is made publicly available in a github repository <https://github.com/carlescomas/SppFDA>.
The remainder of the paper is structured as follows. After a general introduction to spatial point processes with multivariate function-valued point attributes, Section <ref> establishes different cross-function mark characteristics and potential extension to multitype point processes. In particular, extensions of classical test functions to the function-valued mark setting are discussed in Section <ref>. Estimators of the proposed mark characteristics are presented in Section <ref>. The proposed characteristics are evaluated through a simulation study in Section <ref>. Section <ref> presents an application of the proposed tools to two different data sources originating from forestry and urban economic contexts. The paper concludes with a discussion in Section <ref>.
§ SPATIAL POINT PROCESSES WITH MULTIVARIATE FUNCTION-VALUED MARKS
To extend the theory and methodology of function-valued marked spatial point processes to multivariate function-valued point attributes, let 𝒳 denote a subset of R^2 equipped with Borel sets ℬ(𝒳), and d(·) an Euclidean metric on 𝒳. On 𝒳, define Ψ_G={𝐱_i }^n_i=1 as ground, i.e. unmarked, spatial point process with intensity measure Λ_G. As such, Ψ_G is well embedded into the theory of spatial point processes and a rich body of different tools can directly be applied to investigate the structural properties of the points <cit.>. Associated with Ψ_G, denote by Ψ={𝐱_i, 𝐟(𝐱_i) }^n_i=1 a marked spatial point process on 𝒳×F^p with locations 𝐱_i∈𝒳 and p-variate associated function-valued marks 𝐟(𝐱_i)=(f_1(𝐱_i), …, f_p(𝐱_i)) on F^p where each f_h(𝐱_i): 𝒯⊆R↦R, h=1,…,p with 𝒯=(a,b), -∞≤ a≤ b≤∞. In general, F^p is assumed to be a Polish, i.e. complete separable metric, space equipped with σ-algebra ℱ^p=⊗_h=1^p ℱ_h <cit.>. For Ψ, the expected number of points N_h(·) in B∈ℬ(𝒳) with function-valued attribute in F_h∈ℱ_h corresponds to the intensity measure Λ_h(B× F_h) which simplifies to
E[N_h(B× F_h)]=Λ_h(B× F_h)=∫_B× F_hλ_G(𝐱) d𝐱 P(dF_h)
for fixed F_h in ℱ_h with λ_G being the intensity function of Ψ_G and P(dF_h) a reference measure on (F^p,ℱ^p). For stationary Ψ, i.e. if Ψ=Ψ_x with Ψ_x={ (𝐱_i+x, 𝐟(𝐱_i))}^n_i=1 for any translation x, and fixed f_h the intensity measure Λ_h(B× F_h) equals λ_hν(B) with λ_h denoting the intensity of Ψ with respect to F_h and ν(·) the Lebesgue measure, i.e. the volume, of its argument. Similarly, Ψ is called isotropic if Ψ=𝔯Ψ with 𝔯Ψ={ (𝔯𝐱_i, 𝐟(𝐱_i))}^n_i=1 for any rotation 𝔯.
To account for additional integer-valued marks, i.e. when different types of points are available, Ψ can be generalised to a m-variate (i.e. multitype) spatial point process Ψ with n=n_1+…+n_m points and multivariate function-valued point attributes on 𝒳^m×F^p with corresponding component processes Ψ_d, d=1,…,m and associated ground process Ψ_G. We note that the above point processes could also be extended by additional real-valued mark information, e.g. through additional auxiliary mark terms l(𝐱_i)∈R, which allows for the formulation of doubly-marked (multitype) point processes where each point is augmented by multivariate function-valued and one (resp. w distinct) real-valued marks living on 𝒳^m×F^p×R (resp. 𝒳^m×F^p×R^w).
§.§ Cross-function second-order mark summary characteristics and nearest neighbour indices
Apart from the first-order properties, a variety of second-order mark summary characteristics and their related nearest-neighbour versions have become useful methodological tools for the analysis of classical (real-valued) marked spatial point process scenarios which help to investigate the heterogeneity and interrelation between the observed point attributes, and to decide on the independent mark hypothesis as a function of the distance between pairs of points. To extend the methodological toolbox to the function-valued marks setting and define suitable cross-function characteristics, let f_h(𝐱) and f_l(𝐱') denote two distinct function-valued marks for a pair of distinct point locations in Ψ with interpoint distance d(𝐱,𝐱')=r, r>0. Adopting the core principles from classical mark characteristics and applying a pointwise evaluation first, different cross-function mark characteristics can be defined by introducing a test function t_f <cit.>, i.e. a map t_f: F×F→R^+, which itself takes the marks f_h and f_l for a pair of distinct points in Ψ as arguments. In what follows, we assume Ψ to be second-order stationary and isotropic such that the characteristics solely depend on the distance r, r>0, and it suffices to consider the marks at the origin ∘ and the distance 𝐫 where d(∘, 𝐫)=r. Depending on the precise specification of t_f, different mark characteristics can be constructed by taking the expectation E_∘,𝐫 of t_f under the condition that Ψ has indeed points at locations ∘ and 𝐫. Generalising the most prominent test functions from the literature to the function-valued marks setting, potential specifications of t_f include
t_1=1/2(f_h(∘)(t)- f_l(𝐫)(t))^2 and t_2(f_h(∘)(t), f_l(𝐫)(t))=min(f_h(∘)(t), f_l(𝐫)(t))/max(f_h(∘)(t), f_l(𝐫)(t)), which are based on the difference or the ratio between the pair of distinct marks, and
t_3=f_h(∘)(t)· f_l(𝐫)(t), t_4=f_h(∘)(t) and t_5=f_l(𝐫)(t), which are based on the product of the arguments <cit.>. Obviously, the above formulations include auto-mark characteristics as special cases for h=l. We note that apart from a concurrent setting, alternative pointwise test functions may be defined for the marks f_h(t) and f_l(s) with s<t.
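For illustration, these pointwise test functions translate directly into a few lines of base R acting on two curves stored as numeric vectors on a common time grid; this is a minimal sketch and the names t1 to t5 are our own shorthand for t_1,…,t_5.

## Minimal base-R sketch of the pointwise test functions t_1,...,t_5 for two curves
## f_h and f_l stored as numeric vectors fh, fl on a common time grid.
t1 <- function(fh, fl) 0.5 * (fh - fl)^2            # half squared difference
t2 <- function(fh, fl) pmin(fh, fl) / pmax(fh, fl)  # ratio of minimum to maximum
t3 <- function(fh, fl) fh * fl                      # product of the two marks
t4 <- function(fh, fl) fh                           # mark of the first point only
t5 <- function(fh, fl) fl                           # mark of the second point only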
§.§.§ Cross-function variation and differentiation characteristics
As a first cross-function mark characteristic, the pointwise cross-function mark variogram γ_hl(r,t), which helps to investigate the strength and range of the variation in the mark differences with respect to the distance 𝐫, can be derived by taking the conditional expectation E_∘,𝐫 of t_1. This pointwise characteristic can then be turned into a global cross-function mark variogram γ_hl(r), which corresponds to the L_2 metric, by integrating γ_hl(r,t) over 𝒯,
γ_hl(r)=∫_𝒯 E_∘,𝐫[t_1(f_h(∘)(t),f_l(𝐫)(t))] dt = ∫_𝒯γ_hl(r,t) dt.
While the limit of this characteristic equals the non-spatial variance and the normalised version yields a straight line that is constantly one under the independent mark assumption, large values of this cross-function characteristic will indicate a strong heterogeneity between the function-valued attributes f_h and f_l at a distance r.
Different from the cross-function mark variogram, taking t_3 as argument of E_∘,𝐫 yields a pointwise cross-function version of Stoyan's mark covariance function <cit.> defined by cov^Sto_hl(r,t)=E_∘,𝐫[t_3]-μ_h(t)·μ_l(t) with corresponding global characteristic cov^Sto_hl(r)=∫cov^Sto_hl(r,t) dt, where μ_h(t)=E[f_h(t)] and μ_l(t)=E[f_l(t)] are the non-spatial means of f_h(t) and f_l(t), respectively. Alternatively, a cross-function mark covariance can also be obtained by rewriting Cressie's <cit.> covariance function cov^Cre_hl as cov^Cre_hl(r,t)=E_∘,𝐫[t_3]-E_∘,𝐫[t_4]E_∘,𝐫[t_5] where cov^Cre_hl(r)=∫cov^Cre_hl(r,t) dt.
Inserting the ratio t_2 instead of the difference between the paired marks into the conditional expectation E_∘,𝐫[·] yields a pointwise cross-function version of the mark differentiation function <cit.> τ_hl(r,t) for function-valued marks defined by τ_hl(r,t)=1-E_∘,𝐫[t_2]
with global characteristic τ_hl(r)=∫τ_hl(r,t) dt. Obviously, values of τ_hl(r,t) equal or close to zero imply that the function-valued point attributes at the distance r are equal or almost identical, while increasing non-zero values indicate an increase in heterogeneity of the marks.
§.§.§ Cross-function correlation characteristics
Different from the difference and ratio based characteristics, an alternative set of cross-function mark characteristics can be defined through the product of the function-valued marks. Taking t_3 as argument of E_∘,𝐫 yields a pointwise cross-function version of the conditional mean product of marks c_hl(r,t) within a distance r at t∈𝒯. We note that c_hl(r,t) translates again into a global cross-function characteristic c_hl(r) by integration of the pointwise one over 𝒯,
c_hl(r)=∫_𝒯 E_∘,𝐫[f_h(∘)(t)· f_l(𝐫)(t)] dt = ∫_𝒯 c_hl(r,t) dt.
Further, normalising c_hl(r,t) by μ_h(t)·μ_l(t), i.e. the product of non-spatial means, yields a cross-function pointwise version of Stoyan's mark cross-correlation function κ_hl(r,t) <cit.> from which the global characteristic κ_hl(r) follows analogously to (<ref>) by integration of κ_hl(r,t) over 𝒯. We note that Stoyan's mark covariance function is indeed a linear transformation of the mark correlation function such that cov^Sto_hl(r) and κ_hl(r) are essentially the same <cit.>.
Apart from Stoyan's mark correlation function, <cit.> and <cit.> introduced two alternative mark correlation functions that could also be extended to the function-valued mark setting.
<cit.> proposed a simpler version of the above formulation of the mark correlation function in which the product of the mark values is replaced by the normalised sum of marks. Using this formulation allows for a straightforward extension to a pointwise version for function-valued marks f_h(∘)(t) and f_l(𝐫)(t) defined by
κ^Bei_hl(r,t)=E_∘,𝐫[(f_h(∘)(t)+f_l(𝐫)(t))/(μ_h(t)+μ_l(t))]
with κ^Bei_hl(r)=∫κ^Bei_hl(r,t) dt. As the numerator approaches the product (resp. the sum) of the means μ_h(t) and μ_l(t) in the limit, both κ_hl and κ^Bei_hl are constantly equal to one for all t∈𝒯 in case of independent marks, and positive or negative mark correlations can easily be identified by positive or negative deviations from one, respectively. Opposite to the above formulations, <cit.> introduced a different type of mark correlation function which is closely related to Pearson's correlation. Using the cross-function version of Cressie's mark covariance function, a pointwise cross-function analogue to Isham's mark correlation function can be defined as
κ^Ish_hl(r,t)=cov^Cre_hl(r,t)/(√(cov_hh(r,t))·√(cov_ll(r,t)))
where cov_hh(r,t)=E_∘,𝐫[f_h(∘)(t)· f_h(𝐫)(t)]-E_∘,𝐫[f_h(∘)(t)]E_∘,𝐫[f_h(𝐫)(t)], cov_ll(r,t) is defined analogously, and κ^Ish_hl(r)=∫κ^Ish_hl(r,t) dt.
Apart from the extended cross-function mark correlation characteristics outlined above, taking _4 or _5 as arguments of _∘,𝐫 leads to pointwise r-mark functions c_h∙(r,t) and c_∙ l(r,t), respectively, where c_h∙(r,t)=c_∙ l(r,t). As before, both pointwise r-mark functions translate into global characteristics c_h∙(r) and c_∙ l(r) by integration of c_h∙(r,t) and c_∙ l(r,t) over 𝒯, respectively. Further, normalisation of c_h∙(r,t) and c_∙ l(r,t) by μ_h(t) and μ_l(t) yields the pointwise r-mark correlation functions κ_h∙(r,t) and κ_∙ l(r,t), respectively, where κ_h∙(r)=∫κ_h∙(r,t) t and κ_∙ l(r)=∫κ_∙ l(r,t) t.
We note that the cross-function mark correlation and 𝐫-correlation functions can also be used to define a counterpart version of the U(r) function for function-valued marks, this U(r) being the mean product of marks sited at distance r apart,
U(r)=∫λ^2 g(r) κ_hl(r) da da',
where λ≡λ_G is the intensity of the points, g(r) the pair correlation function, and da and da' are two infinitesimally small areas containing the points 𝐱 and 𝐱', which are separated by a distance r <cit.>. Including second-order summary characteristics for both the points and the function-valued marks, this characteristic accounts jointly for the spatial variation of the point locations and the marks. Under the independent marks assumption, κ_hl(r)=1, whereas g(r)=1 under the complete spatial randomness hypothesis, i.e. the homogeneous Poisson point process case. Alternative formulations of U(r) can be achieved by substituting κ_hl(r) by the 𝐫-correlation functions κ_h∙(r) and κ_∙ l(r), the mark variogram γ_hl(r) or the mark differentiation function τ_hl(r), or by rewriting U(r) in polar coordinates, which also allows for anisotropic behaviour.
§.§.§ Cross-function nearest-neighbour indices and k-nearest neighbour characteristics
While the second-order cross-characteristics provide functional summary characteristics of the pairwise interrelations between the function-valued point attributes against the distance r, nearest-neighbour indices are essentially numerical mark summary characteristics which help to quantify the local variation between the marks for a pair of nearest-neighbouring points. Similar to the previous sections, different cross-function nearest-neighbour characteristics can be constructed by taking the conditional expectation of particular test functions which, in turn, only consider the function-valued marks f_h(∘)(t) and f_l()(t) at the origin ∘ and its nearest neighbouring point <cit.>.
Rewriting the test function _3 into a nearest-neighbour version _3^nn=f_h(∘)(t)· f_l()(t) and taking the conditional expectation _∘,[_3^nn] leads to a pointwise cross-function nearest-neighbour mark product index c_hl^nn(t) from which a corresponding pointwise nearest-neighbour mark product correlation index κ_hl^nn(t) derives directly by normalising c_hl^nn(t) by the product of means μ_h(t)·μ_l(t). Likewise, taking the conditional expectation of _4^nn= f_l()(t) yields a pointwise nearest-neighbour mark index c_∙,l^nn(t) which transforms into the pointwise nearest-neighbour mark correlation index by normalising c_∙,l^nn(t) by μ_l. Similarly, pointwise cross-function nearest neighbour mark variogram and mark differentiation indices γ_hl^nn(t) and τ_hl^nn(t) can be constructed by taking the conditional expectation _∘, of the test functions _1^nn=1/2(f_h(∘)(t)- f_l()(t))^2 and _2^nn=min_nn /max_nn where min_nn= min(f_h(∘)(t),f_l()(t)) and max_nn=max(f_h(∘)(t),f_l()(t)), respectively. We note that all pointwise indexes translate into global numerical summary characteristics by integration of the pointwise version over 𝒯.
Apart from considering only the function-valued mark of the nearest neighbouring point location 𝐳(∘), the nearest neighbour indices can also be used to compute cumulative cross-function k-nearest neighbour summary characteristics from the marks f_h and f_l at the origin ∘ and its v-th nearest neighbouring point 𝐳_v(∘) with v=1,…,k. Substituting 𝐳(∘) by 𝐳_v(∘), a corresponding cumulative mark correlation index can be computed from the pointwise cross-function k-th nearest neighbour index 𝒦_k(t),
𝒦_k(t) = 1/k E_∘, 𝐳_v(∑_v=1^k f_h(∘)(t)· f_l(𝐳_v(∘))(t))/(μ_h(t)μ_l(t)),
as 𝒦_k=∫𝒦_k(t) dt. Likewise, a cumulative mark variogram index can be defined as Γ_k=∫Γ_k(t) dt with
Γ_k(t)=1/k E_∘, 𝐳_v(∑_v=1^k 1/2(f_h(∘)(t)-f_l(𝐳_v(∘))(t))^2).
In addition, a pointwise counterpart version of Hui's mark dominance index D_k <cit.> for function-valued marks can be defined as
D_k(t)=1/kE_∘, 𝐳_v(∑_v=1^k1(f_h(∘)(t)>f_l(𝐳_v(∘))(t)))
which translates into a global characteristic by computing D_k=∫ D_k(t) dt.
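To illustrate how these cumulative indices could be computed from data, the following base-R sketch evaluates 𝒦_k, Γ_k and D_k for curves stored as rows of the matrices Fh and Fl on a time grid tgrid, with point coordinates in an n x 2 matrix xy; the data layout, the trapezoidal integration and the function names are our own choices, and the code is meant as an illustration rather than as the implementation accompanying the paper.

## Sketch: cumulative k-nearest-neighbour indices K_k, Gamma_k and D_k from data.
## xy: n x 2 matrix of coordinates; Fh, Fl: n x m matrices of curves on the grid tgrid.
trapz <- function(tg, v) sum(diff(tg) * (head(v, -1) + tail(v, -1)) / 2)

knn_indices <- function(xy, Fh, Fl, tgrid, k = 1) {
  D <- as.matrix(dist(xy)); diag(D) <- Inf
  n <- nrow(xy); m <- length(tgrid)
  mu_h <- colMeans(Fh); mu_l <- colMeans(Fl)      # non-spatial functional means
  Kk <- Gk <- Dk <- matrix(0, n, m)               # pointwise contributions per point
  for (i in 1:n) {
    nb <- order(D[i, ])[1:k]                      # the k nearest neighbours of point i
    for (j in nb) {
      Kk[i, ] <- Kk[i, ] + Fh[i, ] * Fl[j, ]
      Gk[i, ] <- Gk[i, ] + 0.5 * (Fh[i, ] - Fl[j, ])^2
      Dk[i, ] <- Dk[i, ] + (Fh[i, ] > Fl[j, ])
    }
  }
  ## average over points and neighbours, normalise K_k pointwise, integrate over T
  c(K_k     = trapz(tgrid, colMeans(Kk) / k / (mu_h * mu_l)),
    Gamma_k = trapz(tgrid, colMeans(Gk) / k),
    D_k     = trapz(tgrid, colMeans(Dk) / k))
}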
§.§ Cross-function mark-weighted summary characteristics
A different useful set of cross-function cumulative summary characteristics can be defined by adjusting classical functional point process summary characteristics for the function-valued marks by introducing a test function as weight into the specific functional point process summary expression. Although the principal idea also applies for the empty space and nearest neighbour contact distribution functions and related quantities, we explicitly only cover extension of second-order summary characteristics to the function-valued mark scenario including the mark-weighted pair correlation, K and L functions.
§.§.§ Mark-weighted characteristics for unitype point processes with multivariate function-valued marks
To define a suitable pair correlation function for function-valued marks f_h and f_l at locations 𝐱 and 𝐱' in Ψ, let α_t_f^(2)(t) denote the pointwise cross-function second-order factorial moment measure with density ϱ^(2)_t_f(t), i.e. the pointwise cross-function second-order product density function. For t_f=t_3, α_hl^(2) becomes
α^(2)_hl(B_1× B_2)(t)=∑^≠_(𝐱, f_h(𝐱)), (𝐱', f_l(𝐱')) ∈Ψ E[1_B_1(𝐱) 1_B_2(𝐱') f_h(𝐱)(t)· f_l(𝐱')(t)]
with density ϱ^(2)_hl(t). Then, a pointwise mark-weighted pair correlation function g_hl(r,t) can be defined as g_hl(r,t)=ϱ^(2)_hl(r,t)/(λ^2μ_h(t)μ_l(t)) and g_hl(r)=∫ g_hl(r,t) dt. Specifying t_f=t_5 instead, the pair correlation function equals g_∙ l(r,t)=ϱ^(2)_∙ l(r,t)/(λ^2μ_l(t)) where ϱ^(2)_∙ l is the density of α^(2)_∙ l(t).
Similarly, setting t_f=t_3, a cross-function pointwise extension of the mark-weighted K function <cit.> can be defined as
λμ_h(t)μ_l(t) K_hl(r,t)=E_∘[∑_(𝐱,f_l(𝐱))∈Ψ f_h(∘)(t)· f_l(𝐱)(t) 1_b(∘,r){𝐱}],
where b(∘,r) is a ball of radius r centred at the origin, λ is the intensity of the points and μ_h(t) and μ_l(t) are the non-spatial means of f_h(t) and f_l(t), respectively. Likewise, for t_4 and t_5, the mark-weighted pointwise K function changes to λμ_h(t) K_h∙(r,t)=E_∘[∑_(𝐱,f_l(𝐱))∈Ψ f_h(∘)(t) 1_b(∘,r){𝐱}] and λμ_l(t) K_∙ l(r,t)=E_∘[∑_(𝐱,f_l(𝐱))∈Ψ f_l(𝐱)(t) 1_b(∘,r){𝐱}], respectively <cit.>.
Analogously to the classical scalar-valued case, the mark-weighted L function L_t_f(r,t) is preferable in practice to the mark-weighted K_t_f(r,t) function, where
L_t_f(r,t)=√(K_t_f(r,t)/π). As for the pair correlation function, both K_t_f(r,t) and L_t_f(r,t) translate into global cross-function characteristics by integration of the pointwise versions over 𝒯.
Finally, applying the principal idea of the above mark-weighted second-order summary characteristics to local summary characteristics, a localised version of, e.g., the pointwise cross-function mark-weighted K function for the u-th point location of Ψ can be defined for t_f=t_3 as
λμ_h(t)μ_l(t) K_u(r,t)= E[∑_(𝐱',f_l(𝐱'))∈Ψ f_h(𝐱_u)(t)· f_l(𝐱')(t) 1_b(𝐱_u, r){𝐱'}].
§.§.§ Extensions to multitype points with multivariate function-valued marks
In what follows, let Ψ_i={ (𝐱_i,𝐟(𝐱_i)}^n_i_i=1 and Ψ_j={ (𝐱_j,𝐟(𝐱_j)}^n_j_j=1 denote two component processes of Ψ where i≠ j and 𝐟(𝐱_i) and 𝐟(𝐱_j) are p-variate function-valued marks on F^p. For _f=_3 the pointwise cross-function second-order factorial moment measure can be rewritten into a cross-function cross-type measure α^(2)_ij,hl(t),
α^(2)_ij,hl(B_1× B_2)(t)=∑^≠_(𝐱_i, f_h(𝐱_i))∈Ψ_i, (𝐱_j, f_l(𝐱_j)) ∈Ψ_j E[1_B_1(𝐱_i) 1_B_2(𝐱_j) f_h(𝐱_i)(t)· f_l(𝐱_j)(t)],
with density ϱ^(2)_ij,hl(t). This product density, in turn, allows one to define a pointwise cross-function cross-type pair correlation function
g_ij,hl(r,t)=ϱ^(2)_ij,hl(r,t)/(λ_iλ_jμ_h(t)μ_l(t)) with g_ij,hl(r)=∫ g_ij,hl(r,t) dt. Further, a
pointwise mark-weighted cross-function cross-type K function can be defined as
λ_iμ_h(t)μ_l(t) K_ij,hl(r,t)=E_∘,i[∑_(𝐱_j, f_l(𝐱_j))∈Ψ_j t_f(f_h(∘)(t), f_l(𝐱_j)(t)) 1_b(∘,r){𝐱_j}]
from which a pointwise cross-function dot-type version can be obtained through λ K_i∙, hl=∑_j μ_h(t)μ_l(t)λ_j K_ij, hl(r).
§.§ Multiple point and function-valued marks
Finally, while the previous sections were restricted to the derivation of mark summary characteristics defined through a set of different test functions with at most two distinct function-valued point attributes, we now discuss potential extensions to marked point process scenarios with p≥ 3 distinct function-valued marks. For simplicity, we restrict to the trivariate case which could be extended naturally to sets of p with p≥ 3 distinct function-marks.
For points with function-valued marks 𝐟= (f_d, f_h, f_l), three different generalised test functions can be defined through unconditional and also conditional formulations. As a general first approach, the function f_d(t) could be related to the set { f_h,f_l} by specifying t_f(f_d(∘)(t), { f_h,f_l}(𝐫)(t)). Using this formulation and writing μ_hl(𝐫)(t) for the mean of the functions h and l at t∈𝒯 and distance 𝐫, trivariate versions of t_1 and t_3 can be expressed as t_1(f_d(∘)(t), { f_h,f_l}(𝐫)(t))=1/2(f_d(∘)(t)-μ_hl(𝐫)(t))^2
and t_3(f_d(∘)(t), { f_h,f_l}(𝐫)(t))=f_d(∘)(t)·μ_hl(𝐫)(t). Instead of μ_hl, alternative formulations might be defined through the sum of pairwise operations, e.g.
t_1(f_d(∘)(t), { f_h,f_l}(𝐫)(t))=1/2((f_d(∘)(t)-f_h(𝐫)(t))^2+(f_d(∘)(t)-f_l(𝐫)(t))^2).
Instead of only the h-th and l-th function-valued marks, a second general approach could be defined by relating the d-th function to all three functions, i.e. (f_d(∘)(t), 𝐟(𝐫)(t)). Similar to the above formulation, suitable specifications might be defined through μ_dhl(𝐫)(t), the mean of set 𝐟 at t∈𝒯 and distance 𝐫, or alternatively using a pairwise formulation which, in turn, would combine both auto- and cross-type terms.
While both of the above versions are specified through unconditional formulations, an alternative approach might derive from the conditional function-valued marks f_d|f_l and f_h|f_l yielding (f_d|f_l(∘)(t), f_h|f_l(𝐫)(t)). Both conditional marks could be derived by partialising out the effect of f_l from f_d and f_h using e.g. standard functional regression methods. Although interesting, this conditional formulation will not be pursued in this paper to make it more concise and focused.
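As a small illustration of the first, unconditional construction, the trivariate test functions based on the pointwise mean μ_hl and on the sum of pairwise differences can be written in base R as follows; the names are ours.

## Sketch: trivariate test functions relating f_d to the pair {f_h, f_l}.
t1_tri <- function(fd, fh, fl) 0.5 * (fd - (fh + fl) / 2)^2   # difference from the pointwise mean
t3_tri <- function(fd, fh, fl) fd * (fh + fl) / 2             # product with the pointwise mean
t1_tri_pair <- function(fd, fh, fl)                           # sum of pairwise squared differences
  0.5 * ((fd - fh)^2 + (fd - fl)^2)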
§ ESTIMATION OF CROSS-FUNCTION SUMMARY CHARACTERISTICS
After having discussed extensions of various mark characteristics for (multitype) spatial point processes with multiple function-valued marks, their estimation from observed spatial point patterns is presented next. As before, the empirical cross-function estimators for unitype point patterns are described first. To this end, let ψ denote a spatial point pattern of n points observed in a bounded observation window W, where each point is augmented by a multivariate function-valued point attribute, and denote by ψ the corresponding multitype point pattern with multivariate function-valued marks and components ψ_1,…,ψ_d. Further, let n(·) denote the cardinality, i.e. the number of points, of its argument.
§.§ Estimation of cross-function summary characteristics
Using the results of Section <ref> and writing c_t_f(r) to denote the conditional expectation for any specific test function t_f, both variation and product related cross-function mark summary characteristics can be derived through a generic function κ_t_f(r) whose specific form depends on the specification of t_f. Estimating the second-order t_f-product density ϱ^(2)_t_f(r), and its analogue version ϱ^(2)(r) for the ground pattern ψ_G, by
ϱ^(2)_t_f(r)=1/(2π r ν(W))∑^≠_(𝐱,f_h), (𝐱',f_l) ∈ψ ℓ(t_f(f_h(𝐱),f_l(𝐱'))) k_b(‖𝐱-𝐱'‖-r) e(𝐱,‖𝐱-𝐱'‖),
and
ϱ^(2)(r)=1/(2π r ν(W))∑^≠_𝐱,𝐱'∈ψ_G k_b(‖𝐱-𝐱'‖-r) e(𝐱,‖𝐱-𝐱'‖),
respectively, where
ℓ(t_f(f_h(𝐱),f_l(𝐱')))=∫_a^b t_f(f_h(𝐱)(t),f_l(𝐱')(t)) dt
and k_b(·) is a kernel function with bandwidth b, ν(·) the area of its argument, and e(·) an edge correction factor, then κ_t_f(r) can be estimated by
κ_t_f(r)=(ϱ^(2)_t_f(r)/ϱ^(2)(r))/c_t_f, for r > 0,
where c_t_f=∑_𝐱∑_𝐱'ℓ(t_f(f_h(𝐱),f_l(𝐱')))/n^2 is an estimator of c_t_f(∞).
We note that κ_t_f(r) can alternatively be estimated by
κ_t_f(r)=[1/(2π r ν(W))∑^≠_(𝐱,f_h), (𝐱',f_l) ∈ψ ℓ(t_f(f_h(𝐱),f_l(𝐱'))) k_b(‖𝐱-𝐱'‖-r) e(𝐱,‖𝐱-𝐱'‖)]/(λ^2 g(r) c_t_f),
where g(r)=ϱ^(2)(r)/λ^2, r≥ 0, and λ=n(W)/ν(W) are estimators for the pair correlation function and the intensity of the ground process, respectively.
Specifying t_f as t_1 and t_3 in the above formulation of κ_t_f, the cross-function mark variogram and mark correlation function can be estimated by
γ_hl(r)=[1/(2π r ν(W))∑^≠_(𝐱,f_h), (𝐱',f_l) ∈ψ ℓ(t_1(f_h(𝐱),f_l(𝐱'))) k_b(‖𝐱-𝐱'‖-r) e(𝐱,‖𝐱-𝐱'‖)]/(λ^2 g(r) c_t_1)
and
κ_hl(r)=[1/(2π r ν(W))∑^≠_(𝐱,f_h), (𝐱',f_l) ∈ψ ℓ(t_3(f_h(𝐱),f_l(𝐱'))) k_b(‖𝐱-𝐱'‖-r) e(𝐱,‖𝐱-𝐱'‖)]/(λ^2 g(r) μ_hμ_l),
respectively, where μ_l is the empirical functional mean of the mark f_l. Similarly, estimators for κ_h∙(r) (resp. κ_∙ l(r)) can be obtained by setting t_f to t_4 (resp. t_5) and substituting μ_h (resp. μ_l) for μ_hμ_l in (<ref>).
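A possible base-R implementation of the generic estimator is sketched below. For brevity it omits the edge correction (e(·)≡1), uses an Epanechnikov kernel for k_b, and estimates the normalising constant c_t_f by the average of the integrated test function over all ordered pairs of points; these simplifications, the data layout (coordinates in an n x 2 matrix xy, curves as rows of Fh and Fl on a grid tgrid) and the function names are our own choices, so the sketch should not be read as the exact estimator implemented in the accompanying repository.

## Sketch of the ratio form of the kappa_{t_f}(r) estimator (no edge correction).
trapz <- function(tg, v) sum(diff(tg) * (head(v, -1) + tail(v, -1)) / 2)
epa   <- function(u, b) ifelse(abs(u) < b, 0.75 * (1 - (u / b)^2) / b, 0)  # kernel k_b

kappa_tf <- function(xy, Fh, Fl, tgrid, tf, r, b) {
  n <- nrow(xy)
  D <- as.matrix(dist(xy))
  off <- row(D) != col(D)                       # ordered pairs with i != j
  ## integrated test function ell_ij for every ordered pair of points
  ells <- matrix(0, n, n)
  for (i in 1:n) for (j in 1:n) if (i != j)
    ells[i, j] <- trapz(tgrid, tf(Fh[i, ], Fl[j, ]))
  c_hat <- mean(ells[off])                      # crude estimate of c_{t_f}(infinity)
  sapply(r, function(rr) {
    w <- epa(D[off] - rr, b)                    # kernel weights k_b(||x - x'|| - r)
    if (sum(w) > 0) (sum(ells[off] * w) / sum(w)) / c_hat else NA
  })
}

## Example (using t1 and t3 from the earlier sketch):
## r <- seq(0.01, 0.25, by = 0.01)
## gamma_hl <- kappa_tf(xy, Fh, Fl, tgrid, tf = t1, r, b = 0.02)
## kappa_hl <- kappa_tf(xy, Fh, Fl, tgrid, tf = t3, r, b = 0.02)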
§.§ Estimation of cross-function nearest-neighbour indices
Estimators for the cross-function nearest-neighbour indices can be derived analogously to Section <ref> by replacing the above test functions with their nearest-neighbour counterparts. Using the nearest-neighbour test functions t_1^nn and t_3^nn, the nearest-neighbour mark variogram and mark product correlation indices can be estimated through
γ^nn_hl=1/n∑^n_i=1ℓ(t_1^nn(f_h(𝐱_i),f_l(𝐳(i))))/c_t_1
and
κ^nn_hl=1/n∑^n_i=1ℓ(t_3^nn(f_h(𝐱_i),f_l(𝐳(i))))/(μ_hμ_l),
respectively.
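A compact sketch of these two indices in base R is given below; the nearest neighbours are obtained from the full distance matrix, the normalisation of the product index is applied pointwise in t before integrating over 𝒯, and the data layout follows the conventions of the earlier sketches, all of which are our own choices.

## Sketch: nearest-neighbour mark variogram and mark product correlation indices.
nn_indices <- function(xy, Fh, Fl, tgrid) {
  trapz <- function(tg, v) sum(diff(tg) * (head(v, -1) + tail(v, -1)) / 2)
  D <- as.matrix(dist(xy)); diag(D) <- Inf
  nn <- apply(D, 1, which.min)                    # index z(i) of each point's nearest neighbour
  mu_h <- colMeans(Fh); mu_l <- colMeans(Fl)      # non-spatial functional means
  g_t <- colMeans(0.5 * (Fh - Fl[nn, , drop = FALSE])^2)        # pointwise variogram index
  k_t <- colMeans(Fh * Fl[nn, , drop = FALSE]) / (mu_h * mu_l)  # pointwise correlation index
  c(gamma_nn = trapz(tgrid, g_t), kappa_nn = trapz(tgrid, k_t))
}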
§.§ Estimation of cross-function mark-weighted characteristics
Estimators of the cross-function mark-weighted K function can be obtained by normalising the function k_t_f(r),
k_t_f(r)=[∑^≠_𝐱,𝐱'∈ W ℓ(t_f(f_h(𝐱),f_l(𝐱'))) 1{‖𝐱-𝐱'‖≤ r}]/ν(W),
by the empirical version of the squared intensity λ^2 and a suitable normalising factor c_t_f corresponding to the specific test function used, with c_t_3=μ_hμ_l for t_f=t_3. The normalised estimator can then be transformed into the corresponding cross-function mark-weighted L function by taking the square root of K_t_f(r)/π. Likewise, the cross-function mark-weighted pair correlation function can be computed as g_t_f(r)=ϱ^(2)_t_f(r)/(λ^2 c_t_f), which becomes
g_hl(r)=ϱ^(2)_hl(r)/(λ^2 μ_hμ_l) when choosing t_3 as test function.
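For the unitype case, a direct base-R translation could look as follows; the sketch uses the product test function t_3, omits edge correction, and takes the integral of μ_h(t)μ_l(t) over the time grid as normalising constant, all of which are simplifying assumptions on our part.

## Sketch: cross-function mark-weighted K and L functions for the test function t_3.
markweighted_KL <- function(xy, Fh, Fl, tgrid, r, areaW) {
  trapz <- function(tg, v) sum(diff(tg) * (head(v, -1) + tail(v, -1)) / 2)
  n <- nrow(xy); D <- as.matrix(dist(xy))
  lambda <- n / areaW                                   # intensity estimate of the ground process
  cnorm  <- trapz(tgrid, colMeans(Fh) * colMeans(Fl))   # normalising constant for t_3
  ells <- matrix(0, n, n)                               # integrated products ell_ij
  for (i in 1:n) for (j in 1:n) if (i != j)
    ells[i, j] <- trapz(tgrid, Fh[i, ] * Fl[j, ])
  K <- sapply(r, function(rr)
    sum(ells[D <= rr & row(D) != col(D)]) / (areaW * lambda^2 * cnorm))
  list(r = r, K = K, L = sqrt(K / pi))
}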
For the multitype point pattern scenario with components ψ_i and ψ_j and function-valued marks f_h and f_l, the cross-function cross-type mark weighted K function can be estimated by dividing
k_ij,(r)=∑^≠_𝐱_i,𝐱_j∈ Wℓ(_f(f_h(𝐱_i)(t),f_l(𝐱_j)(t)))1{‖𝐱_i-𝐱_j‖≤ r}/ν(W)
by λ_iλ_j c_ with
λ_i denoting the intensity of the i-th component of ψ. Again, this function translates into the corresponding estimator of the L function through the square root of K. Similarly to the unitype case, a cross-function cross-type mark weighted pair correlation function can be calculated by computing g_ij,t(r)=ϱ^(2)_ij,(r)/λ_iλ_jc_.
§ A SIMULATION STUDY
We conducted a simulation study to investigate how our estimators of the cross-function summary characteristics behave not only under several point configurations (including random, clustered and regular structures) but, in particular, also under different mark scenarios, including spatially independent function-valued marks and positive or negative inter-dependencies between functions of type h and type l. Under positive interaction between functions, functions of type l grow or decrease when interacting with functions of type h, and vice versa, whilst under negative inter-dependencies, functions of type l grow or decrease when interacting with functions of type h, but functions of type h change in the opposite direction when interacting with functions of type l.
§.§ Generating point patterns with function-valued marks
To control for the effect of the inherent point configuration on the proposed estimators, we considered three distinct point process configurations including Poisson, cluster and regular point process scenarios. Each of these three cases were generated on the unit torus to avoid edge effects with an expected number of points n=200. To obtain a clustered point process structure, we simulated a Thomas process <cit.> with offspring dispersion parameter σ=0.04, parent intensity λ_p = 40 and μ=5 expected offsprings per parent yielding an average number of points of around 200 in the unit square with a moderate clustered configuration. The regular point process scenario was constructed using a Strauss process <cit.> with interaction effect parameter q=0.05, and a radius of interaction R_int=0.025 which ensures strong inhibition effects for short scales of interaction, with an average number of points of around 200.
To generate spatial point patterns in which each point location is augmented by a set of function-valued quantities, we consider the continuous space–time stochastic process developed by <cit.>.
In this model, marked points located on the unit torus grow
and interact with each other in terms of a suitable growth-interaction scheme.
We adapt this approach to avoid point mortality and point immigration. In this way,
we keep the same point pattern over time and their associated growth curves. Technically, the Renshaw and Särkkä algorithm generates a spatial point pattern with function-valued marks h and l through
f_h(𝐱)(t+dt)= f_h(𝐱)(t)+β_h f_h(𝐱)(t)(1-f_h(𝐱)(t)/S_h) dt + ∑^≠_(𝐱, f_h(𝐱)), (𝐱', f_l(𝐱')) ∈ψ J_h(f_h(𝐱)(t),f_l(𝐱')(t);‖𝐱-𝐱'‖) dt
and
f_l(𝐱)(t+dt)= f_l(𝐱)(t)+β_l (1-f_l(𝐱)(t)/S_l) dt + ∑^≠_(𝐱, f_l(𝐱)), (𝐱', f_h(𝐱')) ∈ψ J_l(f_l(𝐱)(t),f_h(𝐱')(t);‖𝐱-𝐱'‖) dt
where f_h(𝐱)(t) and f_l(𝐱)(t) are the two function-valued marks of point 𝐱 at time t, ψ is a realisation of Ψ, β_h (resp. β_l) the intrinsic rate of growth, S_h (resp. S_l) the non-spatial carrying capacity, ‖𝐱-𝐱'‖ the Euclidean distance between a pair of points, and
J_h(·) (resp. J_l(·)) a suitable interaction function between points. Note that functions of type h and l grow in terms of the classic logistic growth and the immigration-death process, respectively. These two simple growth equations ensure that both functions remain bounded.
To generate positive correlation between functions, we use a Strauss like symmetric interaction function <cit.> adapted to the case where the interaction is between the marks f_h and f_l,
J(f_h(𝐱)(t),f_l(𝐱')(t);‖𝐱-𝐱'‖)=
c if ‖𝐱-𝐱'‖< D, and 0 otherwise,
where c∈ℝ is a constant interaction effect. Here, points start to interact with each other with
constant value c as soon as their distance is less than D. To ensure a symmetric interaction structure, we set J_h(·)=J_l(·)=J(·), so that smaller function values affect the growth of larger ones
in the same way as larger function values affect smaller ones. To avoid interaction effects decreasing function values, we set c>0; c<0 implies function reduction and eventually negative function values.
Moreover, to generate negative correlation between functions,
we take J_h(·)=J(·) and J_l(·)=0. Now functions of type h gain an advantage when interacting with functions of type l (faster growth), whilst functions of type l are not affected by the interaction with functions of type h. This promotes an asymmetric function interaction, resulting in negative spatial correlation between growth functions of distinct type.
To generate spatial point patterns with function-valued marks, we consider expressions (<ref>) and (<ref>) with growth carrying capacity S_h=S_l=5, intrinsic rates of growth β_h=0.05, β_l=0.2 and interaction distance D=0.05. These scenario parameters are chosen as they give rise to functions that are convenient as illustrative examples. Moreover, to obtain the desired marked point patterns with spatially independent and/ or positive correlation between function-valued marks, we consider the interaction mechanism (<ref>) for J_h(·)=J_l(·)=J(·), with interaction parameter c=0 and c=0.5, respectively. Whilst to generate negative spatial correlation between functions we assume the same interaction function (<ref>), but for J_h(·)=J(·) and J_l(·)=0 with c=0.5.
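The growth-interaction scheme can be discretised with a simple Euler step, as sketched below for fixed point locations on the unit torus. The step size, the initial values and the argument names are our own choices; c_h and c_l play the role of the interaction constants entering J_h and J_l, so that c_h=c_l=0 yields independent marks, c_h=c_l>0 the positively correlated scenario and c_h>0, c_l=0 the negatively correlated one.

## Sketch: Euler-type simulation of the growth-interaction model for fixed locations xy
## on the unit torus; returns n x nsteps matrices of curves of type h and type l.
simulate_curves <- function(xy, nsteps = 140, dt = 0.1,
                            beta_h = 0.05, beta_l = 0.2, S_h = 5, S_l = 5,
                            D = 0.05, c_h = 0.5, c_l = 0.5, f0 = 0.1) {
  n  <- nrow(xy)
  dx <- abs(outer(xy[, 1], xy[, 1], "-")); dx <- pmin(dx, 1 - dx)   # toroidal x-distance
  dy <- abs(outer(xy[, 2], xy[, 2], "-")); dy <- pmin(dy, 1 - dy)   # toroidal y-distance
  A  <- sqrt(dx^2 + dy^2) < D; diag(A) <- FALSE                     # interaction neighbourhoods
  nnb <- rowSums(A)                             # number of interacting neighbours per point
  Fh <- Fl <- matrix(f0, n, nsteps)
  for (s in 2:nsteps) {
    fh <- Fh[, s - 1]; fl <- Fl[, s - 1]
    Fh[, s] <- fh + (beta_h * fh * (1 - fh / S_h) + c_h * nnb) * dt   # logistic growth + interaction
    Fl[, s] <- fl + (beta_l *      (1 - fl / S_l) + c_l * nnb) * dt   # immigration-death + interaction
  }
  list(Fh = Fh, Fl = Fl, tgrid = (seq_len(nsteps) - 1) * dt)
}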
Figure <ref> shows the results for the homogeneous Poisson point process scenario with intensity λ=200. The red lines are the empirical cross-function mark summary characteristics from a single simulation. The grey shading shows the fifth-largest and smallest envelope values based on 199 random simulations according to the null hypothesis of random labeling of functions over fixed point locations. Here, we consider three correlation function scenarios, namely, spatial independence between functions (left), positive (middle) and negative (right) correlation between functions. This highlights that in absence of interaction between functions, the resulting estimators of both the cross-function mark variogram (<ref>) (top panels) and the cross-function mark correlation (<ref>) (bottom panels) lie within the grey shading area, confirming the spatial independence between functions of type h and l. In direct contrast, when assuming spatial positive or negative interaction between curves (central and right panels, respectively), these estimators lie outside this grey shading area, confirming the presence of inter-function dependencies. In particular, under positive correlation of the function-valued marks, the empirical
cross-function mark variogram lies outside this grey shading area with values smaller than the smallest envelope values, for small r values. This suggests that the positive interacting function-valued marks have less variability than under the independent mark setting. Similarly, under negative correlation between functions, estimators of both the cross-function mark variogram and the mark correlation lie outside the grey shading area with values larger than the largest envelope values, for small r values, suggesting negative interactions between functions.
Similar results can be found for the Thomas (Figure <ref>) and the Strauss process scenarios (Figure <ref>). In absence of inter-function dependencies both estimators (cross-function mark variogram and mark correlation) lie within the grey shading area, whilst for positive or negative correlation effects between functions these functions lie outside these envelopes. This confirms that the new cross-function mark summary characteristics can detect spatial dependencies between functions of distinct type independently of the spatial structure of the underlying point pattern.
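The grey envelopes described above can be reproduced by a simple random-labelling scheme in which the pairs of curves are permuted jointly over the fixed point locations and the characteristic is recomputed for each permutation. The sketch below reuses the kappa_tf function from the earlier sketch and extracts the fifth-largest and fifth-smallest simulated values from 199 permutations; the function and argument names are ours.

## Sketch: pointwise envelopes under random labelling of the function-valued marks.
random_label_envelope <- function(xy, Fh, Fl, tgrid, tf, r, b, nsim = 199, rank = 5) {
  obs  <- kappa_tf(xy, Fh, Fl, tgrid, tf, r, b)        # observed characteristic
  sims <- replicate(nsim, {
    p <- sample(nrow(xy))                              # joint permutation keeps (f_h, f_l) pairs intact
    kappa_tf(xy, Fh[p, , drop = FALSE], Fl[p, , drop = FALSE], tgrid, tf, r, b)
  })
  list(r = r, obs = obs,
       lower = apply(sims, 1, function(v) sort(v)[rank]),
       upper = apply(sims, 1, function(v) sort(v, decreasing = TRUE)[rank]))
}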
§ APPLICATIONS
§.§ Application to Swiss tree data
As a first data application, we consider tree measurements recorded on an annual basis over 14 years that originate from a long-term irrigation experiment located in Pfynwald, the central part of the Pfyn-Finges national park in Switzerland <cit.>. Initiated in 2003, the experiment aimed to investigate the effect of increased water availability on the individual trees and the ecosystem in a naturally dry Scots pine (Pinus sylvestris L.) forest. The study region covers an area of 1.2 ha and is located in one of the driest inner-Alpine valleys of the European Alps <cit.>. The data at hand was provided as open data under an Open Database License and has been made publicly available at <https://opendata.swiss>. It covers the tree-specific spatial coordinates, the initial assignment into the treatment or control group and different tree characteristics for 900 trees. From this source, we initially selected the annual total crown defoliation (TCD) from the provided list of tree characteristics and also the exact point locations of the individual tree stands. TCD is a commonly used parameter in forest monitoring studies to quantify the loss of needles or leaves of a given tree relative to a local reference tree. Within the application, we considered the retrieved TCD information as a function-valued tree attribute and assigned it as a mark to the tree locations in a subsequent step. Restricting the data to complete cases, we excluded any trees with incomplete or missing TCD information, yielding a final sample of 799 trees with annual TCD records over all 14 years. In the next step, we computed the local pair correlation function for all trees of the reduced sample, which describes the contribution of the individual point to the empirical pair correlation function, i.e. its pair correlation function based local indicator of spatial association. The local information was then used as a second function-valued mark in our application such that each tree was marked by two distinct function-valued quantities. The resulting point pattern with both function-valued marks and classic second-order summary characteristics of the points are shown in Figure <ref>. While not considered here, we note that the data also allows for cross-function cross-type versions as outlined in Section <ref> by additionally taking the tree-specific assignment into the treatment or control group into account. Such advanced mark characteristics might help to investigate the complex interplay of the TCD and local pair correlation function curves with the effect of additional water supply.
As expected by the large number of trees, the sampled point pattern reflects some clear structure and a tendency of clustering among the points. This impression is supported by the pair correlation function (left bottom panel) and also Ripley's K function (right bottom panel) which show a clear positive shift of the empirical curves from the theoretical lines under the complete spatial randomness hypothesis which indicates a clear tendency of clustering.
Next, to evaluate the findings of the proposed auto- and cross-function summary characteristics with the classic summary characteristics for scalar-valued marks commonly used at present, we transformed the function-valued marks into function-wise averages and computed the mark variogram and Stoyan's mark correlation function from the averaged quantities (see Figure <ref>). The empirical versions of both mark characteristics show a clear deviation from the theoretical envelopes for the average TCD (top panel). While the mark variogram (left top panel) suggests that the mean TCD values exhibit less pairwise variation as expected under the independent mark hypothesis, we found a clear positive shift of the empirical pairwise product of TCD averages as considered by the mark correlation function (right top panel) from the theoretical envelopes. In comparison with the TCD, both empirical mark characteristics show almost no deviations from the independent mark hypothesis in case of the averaged local pair correlation function (bottom panels). Except only some negative shift of the mark variogram (left) at small distances, both estimated characteristics are covered by the envelopes.
Different from the classic mark characteristics, all auto- and cross-function mark variograms and correlation functions of Figure <ref>, except the auto-function mark correlation of the local pair correlation (central right panel), show significant results. As already indicated by the classic characteristics, the top panel corresponding to the TCD curves reflects again a negative deviation of the empirical auto-function mark variogram (left top panel) contrasted with a clear positive shift of the empirical auto-function mark correlation function (right top panel) from the theoretical lines under the independent mark hypothesis. This indicates that the observed TCD curves show less spatial variation among pairs of neighbouring points. At the same time, the product of the TCD curves clearly exceeds the expected case, i.e. the non-spatial functional mean squared. For the central panels showing the auto-function characteristics computed from the local pair correlation functions, the auto-function mark variogram (left) again suggests smaller variation between the function-valued marks compared to the independent mark setting for some small distances. Finally, looking at the cross-function characteristics of the TCD and local pair correlation curves, both results show a clear variation from the independent mark envelopes. This would imply that the pairwise spatial variation of both functions is smaller than under the limiting case where the cross-function variogram is equal to the covariance, whereas the pairwise product of the two marks exceeds the limiting case in which the pairwise product of the two marks approaches the product of the functional means μ_h and μ_l.
§.§ Application to Spanish labour data
As second example of a spatial point pattern with bivariate function-valued marks we considered data on the total number of companies as of January, 1st and the number of residents recorded annually at municipality level for the period from 2012 to 2022. The data originated from the official data reports released by the National Statistics Institute of Spain (INE) and was made publicly available at <www.ine.es>. The business information was derived from the official business register of INE and corresponds to the total number of local companies over different economic sectors. The local assignment of the companies to exactly one municipality was performed by INE in a pre-processing step using the registered business address information to avoid potential inconsistencies in case of regionally wide spreading business locations, e.g. factories or business facilities of one company in several distinct municipalities. From the provided data we initially selected a sample of 87 municipalities that fall into the boundaries of Albacete, a Spanish province on La Mancha (the Spanish Plateau). The area of La Mancha is located southeast of Madrid and is characterised by a homogeneous climate and population density, and as such, has been treated as a particular instance of a homogeneous spatial point process in the literature <cit.>. From the selected file we excluded the municipalities of Masegoso, Montalvos and Villa de Ves in a subsequent action for which no economic information was available. This yielded a final sample of 84 Spanish municipalities to which we applied the following pre-processing. In a first step, we derived the exact spatial location of the centroids for each municipality in the sample and assigned the corresponding pair of coordinates to the data. Next, we generated two function-valued attributes from the provided local business and population statistics by computing the pointwise yearly differences between the values from 2012 to 2021 and the reference records of 2022. As such, both generated marks express the annual change in the size (resp. number) of the local business sector (resp. population) with respect to 2022. All information was then transformed into a spatial point process with function-valued marks in a final step. The generated point pattern and classic second-order point process summary characteristics of the points are shown in Figure <ref>.
Different from the Swiss tree data example, the Spanish point pattern appears to be less dense and clustered, with both the constructed business (<ref>, upper left panel) and population (<ref>, upper right panel) marks showing some heterogeneity. Both the empirical pair correlation function (bottom left panel) and Ripley's K function minus π r^2 indicate a clear tendency to clustering for the point locations, which supports the visual impression.
As for Section <ref>, we computed the means from the business and population curves and used the scalar information as input for classic mark summary characteristics (see Figure <ref>). Due to the presence of negative mark values, the unnormalised version, i.e. the conditional mean product of marks c_mm(r), was computed instead of the mark correlation function k_mm(r).
For the averaged business variation (top panels) and the mean variation of the population (bottom panels) both the mark variogram and mark correlation function are completely covered by the envelopes, supporting the independent mark hypothesis.
Comparing these findings with the results of the auto- and cross-function mark characteristics depicted in Figure <ref>, the independence hypothesis is not supported by the auto-function mark variogram for population (central left panel) and the cross-function mark variogram of business and population (bottom left). Both functions instead suggest less pairwise variation between the generated population differences, resp. the business and population differences, than expected under the independent mark assumption. As both mark summary characteristics for the change in size of the business sector (top panels) are included in the envelopes, the significant results of the cross-function mark variogram seem to be driven mostly by the pairwise variation of the population curves.
§ CONCLUSIONS
This paper proposes a wide variety of mark summary characteristics which allow one to assess the potential structure of the marks within highly challenging spatial point process scenarios. Including cross-function, multi-function and corresponding mark-weighted versions of well-established mark summary characteristics, the extended methods provide a suitable statistical toolbox for the analysis of spatially aligned function-valued quantities in a plethora of potential applications. Formalised through generalisations of classical test functions to the (multivariate) function-valued mark scenario, the proposed characteristics are well embedded into the statistical literature and methodology for spatial point patterns with real-valued marks and allow for similar interpretations.
The considered estimators are natural extensions of those proposed in <cit.> to the more complex function-valued case and are technically supported by the theoretical treatments in <cit.>; we rely on this latter contribution to support the behaviour of the proposed estimators. However, a number of doors remain open, in both theoretical and inferential respects, for work that starts from our developments and provides further tools for complex mark structures. One such example is the case of trajectories restricted to a network-based topology, where additional topological arguments need to be considered when developing further tools and their estimators.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge financial support through the German Research Association and the
Spanish Ministry of Science. Matthias Eckardt was funded by the Walter Benjamin grant 467634837 from the German Research Foundation. Jorge Mateu was partially funded by grant PID2019-107392RB-I00, from the Spanish Ministry of Science. Carles Comas was partially funded by grant PID2020-115442RB-I00 from MCIN/AEI/ 10.13039/501100011033 (Spanish Ministry of Science)
Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact
======================================================================================
Markus Dahlmanns, Constantin Sander, Robin Decker, Klaus Wehrle
Communication and Distributed Systems, RWTH Aachen University Aachen Germany
{dahlmanns, sander, decker, wehrle}@comsys.rwth-aachen.de
[email protected]
RWTH Aachen University
Germany
[email protected]
RWTH Aachen University
Germany
[email protected]
RWTH Aachen University
Germany
[email protected]
RWTH Aachen University
Germany
Containerization allows bundling applications and their dependencies into a single image.
The containerization framework Docker eases the use of this concept and enables sharing images publicly, gaining high momentum.
However, it can lead to users creating and sharing images that include private keys or API secrets—either by mistake or out of negligence.
This leakage impairs the creator's security and that of everyone using the image.
Yet, the extent of this practice and how to counteract it remains unclear.
In this paper, we analyze numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax other private registries unveiling that pctaffectedimages of images indeed include secrets.
Specifically, we find validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches leaked API secrets, both opening a large attack surface, i.e., putting authentication and confidentiality of privacy-sensitive data at stake and even allowing active attacks.
We further document that those leaked keys are used in the wild:
While we discovered casignedcerts certificates relying on compromised keys being issued by public certificate authorities, based on further active Internet measurements, we find 20220901numuniquehosts TLS and SSH hosts using leaked private keys for authentication.
To counteract this issue, we discuss how our methodology can be used to prevent secret leakage and reuse.
CCS Concepts: Security and privacy → Network security; Security and privacy → Key management.
§ INTRODUCTION
While originally developed to isolate applications <cit.>, containerization has become a new cornerstone of interconnected services as it significantly eases their deployment <cit.>.
To this end, Docker, the most prominent containerization framework <cit.>, uses prebuilt images that include all software dependencies necessary to deploy an application <cit.>.
Users only need to download an image from a registry or can derive their own image by adapting its configuration and included files.
These new images can then again be uploaded building a whole ecosystem of containerized applications.
For example, Docker Hub, the official Docker registry, comprises more than 9000000 images <cit.> anybody can use.
With this level of public exposure, any mistake during image creation can have drastic consequences.
Most notably, including confidential secrets such as cryptographic keys or API secrets, by mistake or out of negligence, can introduce two security issues:
[(i)]
* attackers can misuse compromised secrets leading to potential loss of data, money, privacy, or control, and
* administrators instantiating images can rely on broken security, e.g., paving the way for Man-in-the-Middle attacks.
Aggravatingly, there is no easy tooling to show which files have been added—accidentally adding a secret is thus much easier than identifying such an incident.
Indeed, related work traced three reused private keys authenticating 6000 (Industrial) Internet of Things services back to the occurrence in a Docker image <cit.>.
Additionally, blog entries produced anecdotal evidence that Docker images include further confidential security material <cit.>.
However, comprehensive analyses on revealed security secrets at scale do not exist in this realm.
Instead, such analyses focus on GitHub repositories <cit.>.
Hence, the extent for container images is unknown.
In this paper, we thus comprehensively study whether Docker images include confidential security material and whether administrators reuse these compromised secrets at large scale by
[(i)]
* scanning publicly available Docker images for confidential security material, and
* measuring whether these secrets are used in practice on production deployments.
To this end, we analyze images available on the official and largest registry Docker Hub as well as examine the entire IPv4 address space for public registries and services relying their security on compromised secrets.
Contributions Our main contributions are as follows.
* We found privatemeasurementnumtotalmax Docker registries in the IPv4 address space that contain not only secrets but also potentially confidential software and likely allow attackers to replace images, e.g., with malware.
* After filtering test secrets, we identified totalvalidmatches leaked distinct secrets, i.e., validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets, in numaffectedimages images (pctaffectedimages of images we scanned are affected).
* We show that operators use 20220901corrFingerprint compromised private keys in practice affecting the authenticity of 20220901numuniquehosts Internet-reachable hosts providing, i.a., HTTP, AMQP, MQTT, and LDAP services.
* We discuss improvements of the Docker paradigm to prevent secret leakage and reuse in the future as well as provide our software used to find and verify secrets <cit.> to support mitigation.
§ A PRIMER ON THE DOCKER PARADIGM
In contrast to other containerization frameworks, Docker <cit.> does not only provide an isolated execution environment for applications.
Instead, Docker specifies an easy-to-use paradigm to create, share and deploy ready-to-run container images <cit.>.
These images constitute the filesystems of the containers and include all dependencies necessary for the actual applications, i.e., they can include all kinds of files added during creation.
The completeness of these images allows to share them via (publicly accessible) registries.
Figure <ref> shows the structure and lifecycle of Docker images in detail, from creating images to sharing and running them.
Image Creation
To create an image, Docker uses a user-defined Dockerfile <cit.> to specify the image ingredients.
First 1, the Dockerfile references another image, the base image, which is downloaded from a registry and comprises the initial file system of the new image.
Second 2, image layers consisting of differential snapshots of the file system after running commands from the Dockerfile are created and stacked on each other <cit.>.
These commands can include shell statements to, e.g., compile an application running in the container.
Furthermore, specific commands exist to embed environment variables or to add files from the host system into the image <cit.>.
While the files can be, e.g., source code or further dependencies, image creators can also easily and accidentally include (cryptographic) secrets into the image or its environment variables, putting the service's security at risk when leaked.
Once an image has been fully created, it exists as a self-containing unit, which is ready-to-run but also allows little insight on what has been added.
Image Push
After generating the image, creators can push it to a registry <cit.>, e.g., the official and largest registry Docker Hub <cit.>, allowing them to easily deploy containers across their own fleet of servers, but also to share the image with other users <cit.>.
To this end, the image layers are uploaded to the registry under a repository name and tag 3.
Thereby, the repository name typically represents the application in the image, and the tag describes a version.
Conventionally, creators tag the newest image in a repository with .
Container Deployment
To run a Docker container, users pull an image from a registry.
When pulling, users first request an image manifest <cit.> from the registry, including meta information about the image and its layers.
After downloading all layers 4, Docker merges the content composing the file system for the new container 5 <cit.>.
The application then finds an unchanged file system with all content provided by the image creator, i.e., all dependencies but also potentially added secrets, and can very likely provide services to the public Internet.
Since numerous containers of various users can be based on a single image, included, and thus compromised, secrets could affect several deployments.
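For illustration, the following minimal Python sketch mimics the pull steps 4 and 5 against the Registry HTTP API v2; the registry URL and repository name are placeholders, authentication and content verification are omitted, and the snippet is not our measurement tool.

# Minimal sketch of the pull steps (4) and (5): fetch a manifest from a
# registry and download the referenced layers. Registry and repository
# are hypothetical; production pulls additionally handle auth tokens,
# multi-architecture manifest lists, and digest verification.
import requests

REGISTRY = "https://registry.example.org"   # hypothetical registry
REPO, TAG = "myapp", "latest"

manifest = requests.get(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
).json()

for layer in manifest.get("layers", []):
    digest = layer["digest"]                # e.g., "sha256:..."
    blob = requests.get(f"{REGISTRY}/v2/{REPO}/blobs/{digest}")
    with open(digest.replace(":", "_") + ".tar.gz", "wb") as f:
        f.write(blob.content)               # each blob is a (gzipped) layer tarball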
The Docker paradigm eases distribution and deployment of applications.
However, insight into what is added in images and up- or downloaded from a registry can be lost.
Thus, secrets can be leaked and reused, impairing Internet-reachable services at scale.
§ RELATED WORK
Three streams of research motivate our analysis of confidential security material in Docker images: studies that detect leaked security material, research on publicly available Docker images, and Internet-wide scans disclosing security weaknesses at scale.
Actively Leaked Security Material
Currently, the search for leaked security material focuses on code repositories.
Several studies detected the leakage of passwords <cit.>, SSH private keys <cit.>, Amazon Cloud API keys <cit.>, and Slack API keys <cit.>, using the built-in search of GitHub.
To allow broader searches, researchers employed regular expressions but focused on specific file types <cit.> or code snippets <cit.>, i.e., the scale of this research was limited.
In contrast, Meli et al. performed a large scale study without focusing on specific file types, showing that ∼3.5 of the 4 analyzed code repositories on GitHub included leaked secrets <cit.>.
Further approaches use machine learning to improve the detection by relying on code semantics <cit.>, false-positive detection <cit.>, or both requiring further user input <cit.>.
Away from GitHub, research proposed methods to investigate various platforms <cit.> and proved the presence of secrets in publicly available Android apps <cit.>.
A recent study underlines that most developers experienced secret leakage, and guidelines are insufficient for prevention <cit.>.
While retroactively deleting leaked secrets does not help <cit.>, (non)-commercial approaches, e.g., GitGuardian <cit.>, TruffleHog <cit.>, or Gitrob <cit.>, aim at preventing secret leakage for Git.
Docker Images
Besides Git, researchers and developers early on assumed, without evidence, that secrets leak via images for virtual machines or Docker, and provided countermeasures <cit.>.
Nevertheless, non-academic Web-blog studies <cit.> still find leaked secrets in images on Docker Hub.
However, these studies either limit their scale <cit.> to a few thousand images/secrets or restrict their methodology <cit.> to process large amounts of available images.
The latter study <cit.> finds 46076 affected images among 6.3 images on Docker Hub, but only considers information available in Dockerfiles, e.g., specific file paths.
Meanwhile, SecretScanner <cit.>, a smaller secret search tool, implements a function allowing users to find secrets in Docker images.
Still, a comprehensible, large-scale, and methodology-driven analysis on introduced security weaknesses by leaked security material is missing.
Instead, large-scale studies on Docker images focused on data compression <cit.>, software vulnerabilities <cit.>, or typosquatting of image names <cit.>.
Hence, as of now, it is unclear how widespread secret leakage is in images on Docker Hub as well as private Internet-reachable registries.
Moreover, it is unknown to what extent these compromised images are then used on the Internet and whether they weaken security at scale.
Internet Measurements
For understanding deployment security at scale, Internet-wide measurements have been a valuable tool in the past.
Internet scan services, such as Shodan <cit.> or Censys <cit.>, fetch and publish meta-information, e.g., security configurations, on Internet-reachable services.
Although these services often helped researchers analyze the security of connected devices, e.g., cars <cit.> or (insecure) Industrial IoT (IIoT) deployments <cit.>, they usually do not see all deployments <cit.>.
Hence, researchers frequently conduct their own active Internet measurements, e.g., using ZMap <cit.>.
On the web, these measurements made it possible to analyze the deployment of new TLS versions <cit.> and revealed widespread security configuration mistakes <cit.> or implementation deficits <cit.>.
Aside from the web, researchers assessed the security of SSH services <cit.> and key-value stores leaking confidential data <cit.>.
For the IoT and IIoT, research revealed many deployments relying on vulnerable software <cit.> and communicating without any security mechanism <cit.>, e.g., access control.
Even with built-in security features, operators often configure such services insecurely <cit.>.
For example, a massive reuse of certificates was traced back to a Docker image including certificates and corresponding private keys <cit.> jeopardizing the authenticity of numerous deployments.
Based on this, we claim that it is probable that there are further public Docker images that wrongly include confidential secrets and harm security on the Internet—especially when looking at the sheer size of Docker and Docker Hub.
Although the broad leakage of security secrets in code repositories is well understood, the spread of revealed secrets in Docker images and the introduced security risk for the Internet are unknown.
However, known secret leakage detection techniques and Internet measurements are predestined to shed light on these issues.
§ COMPOSING OUR DATASET
To answer whether Docker image creators actively compromise security secrets by publishing them in openly available Docker images, we set out and retrieve images from Docker Hub (Section <ref>) and publicly reachable private registries (Section <ref>).
§.§ Retrieving Images from Docker Hub
Table <ref> guides through our composition process on Docker Hub, which has three tasks:
[(i)]
* composing a list of repositories,
* selecting one image per repository to widely spread our analysis, and
* identifying layers the images consist of.
§.§.§ Repositories
While Docker Hub limits the number of image downloads <cit.> and we cannot download and analyze all 15 of images available on Docker Hub <cit.> due to runtime and bandwidth restrictions, our analysis requires a selection of repositories of interest.
Furthermore, Docker Hub does not support listing all available images to choose from.
Hence, we use specific search terms to get images users retrieve when searching via the Web interface.
Our search terms (which we elaborate in more detail in Appendix <ref>) build two query groups (Table <ref> (left));
Standard comprises mainstream communication protocol names <cit.> and frequently used technologies <cit.> for a wide analysis of images referencing current issues.
For comparison and more focusing on a specific area, we choose the Industrial Internet of Things (IIoT) as past studies showed a great susceptibility to security faults <cit.>, i.e., IIoT includes protocol names from this area.
We list the number of repositories covered by our analysis per query group, i.e., the sum of found repositories of all search terms of a group, in Table <ref> (column Repositories-#).
To further convey the prevalence of our search terms, we indicate the minimum, maximum, and 25-, 50-, and 75-percentiles of search results for included terms, i.e., higher values of lower percentiles would imply a higher prevalence.
While both query groups contain terms that lead to no results (min), i.e., the term is not mentioned in any repository name or description, terms in the standard group generate more results due to their closer correlation to frequently used technologies than IIoT protocols (p_25, p_50, p_75).
Docker Hub's API limits the number of results to 10000 (max).
As different search terms lead to overlapping repositories, we further report on the distinct number of repositories gradually, i.e., per query group, and overall.
In total, we gathered distinctnumrepooverall distinct repositories subject to our study of which standarddistinctpctrepopergrouponly are uniquely added by our standard search terms and iiotdistinctpctrepopergrouponly by IIoT related search queries.
§.§.§ Images
Table <ref> (column Images-#) indicates how many images were available in total over the distinct repositories of a search group.
While repositories mostly contain different images, including the same software in other versions and thereby comprising similar files, we choose to analyze one tag per repository to spread our analysis as widely as possible.
Here, we select images tagged with latest, which is used as Docker's default and typically includes the newest version of an image.
However, not all repositories contain images tagged with latest (as shown in Table <ref> (column Images-latest)).
Here, we select the image with the latest changes (as reported by Docker Hub's API).
Empty repositories (Table <ref> (column Images-none)), i.e., repositories that have no image layers available, cannot include any secrets.
Besides the number of images that are covered by our study (column Images-analyzed), we also report on the age of the images to analyze how long they are already available on Docker Hub.
The ages of images included in both query groups roughly follow the same distribution, indicating that, although the number of images found by our IIoT-related queries is lower, image creators update their images at the same frequency as the creators of images included in our Standard group.
§.§.§ Layers
While we report on the number of layers included in all images (Table <ref> column Layers-#), different images often share the same layers, e.g., layers from frequently used base images.
Hence, to speed up our search for leaked secrets, we analyze each distinct layer only once.
We show the distinct number of layers gradually, i.e., per query group, and overall.
To cover all distinctnumrepooverall repositories, we analyze distinctnumlayersoverall layers.
(standarddistinctpctlayersgroup uniquely added by Standard-related, iiotdistinctpctlayersgroup by IIoT-related repositories).
§.§ Images from Private Docker Registries
Since image creators might prefer to upload sensitive images to private registries, we want to include images from these registries in our analysis.
Table <ref> shows our steps taken to extend our dataset with images from private registries, i.e., we search private registries, and, subsequently, include a subset of available layers.
§.§.§ Find Private Registries and Repositories
To find publicly reachable Docker registries, we scan the complete IPv4 address space for services running on the standard port for Docker registries, i.e., TCP port 5000, under comprehensive ethical measures (cf. Appendix <ref>) twice to analyze short-term fluctuations (Table <ref> (left)).
Both times, we perform a TCP SYN scan using <cit.>, identifying hosts running a service behind this port and subsequently send an HTTP request as defined by Docker's Registry API <cit.> for verification.
Whenever we do not receive a valid HTTP response, we retry via HTTPS.
While we found up to privatemeasurementnumtotalmax private registries on privatemeasurementdatemax, the difference in found registries in comparison to our scan on privatemeasurementdatemin is due to registries in Amazon AWS-related ASes that do not reply after our first scan anymore.
Since these registries only contain the same and single image (uhttpd), they might relate to another research project, e.g., implementing a registry honeypot.
In contrast to Docker Hub's API, the API of private registries allows listing available repositories without search terms.
However, we limit our requests to receive a maximum of 100 repositories per registry to prevent any overloads.
As such, the found private registries provide privatemeasurement220801repositorysum resp. privatemeasurement220806repositorysum repositories.
Since the registries do not implement access control for read access, clients are able to download all included images.
Notably, by default, write access is also not restricted <cit.>, i.e., attackers might be able to inject malware.
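The registry check and catalog listing can be approximated by a short Python sketch like the following; the host handling, the 100-repository cap, and the error handling are simplified stand-ins for our actual scanning pipeline.

# Sketch of the registry verification and repository listing described
# above; "host" is a placeholder for an address answering on TCP port 5000.
import requests

def check_registry(host: str, timeout: float = 5.0):
    for scheme in ("http", "https"):             # retry via HTTPS if needed
        base = f"{scheme}://{host}:5000"
        try:
            r = requests.get(f"{base}/v2/", timeout=timeout, verify=False)
        except requests.RequestException:
            continue
        if r.ok:                                  # looks like an open Registry API v2
            cat = requests.get(f"{base}/v2/_catalog", params={"n": 100},
                               timeout=timeout, verify=False)
            return cat.json().get("repositories", [])
    return None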
Since these images are publicly available on private registries but were not filtered by any search terms, their content is of special interest.
Here, the repository name often indicates the image's content and thus allows conclusions on widely distributed applications, i.e., over both measurements, uhttpd is the most reoccurring repository name (reoccurring privatemeasurement0sum times, but only during our first scan).
The repository names on the second and third place, i.e., nginx and redis, indicate proxy and cloud services where image creators might have included security secrets before uploading them to their registry.
Beyond the scope of security secrets, other repository names occurring less often, e.g., or , imply that image creators might include confidential software, source code, private data, or information on systems especially worthy of protection in openly available Docker images.
§.§.§ Image and Layer Selection
For all found repositories, we collect the lists of available images and their tags (Table <ref> (center)).
Although private registries typically do not implement any rate limiting like Docker Hub, we do not want to overload found registries or their Internet connections.
Hence, to spread our analysis as far as possible but limit the load on each registry, we choose one tag per image.
Similar to our selection process on Docker Hub, we typically select the image tagged as latest in each repository to download the corresponding manifest.
Whenever no latest image is available, we sort all available images naturally by their tag (to account for version numbers as tags) and select the maximum (i.e., the newest version), as the API does not provide any information on the latest changes.
Subsequently, we download the corresponding image manifests to retrieve accompanying layers.
To further limit the load on the Internet connections of found registries, we do not download all available layers to search for included secrets.
Instead, we randomly select layers of chosen images such that the sum of their sizes does not exceed 250 per registry and per measurement.
All in all, we added privatenumdistinctlayersselected layers from private registries to our dataset.
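A simplified Python sketch of this per-registry selection, combining the tag handling with the size budget from above, could look as follows; the unit of the budget (assumed to be megabytes) and the helper names are our own placeholders.

# Sketch of our per-registry selection: pick the "latest" tag if present
# (otherwise the naturally largest tag), then sample layers at random until
# a size budget is exhausted. Endpoint paths follow the Registry API v2.
import random
import re
import requests

BUDGET = 250 * 1024 * 1024  # assumed to be 250 MB; the unit is not stated above

def natural_key(tag: str):
    # split into text and number chunks so "2.10" sorts after "2.9"
    return [int(p) if p.isdigit() else p for p in re.split(r"(\d+)", tag)]

def select_layers(base: str, repo: str):
    tags = requests.get(f"{base}/v2/{repo}/tags/list").json().get("tags") or []
    if not tags:
        return []
    tag = "latest" if "latest" in tags else max(tags, key=natural_key)
    manifest = requests.get(
        f"{base}/v2/{repo}/manifests/{tag}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    ).json()
    layers = manifest.get("layers", [])
    random.shuffle(layers)
    picked, total = [], 0
    for layer in layers:
        size = layer.get("size", 0)
        if total + size > BUDGET:
            continue                      # skip layers that would exceed the budget
        picked.append(layer["digest"])
        total += size
    return picked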
In parallel to Docker Hub, numerous private registries exist that provide images to the public.
Overall, we assemble a dataset of numconsideredlayersoverall layers from numnonemptyimages images subject to our future research.
Furthermore, private registries might allow attackers to, e.g., inject malware, potentially infecting container deployments at scale as well.
§ LEAKED SECRETS IN DOCKER IMAGES
Next, we search in considered images for included secrets (Section <ref>), discuss the origin of affected images to later evaluate remedies (Section <ref>), and analyze also found certificates compromised due to private key leakage to estimate arising risks (Section <ref>).
§.§ Searching for Secrets
To analyze available images for included secrets, we align our approach to established methods <cit.>, i.e., we choose and extend regular expressions identifying specific secrets and match these on files and environment variables.
Additionally, we extensively filter our matches to exclude false positives.
§.§.§ Regular Expression Selection
We base our selection of regular expressions on previous work to find secrets in code repositories <cit.> (we further elaborate on our election process and expressions in Appendix <ref>).
Table <ref> (left) names the domains of secrets that our selected expressions match and indicates how attackers could misuse these secrets.
We start with regular expressions composed by Meli et al. <cit.> due to their selection of unambiguous expressions (reducing false positives) matching secrets with a high threat when leaked.
We extend their expressions for private keys to match a larger variety, e.g., also OpenSSH private keys.
Moreover, we widen the set by expressions matching API secrets of trending technologies <cit.> based on match rules from TruffleHog <cit.>.
However, TruffleHog's rules are relatively ambiguous and incur many false positives, which TruffleHog filters by validating the API secrets against their respective endpoints.
As our ethical considerations do not allow for any further use of the secrets (cf. Appendix <ref>), we focus on rules which expect at least one fixed character and later add further filtering and verification steps.
§.§.§ Matching Potential Secrets
To analyze whether image layers include secrets, we match the selected regular expressions on the images as follows (we will open-source our tool on acceptance of this paper):
We download and decompress the image layers and then match our regular expressions on the included files.
Moreover, we recursively extract archive files up to a depth of 3 and match again.
As API documentation often suggests setting secrets in environment variables rather than writing them into files, we also analyze the set variables.
Since Docker allows downloading the small image configuration containing the set variables separately from the image, i.e., potential attackers do not have to download and search through all files to find included secrets, we analyze these variables separately:
As such, we only download the image configuration file and iterate our regular expression over set environment variables.
Here, we adapt the API expressions, as some expect a specific term before the secret (cf. Table <ref> in Appendix <ref>), e.g., the service name as part of a variable name.
As the variable names and values are separated in the configuration file, we also split the according expressions and match them individually.
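The following Python sketch illustrates this matching step on extracted layer files and on the Env entries of an image configuration; the two rules shown are illustrative stand-ins for our full rule set, and the extraction is not hardened or depth-limited as in our actual tool.

# Simplified sketch of the matching step: decompress a layer, walk its
# files, and apply the regular expressions; image configurations are
# handled separately by iterating over their Env entries.
import json
import pathlib
import re
import tarfile

RULES = {
    "private_key": re.compile(rb"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "aws_access_key": re.compile(rb"AKIA[0-9A-Z]{16}"),   # illustrative API rule
}

def scan_layer(layer_tar: str, out_dir: str):
    hits = []
    with tarfile.open(layer_tar) as tar:
        tar.extractall(out_dir)           # the real tool extracts recursively and sandboxed
    for path in pathlib.Path(out_dir).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for name, rule in RULES.items():
            if rule.search(data):
                hits.append((name, str(path)))
    return hits

def scan_config(config_json: str):
    env = json.load(open(config_json)).get("config", {}).get("Env", []) or []
    return [(name, var) for var in env for name, rule in RULES.items()
            if rule.search(var.encode())]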
Table <ref> (center) lists for each secret domain how many matches and how many distinct matches we found in both, image content and environment variables.
Notably, while only covering two services, i.e., Facebook and Twitter, the expressions in the Social Media domain matched most often over all domains, which already indicates that API secrets of this domain are particularly prone to leakage.
The high redundancy of the matches, visible as the significant decrement between distinct and non-distinct matches, already hints at invalid matches, e.g., private keys or example API tokens prevalent in unit tests or documentation in several layers.
Indeed, the most reoccurring match (mostreoccurringnumocc times in mostreoccurringnumlayer different layers), is an example key for mostreoccurringrule from a library documentation which creators usually include in their images.
We thus validate our matches extensively.
§.§.§ Match Validation
To exclude test keys for cryptographic libraries, example API secrets, and completely invalid matches to get a near lower bound of harmful leaked secrets in Docker images, we use different filters depending on the secret type.
While we show the number of resulting valid secrets in Table <ref> (right), Figure <ref> details the filtering results separated by the match's origin, i.e., image content or environment variable and domain.
Private Keys
Our regular expressions for private keys match on PEM or XML formatted keys.
Thus, we can first exclude every match that is not parsable (filter Unparsable).
Figure <ref> shows that only a minority of all potential private keys in image layers are unparsable, underlining that image creators include and compromise private keys actually usable in final Docker containers for practical operations.
Contrarily, the single match within the environment variables is only a key fragment and thus not parsable.
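A minimal version of the Unparsable filter, assuming the pyca/cryptography package, is sketched below; XML-formatted keys are handled analogously and are omitted here.

# Sketch of the Unparsable filter for PEM-encoded matches.
from cryptography.hazmat.primitives.serialization import load_pem_private_key

def is_parsable_pem(candidate: bytes) -> bool:
    try:
        load_pem_private_key(candidate, password=None)
        return True
    except Exception:          # malformed, truncated, or password-protected key
        return False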
Still, we expect a high number of software test keys in Docker images among found keys, as they are part of several libraries creators might include in their images, e.g., OpenSSL.
Since users will most likely not use such keys to secure their deployments, we filter out test keys that are included in kompromat <cit.>, a repository listing already compromised secrets (filter Kompromat).
More specifically, we filter keys occurring in RFCs (kompromatfoundrfcnumdistinct), libraries for software tests (kompromatfoundsoftwaretestsnumdistinct), or as special test vectors (kompromatfoundtestvectorsnumdistinct).
To also account for software test keys that are not available in kompromat, we analyze the file paths where respective keys were found (filter File).
While we do not generally exclude all paths containing signal words indicating test or example keys, as users might use such paths also for keys they generated and use in practice, we apply different measures.
For instance, based on locations of test keys identified using kompromat, we deliberately exclude matches in similar locations, i.e., keys within directories where we already detected test keys and all parent directories under which we find more than 2/3 test keys.
Last, we exclude file paths typically used by libraries (cf. Appendix <ref>), e.g., , as there is a lower chance that users adapt their keys here.
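The path-based exclusion can be sketched as follows; the 2/3 threshold follows the description above, whereas the listed library paths are merely illustrative examples and not our complete list.

# Sketch of the path-based File filter: drop matches in directories that
# already hold known test keys and in parent directories where test keys
# dominate (more than 2/3 of all keys found there).
from collections import Counter
from pathlib import PurePosixPath

LIBRARY_PATHS = ("/usr/lib/", "/usr/share/doc/")   # illustrative examples only

def build_excluded_dirs(test_key_paths, all_key_paths):
    excluded = {str(PurePosixPath(p).parent) for p in test_key_paths}
    totals, tests = Counter(), Counter()
    for p in all_key_paths:
        for parent in PurePosixPath(p).parents:
            totals[str(parent)] += 1
    for p in test_key_paths:
        for parent in PurePosixPath(p).parents:
            tests[str(parent)] += 1
    excluded |= {d for d, n in totals.items() if tests[d] / n > 2 / 3}
    return excluded

def keep_match(path: str, excluded_dirs) -> bool:
    if path.startswith(LIBRARY_PATHS):
        return False
    return not any(str(parent) in excluded_dirs
                   for parent in PurePosixPath(path).parents)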
Figure <ref> shows that these filters process the largest share of excluded private key matches.
It further indicates that kompromat only includes a minority of software test keys, i.e., is not directly usable to exclude all false-positive matches.
Still, many of the found keys are not filtered and are thus most likely not software test keys.
In total, we found validprivatekeyvalidnumdistinctmatchestotal valid private keys potentially in use in practice (cf. Table <ref> (right)).
Since all of these keys are located in files, attackers would have to download the respective image layers to get access to them, and not only the meta information needed to retrieve environment variables.
Still, since these keys are publicly available and thus compromised, usage in production puts authentication at stake, i.e., attackers can perform impersonation attacks.
API Secrets
Since our ethical considerations deter us from validating API secrets against their service endpoints (cf. Appendix <ref>) as applied by TruffleHog <cit.>, and related methods for false positive detection focus on matches in source code <cit.>, which is not prevalent in Docker images, we need alternative measures to filter invalid matches.
By manually supervising our filtering, we ensure that the final set only includes valid-looking API secrets.
Based on invalid matches in GitHub code repositories <cit.>, we expect human-created example keys that contain keywords, e.g., , or consecutive character sequences, e.g., , that we must exclude (filter Sequence).
To filter consecutive sequences, we search for segments consisting of ascending, descending (both with a length of four), and repeating characters (with a length of three).
Furthermore, we filter matches including sequences that occur unusually often, i.e., we create (frequencyngrammin, frequencyngrammax)-character-grams of all matches, exclude grams created over fixed parts of our regular expressions as well as grams only containing digits, and count the number of occurrences over all API matches.
To account for randomly reoccurring grams, we filter all matches that include grams occurring frequencyNgramsTimeFactor times more often than the average.
We manually ensured that our filter is neither too restrictive nor too loose, i.e., it does not leave frequently reoccurring grams unfiltered.
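The following Python sketch combines the Sequence filter and the n-gram frequency filter; the gram length, the frequency factor, and the omission of the exclusion of regular-expression-specific grams are simplifications of our actual configuration.

# Sketch of the Sequence filter (ascending/descending runs of four,
# repetitions of three) and the n-gram frequency filter for API matches.
from collections import Counter

def has_trivial_sequence(secret: str) -> bool:
    s = secret.lower()
    for i in range(len(s) - 3):
        chunk = [ord(c) for c in s[i:i + 4]]
        diffs = {b - a for a, b in zip(chunk, chunk[1:])}
        if diffs == {1} or diffs == {-1}:          # ascending or descending run
            return True
    return any(s[i] == s[i + 1] == s[i + 2] for i in range(len(s) - 2))

def frequent_gram_filter(secrets, n=5, factor=10):
    # count character n-grams over all matches, ignoring purely numeric grams
    grams = Counter(g for s in secrets
                    for g in (s[i:i + n] for i in range(len(s) - n + 1))
                    if not g.isdigit())
    if not grams:
        return secrets
    avg = sum(grams.values()) / len(grams)
    suspicious = {g for g, c in grams.items() if c > factor * avg}
    return [s for s in secrets if not any(g in s for g in suspicious)]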
Figure <ref> shows that this filtering excludes a large share of matches.
Interestingly, the most reoccurring gram is [sic!], which we could trace back to DNA sequences in images related to bioinformatics underpinning the large variety of different and unexpected file types occurring in Docker images.
Similar to filtering private key matches by their file paths, we also filter API matches occurring in manually selected paths (filter File, cf. Appendix <ref>).
Essentially, we revisited the location and file types of all matches and excluded paths that most likely do not include any valid secrets compromised by publishing these in Docker images.
Figure <ref> indicates that the filtered paths often also include matches filtered by our sequence filter and thus that libraries include strings similar to secrets, e.g., in their documentation.
Still, after manual revision of the remaining matches, we conclude that rules which match on a fixed term before the secret, e.g., the service name, and then allow a specific length of characters are too ambiguous for usage on files in Docker images as they match on arbitrary content, e.g., on hashes with the service name in front.
We thus decide to exclude matches of these rules from our further analysis (gray in Table <ref> (left)), i.e., consider these matches invalid, to ensure the integrity of our further results.
Still, a minority of these matches might be valid, potentially enabling attackers to compromise production services or access confidential data.
Comparing the filter results of API secret matches in files and environment variables, the share of valid matches in variables is significantly higher than in files, indicating that image creators are less likely to include secret placeholders in variables.
Still, as Table <ref> (right) shows, most secrets are located within the images.
Thus, attackers have a higher chance of finding valid secrets when downloading both environment variables and image content.
In total, we found apinumdistinctmatches distinct API secrets in Docker images, mostly related to services from the cloud domain (validapicloudvalidnumdistinctmatchestotal secrets).
Although we cannot prove the functionality of these secrets, the occurrence of apicloud1numdistinctmatches secrets for the apicloud1rule or apicloud2numdistinctmatches secrets for the apicloud2rule indicate that attackers might be able to reconfigure cloud services maliciously, e.g., by editing DNS or VM options.
Additionally, we found evidence for secrets allowing attackers to access private data from social media (validapisocialmediavalidnumdistinctmatchestotal secrets), or even access financial services (validapifinancialvalidnumdistinctmatchestotal secrets, most matches: apifinancial0rule).
Notably, although we focused our image search partly on IoT terms, we found no valid secrets from selected IoT services.
§.§.§ Secrets Owned by Single Users
Based on findings over leaked secrets found on GitHub <cit.>, we expect most valid secrets to reside in images of single users (as users do not share their secrets intentionally).
Conversely, invalid matches, e.g., library test keys, would mainly reside in images of multiple owners.
Thus, to check whether the matches we identified as valid secrets are located in images of single users, we analyze the number of different owners that include a specific secret in their images.
To this end, for images from Docker Hub, we consider the repository owner (embedded in the repository name) as the owner of a secret.
For private registries, we consider the registry's IP address as the owner (assuming that owners only run a single registry and neglecting that registries might use different (dynamic) IP addresses).
Figure <ref> shows that the largest share of valid secrets indeed occurs in images of single owners.
validmatchmultiuserprivatekeyFalsepct of private keys (validmatchmultiuserprivatekeyFalsenum keys) and validmatchmultiuserapiFalsepct of API secrets (validmatchmultiuserapiFalsenum secrets) reside in images of single owners underpinning that these should be protected.
Moreover, we can trace validmatchmultiuserlayer0privatekeyTruenum private keys and validmatchmultiuserlayer0apiTruenum API secrets of multiple owners back to inheritance.
These secrets were already included in the base image, but w.r.t. the overall occurrence, we conclude that secret spread due to inheritance is not a major problem.
To responsibly inform image creators about leaked secrets in their images, we reach out to them whenever possible (numemaildisclosure extractable and valid e-mail addresses) and also contacted the operator of Docker Hub (cf. Appendix <ref>).
Early on, we received notifications of creators that removed found secrets from their images.
totalvalidmatches found secrets show that image creators publish confidential information in their publicly available Docker images.
As attackers have access to these secrets, authentication and other security mechanisms relying on them are futile, potentially leading to compromised servers or leaked privacy-sensitive data.
§.§ Origin of Leaked Secrets
Next, we analyze where the validated secrets stem from to see whether specific images are more affected and why.
To this end, we examine the distribution of affected images and compare between private registries and Docker Hub, as well as IIoT specific and Standard images.
Moreover, we evaluate which operation in the original Dockerfile led to the insertion of secrets and inspect the file paths where they reside to get an intuition for their usage.
§.§.§ Docker Hub Leads Before Private Registries
We already discovered that private registries include potentially sensitive images.
However, until now, it remains unclear whether images on these registries are more often subject to secret leakage than images from Docker Hub, e.g., due to creators believing that these are unavailable for the public.
Thus, we analyze whether leaked secrets occur more often in images from Docker Hub or from private registries.
While we found that numaffectedimages images (pctaffectedimages of images analyzed) contain valid secrets, pctaffectedimagesdockerhub of images from Docker Hub and pctaffectedimagesprivate of images from private registries are affected.
Thus, creators upload secrets to Docker Hub more often than to private registries, indicating that private registry users may have a better security understanding, maybe due to the deeper technical understanding required for hosting an own registry.
Yet, both categories are far from being leak-free.
For Docker Hub, besides the increased fraction of leaked secrets, we see an issue for others, i.e., other users can easily deploy containers based on these images.
Thus, there is a higher chance that their containers base their security on included and compromised secrets.
For example, a shared certificate private key could lead to an impersonation attack.
In case of shared API secrets, all deployed containers might use the same API token leading to exhausted rate limits in the best case, but maybe also to overwritten or insufficiently secured private data.
As a single API token does not allow fine-granular exclusions, i.e., it is either valid or revoked for all users, a revocation would also interfere with benign users.
Independent of their origin, attackers could equally misuse the secrets we found to leverage authentication or access privacy- or security-sensitive data.
As such, both user groups of Docker Hub and private registries leak sensitive information, be it through unawareness or a deceptive feeling of security.
§.§.§ Domains are Similarly Affected
For our image selection on Docker Hub, we specifically included search terms relating to the IIoT, as past research has shown significant security shortcomings in this area.
However, until now it is open whether the found images of a certain domain are subject to revealed secrets more frequently than other images.
To answer this question, we trace images that include secrets back to the query group that led to their inclusion.
We discovered that affectedstandardrepositorypct of the images only found using queries from the Standard query group and affectediiotrepositorypct of images only from the IIoT group include valid secrets[Images found by both query groups are not included.].
Thus, in case of secret leakage via Docker images and based on our selected search terms, the IIoT domain does not perform worse than our Standard domain.
However, it underpins that the problem of secret leakage in Docker images is a prominent issue for all domains.
§.§.§ Fresh Private Keys and Copied API Secrets
To find countermeasures against secret leakage in Docker images, it is important to understand how these leaked secrets became part of Docker images.
More specifically, for private keys, it is unclear whether creators execute commands in the Dockerfile to create fresh keys, which are then published in images, or whether they manually add them, i.e., using COPY or ADD in a Dockerfile.
Additionally, both, private keys and API secrets, could be indirectly included through other means, e.g., by cloning Git repositories or downloading further data.
Figure <ref> shows that while most API secrets are typically inserted by file operations (File), e.g., copied from the image creator's host system, private keys are predominantly included by executing a command within the Dockerfile (Exec.)[Secrets can be associated with both File and Exec. operations, e.g., when a secret is first added to the image and then copied or moved internally by an executed command.].
Thus, private keys might be either downloaded or generated during the creation process.
To further trace the insertion of secrets in Exec. layers back to the responsible executed commands, we analyze these commands.
Since image creators often concatenate several bash commands whose output is then included in a single layer without any opportunity to associate files (and thus secrets) to a specific command, we count each of the commands related to the leakage of a secret.
We show the most prominent of all validmatchnumdistinctcommands commands associated with secret leakage in Figure <ref>.
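Conceptually, this attribution can be sketched by parsing the created_by entries of an image configuration's history, as in the following simplified Python snippet; the keyword list and the command splitting are approximations of our implementation.

# Sketch of attributing a layer to the Dockerfile operation that created
# it, based on the "created_by" strings in the image configuration.
import json

FILE_OPS = ("COPY", "ADD")   # illustrative markers for file operations

def classify_history(config_json: str):
    history = json.load(open(config_json)).get("history", [])
    ops = []
    for entry in history:
        created_by = entry.get("created_by", "")
        kind = "File" if any(op in created_by for op in FILE_OPS) else "Exec."
        # Exec. layers may concatenate several shell commands; count each of them.
        commands = [c.strip() for c in created_by.split("&&") if c.strip()]
        ops.append((kind, commands))
    return ops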
In fact, privatekeyinstsshdpct of private keys were generated in layers where image creators installed the OpenSSH server.
Since the installation triggers the generation of a fresh host key pair, this key pair is automatically included in the image.
While the procedure of automatic key generation is beneficial on real hardware, i.e., users are not tempted to reuse keys on different hosts, in published Docker images it automatically leads to compromised keys and thus puts the authenticity of all containers relying on this image in danger.
Further privatekeysshkeygenpct of found private keys were generated by a direct call of ssh-keygen, e.g., to generate fresh SSH client key material, implying that the generated but compromised key material was planned for use in production.
Given the massive secret leakage on GitHub <cit.>, we also expect secrets to be included in images by cloning Git repositories.
However, only a minority of secrets can be associated with Git, suggesting that the sets of users leaking secrets via Docker and GitHub are distinct. Furthermore, only a minority of secrets were downloaded (using or ) both indicating that the secrets we found were most likely exclusively leaked in Docker images and underpinning that they are actually worth being protected.
§.§.§ File Paths Indicate Usage
To further reason about the usage of our found secrets, we analyze their file paths within the images assessing where secrets stem from and how services apply them.
Separated by private keys and API secrets, Figure <ref> shows the distribution of secrets throughout the directory structure of all images and focuses on the top seven paths.
We found the majority of private keys in underpinning a high prevalence of compromised SSH host keys.
Another large share occurs in suggesting compromised keys used for host authentication via TLS.
This path is also the location for TLS default (“snakeoil”) keys that are used if no other information is provided.
They are auto-generated when the package is installed such that every host possesses a unique default key-pair.
However, when installed during the creation of Docker images, the key is included in the image and, thus, compromised when shared.
Based on the key's filename, indeed, we found numsnakeoiletcssl of such keys which are potentially used to offer TLS services with broken authenticity to the public Internet.
Even more alarming, we found keys lying in , indicating that included keys are associated with a Public Key Infrastructure (PKI), and thus potentially destined to offer services to a higher number of users.
Furthermore, contains private keys used in relation to the IoT and, as per the repository names, for authentication using IoT protocols like CoAP and MQTT.
Thus, attackers possessing these private keys can leverage the authentication of all connections users establish to each container created based on these images.
In fact, attackers then can access or alter transmitted confidential information, e.g., privacy-sensitive user data or commands of IoT services potentially impacting cyber-physical systems.
In addition, we found keys in , i.e., a location where SSH client key pairs typically reside.
Hence, these keys might enable attackers to take over SSH servers, trusting these keys and having access to confidential data.
Contrarily, found API secrets are distributed more evenly through the directory structure.
We found the largest share in , which is the example folder for including own applications in Docker images <cit.>, underlining that image creators compromise their own application's API secrets.
While similar holds for , another large share of secrets resides in stemming from Firefox profiles containing Google Service API secrets in cached JavaScript files.
Although these secrets are most likely usable in combination with Google Maps or Google Analytics and thus meant to be shared with website visitors, this leakage implies privacy issues:
An attacker could retrace the creator's browsing history, which evidently exists since the cache is filled, and which could reveal potentially sensitive information.
In addition, we found a large share of Google API secrets (both Cloud and Services) in .
Since we do not use API tokens for further validation (cf. Appendix <ref>), we cannot be entirely sure whether these secrets are usable or only generated for testing purposes.
However, manual supervision of the matches and including files suggest that they could be actually in use.
pctaffectedimages of analyzed images contain and thus leak secrets.
While the majority stems from public Docker Hub images regardless of their domain, private registries also leak a significant number of secrets.
Notably, associated file paths and commands imply their production use and that various authentication mechanisms are futile.
§.§ Compromised Certificates
To further understand the severity of potentially compromised systems, we now focus on found certificates as they provide various information on their relations and use cases.
Thus, we research the trust chain, validity, and usage parameters of knowncompromizedcerts compromised certificates occurring in Docker images.
Trust Anchors
While self-signed certificates indicate the usage of certificates in controlled environments, i.e., clients need a safelist with all certificates they can trust, CA-signed certificates imply the usage at larger scale as these are trusted by all clients having a corresponding root certificate installed.
We consider certificates where the issuer and the common name are identical as self-signed, and as CA-signed otherwise.
For CA-signed certificates, we consider those which we can validate against widespread root stores[Stores from Android, iOS/MacOS, Mozilla NSS, OpenJDK, Oracle JDK, and Windows.] as signed by a public CA, and otherwise signed by a private CA.
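A minimal sketch of this classification, assuming the pyca/cryptography package and comparing the full issuer against the subject, is shown below; the validation against root stores is not included.

# Sketch of the certificate classification: issuer equal to subject is
# treated as self-signed, everything else as CA-signed.
from cryptography import x509

def classify_certificate(pem_data: bytes) -> str:
    cert = x509.load_pem_x509_certificate(pem_data)
    return "self-signed" if cert.issuer == cert.subject else "CA-signed"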
We discovered that the majority of the found compromised certificates (selfsignedcertspct) are self-signed, but we also found privatecacerts private-CA-signed and casignedcerts public-CA-signed certificates.
While all systems relying on these certificates open the door for impersonation attacks, the occurrence of CA-signed certificates is especially alarming as such certificates are typically planned to provide authenticity to many clients/users and are universally accepted.
Thus, knowing these certificates' private keys not only allows attackers to perform Man-in-the-Middle attacks but also enables them to sign malicious software to compromise others' systems.
Validity
As a countermeasure against key leakage, the certificate's lifetime forces service operators to request new certificates from time to time, as clients should reject outdated certificates.
Notably, casignedvalidondownload public-CA, privatecavalidondownload private-CA, and selfsignedvalidondownload self-signed certificates were valid when we downloaded their containing image layer, showing that the authenticity of relying services is at stake, i.e., the lifetime does not help in these cases of key leakage.
Interestingly, casignedvalidonhistory public-CA, privatecavalidonhistory private-CA, and selfsignedvalidonhistory self-signed certificates were valid when added to their Docker image (as per the image's history timestamp).
While these larger numbers show that the limited lifetime of certificates helps to mitigate leaked private keys, they also indicate that key leakage in images is an ongoing problem, i.e., more and more private keys are leaked.
Usages
The usage attributes of certificates can optionally indicate the practical use-case of CA-signed certificates and, thus, further help to understand the severity of the private key leakage.
All public-CA-signed certificates allow for authentication (digital signatures), casignedparsedFindingextensionsextendedkeyusageserverauth are explicitly declared for server authentication, and casignedparsedFindingextensionsextendedkeyusagecodesigning (private CA: privatecaparsedFindingextensionsextendedkeyusagecodesigning) allow for code signing.
Thus, knowing the private key of these certificates not only allows attackers to perform Man-in-the-Middle attacks, but also enables them to sign malicious software to compromise others' systems.
knowncompromizedcerts found compromised certificates show that leaked private keys can have extensive influence on the authenticity of services and software.
Thus, attackers can impersonate services, decrypt past communications, or sign malware to infect production systems.
§ SECRET USAGE IN THE WILD
Until now, it is open whether the found compromised secrets are used in practice and, if so, to what extent, i.e., whether a single compromised secret is reused due to several Docker containers stemming from the same image.
While we cannot check the validity of API secrets by using them against their destined endpoint due to our ethical guidelines (cf. Appendix <ref>), we can investigate whether hosts on the Internet use found private keys for authentication.
To assess whether Internet-reachable hosts can be subject to impersonation attacks due to secret leakage in Docker images, we check for TLS- and SSH-enabled hosts that base their authentication on compromised private keys by using the Censys database, i.e., 15 months of active Internet-wide measurement results <cit.>.
Here, we search for hosts presenting a public key, i.e., as SSH host key or within a TLS certificate, matching to one of the found compromised keys.
More specifically, we match the fingerprints of public keys in the Censys database against those extracted from the found private keys.
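The matching can be sketched as follows; the SHA-256 fingerprint over the DER-encoded SubjectPublicKeyInfo is an assumption for illustration, as the exact fingerprint format depends on the scan data.

# Sketch of matching found private keys against host keys observed on the
# Internet: derive the public key from each private key and compare a
# SHA-256 fingerprint of its DER encoding with observed fingerprints.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import load_pem_private_key

def public_key_fingerprint(private_pem: bytes) -> str:
    key = load_pem_private_key(private_pem, password=None)
    der = key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(der).hexdigest()

def compromised_hosts(private_keys, observed):   # observed: {fingerprint: [hosts]}
    hits = {}
    for pem in private_keys:
        fp = public_key_fingerprint(pem)
        if fp in observed:
            hits[fp] = observed[fp]
    return hits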
In Figure <ref>, we detail how many hosts base their authenticity on found compromised private keys and how often these keys are reused.
While the total number of hosts relying on compromised keys is worrying on its own (20220901numuniquehosts hosts in Oct. 2022), their protocols, even worse, imply sensitive services.
As such, in October 2022, we find MQTT20220901numuniquehosts MQTT and AMQP20220901numuniquehosts AMQP hosts, potentially transferring privacy-sensitive ((I)IoT) data.
Moreover, FTP20220901numuniquehosts FTP, PostgreSQL20220901numuniquehosts PostgreSQL, Elasticsearch20220901numuniquehosts Elasticsearch, and MySQL20220901numuniquehosts MySQL instances serve potentially confidential data.
Regarding Internet communications, we see SIP20220901numuniquehosts SIP hosts used for telephony as well as SMTP20220901numuniquehosts SMTP, POP320220901numuniquehosts POP3, and IMAP20220901numuniquehosts IMAP servers used for email.
Since these hosts are susceptible to impersonation attacks due to their leaked private keys, attackers can eavesdrop, relay, or alter the sensitive data transmitted here.
Aggravatingly, we also find services with administrative relevance:
SSH20220901numuniquehosts SSH servers rely on SSH20220901corrFingerprint compromised host keys, and Kubernetes20220901numuniquehosts Kubernetes instances use leaked keys, opening the door for attacks that can lead to remote-shell access, botnet extension, or further data access.
The comparably low number of compromised keys used (compared to knowncompromizedhostkeys found SSH host keys) is probably due to a missing need for SSH servers in Docker containers, as other mechanisms, e.g., docker exec, already allow shell access.
Furthermore, we see LDAP20220901numuniquehosts LDAP instances relying on leaked secrets.
As LDAP is used as a basis for user authentication on attached systems, the integrity of an unknown number of other clients is at stake.
For instance, attackers could grant themselves root access to a myriad of systems.
The number of actually used keys is low compared to the number of hosts that rely on them, indicating that a few Docker images lead to numerous compromised container deployments.
Thus, the simplicity of Docker to deploy services based on ready-to-use images puts the authenticity of several instances most likely operated by different users under threat.
In this regard, HTTPS hosts stand out in particular.
HTTP20220901numuniquehosts HTTPS hosts use HTTP20220901corrFingerprint different compromised private keys showing that the reuse of these keys is rampant for Web services.
Thus, attackers can perform Man-in-the-Middle attacks to alter webpages on their delivery or data sent to the server.
Figure <ref> also underpins that the key usage of compromised keys is long-lasting and rising, i.e., over the complete available period the number of compromised systems grew from 20210501numuniquehosts (relying on 20210501corrFingerprint compromised keys) to 20220901numuniquehosts hosts (20220901corrFingerprint keys) indicating that container images with compromised certificates or SSH host keys included are increasingly used.
Thus, the authenticity of more and more systems is futile, offering an ever-growing attack surface.
While our study is significantly driven by initially found compromised keys in Docker images in the area of the IIoT, Censys does not identify secured IIoT protocols other than AMQP and MQTT via TLS.
Thus, we perform our own Internet-wide measurements to inspect more deeply whether IIoT services also use compromised certificates, e.g., for authentic communication via OPC UA.
To this end, we select ten secure IIoT protocols from recent literature <cit.> and mimic the measurement strategy proposed there.
Our results show that besides the already large number of compromised AMQP and MQTT hosts, only 2 CoAP hosts use 2 different leaked keys from Docker containers.
That we do not find substantially more compromised hosts using other IIoT protocols underlines that the issue of key leakage is not an IIoT-specific hotspot but a general problem.
20220901numuniquehosts hosts use 20220901corrFingerprint compromised private keys found in Docker images for authentication on the Internet and encompass deployments using, i.a., MQTT, SMTP, and PostgreSQL.
This widespread usage allows attackers to eavesdrop on confidential or alter sensitive information, e.g., from the IoT, webpages, or databases.
§ DISCUSSION, LIMITATIONS & MITIGATIONS
The outcome of our work has different aspects.
We have seen that numerous private keys are compromised by image creators publishing their images via Docker registries and shown that security relies on these secrets in practice.
Still, future work could investigate the limitations of our approach or implement the mitigation opportunities derived from our results.
View on Available Images
Due to rate and computation-time limits and comprehensive ethical considerations (cf. Appendix <ref>), we could not analyze all available images on Docker Hub and private registries.
Thus, we might have missed secrets included in single layers or complete images that were not subject to our study.
In this light, the absolute number of found secrets is already very alarming.
Also, in relative numbers, our results should be representative of the selected groups due to our sampling.
Yet, the selected groups, i.e., our Docker Hub search terms, might lead to skewed results overestimating the overall population.
For instance, images that are not targeted at protocols might have been created with fewer secrets.
Thus, we opted for a broad body of terms based on, i.a., public polls <cit.> to avoid any bias.
Moreover, our private registry analysis has not been targeted but included randomly sampled layers, and we still found a similar share of affected images as on Docker Hub.
As such, we believe that our relative results are—at least in their magnitude—representative for the overall population of Docker images publicly available.
Missing Methods to Check API Secrets
While relying on Internet-wide measurements was a suitable measure to assess the usage of compromised private keys for the authenticity of Internet-reachable services, we could not check whether found API secrets are functional.
The only option would be to contact the corresponding API's endpoint to check for the acceptance of found credentials.
However, due to our ethical considerations, we must not use found secrets as such usage might influence other systems or services.
Thus, we cannot validate them against their respective endpoint.
Still, the number of found secrets is worrying and looking at the usage of compromised private keys, we are convinced that many API secrets are also functional.
Causes & Mitigation Opportunities
We have seen both creators actively copying secrets from their local file system into the image, e.g., most of the API secrets but also private keys, incl. certificates, and passively generating key material during the image creation process, e.g., by installing an OpenSSH server.
Both behaviors lead to compromised secrets and affect the security of both image creators and users basing their containers on an image and already included secrets.
Most likely, creators and users are unaware of compromising or using compromised foreign secrets.
In fact, compared to GitHub, which provides a graphical interface to browse published files and potentially notice a mistakenly uploaded secret, files in Docker images and containers cannot be browsed easily, i.e., users barely get an overview on included files.
Furthermore, while Git repositories only include manually added files, images of Docker containers contain a complete system directory tree.
Thus, files with included secrets cannot be identified easily.
The mitigation of these problems must be two-fold.
On the one hand, image creators must be warned that they are uploading their secrets to (publicly reachable) Docker registries.
On the other hand, when deploying containers based on downloaded images, users should be informed that included secrets, especially private keys, might already be compromised, putting the authentication of deployed services at stake.
To this end, credential-finding tools such as TruffleHog <cit.> or SecretScanner <cit.> can be integrated on both sides of the Docker paradigm.
When uploading or downloading an image, these tools could then scan all layers of the image for included secrets.
To reduce the number of false positives, for potential API secrets, the tool can also check the secret's function against the respective endpoint (we think this is also ethically correct on the user's side who downloaded the image).
For private keys, the tools could maintain a list of test keys that are usually included in libraries.
Increasing the image creator's awareness regarding the leakage of such secrets should decrease their number in uploaded images.
Additionally, performing a second check at the user deploying a container based on a downloaded image should further decrease the number of services relying on already compromised secrets.
An additional help could be an API + graphical view for images on Docker Hub, which shows the included files.
This API could also enable third-party solutions similar to those for GitHub <cit.> to easily search for known secret file paths.
§ CONCLUSION
Containerization allows integrating applications and their dependencies into self-contained and shareable images, making software deployment easy.
However, when focusing on security, sharing secrets or using already compromised secrets breaks security promises, e.g., authenticity or access control.
Thus, cryptographic secrets must not be included in publicly available container images.
Our analysis of numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax private registries revealed, however, that pctaffectedimages of them include secrets that should not be leaked to the public.
More specifically, we found a near-lower bound of validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets.
validapicloudvalidnumdistinctmatchestotal API secrets belonging to cloud providers, e.g., apicloud1rule (apicloud1numdistinctmatches secrets), or validapifinancialvalidnumdistinctmatchestotal secrets to financial services, e.g., apifinancial0rule (apifinancial0numdistinctmatches secrets), show that attackers can cause immediate damage knowing these secrets.
Focusing on the leaked private keys, we find that these are also in use in practice: 20220901numuniquehosts TLS and SSH hosts on the Internet rely their authentication on found keys, thus being susceptible to impersonation attacks.
Notably, many private keys are generated automatically when packages are installed during image creation.
While this behavior is beneficial when running on real hardware, where every computer generates its own key, in container images this process automatically leads to compromised secrets and potentially a vast number of containers with compromised authenticity.
We further discover that especially private registries serve images with potentially sensitive software, most likely not intended to be publicly shared.
Additionally, these registries might not prevent write access enabling attackers to add malware to images.
Our work shows that secret leakage in container images is a real threat and not negligible.
In particular, the proven usage of leaked private keys in practice confirms numerous of the introduced attack vectors.
As a countermeasure, the awareness of image creators and users regarding secret compromise must be increased, e.g., by integrating credential search tools into the Docker paradigm.
Funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) — Research Project VeN2uS — 03EI6053K.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy — EXC-2023 Internet of Production — 390621612.
ACM-Reference-Format
§ ETHICAL CONSIDERATIONS
Our research curates a comprehensive archive of leaked security secrets in Docker images on Docker Hub and private registries whose leakage is again a threat to security.
Moreover, to find private registries and deployments that base their security on leaked secrets, we leverage Internet-wide measurements that can have unintended implications, e.g., high load on single network connections impacting stability or alerting sysadmins due to unknown traffic.
Thus, we base our research on several ethical considerations.
First, we take well-established guidelines <cit.> and best practices of our institution as base for our research.
We handle all collected data with care and inform image creators and Docker Inc., to responsibly disclose our findings (cf. Appendix <ref>).
Moreover, we comply with recognized measurement guidelines <cit.> for our Internet-wide measurements reducing their impact (cf. Appendix <ref>).
§.§ Handling of Data & Responsibilities
During our research, we always only collect and request publicly available data, i.e., our access is limited to publicly available image repositories.
At no time do we bypass access control, e.g., by guessing passwords.
We, thus, cannot download private images.
Still, we revealed that many of the public images contain sensitive security secrets (cf. Section <ref>) which we stored for further analysis.
All found secrets are stored on secured systems.
Furthermore, we refrain from releasing our dataset including these secrets or image names, to not provide an archive of leaked secrets or pinpoints for potential attackers.
While this restriction prevents others from independently reproducing our results, we consider this decision to constitute a reasonable trade-off to protect affected users.
Responsible Disclosure
To further support affected users in removing their secrets from publicly available Docker images, we target to responsibly disclose our findings.
To this end, we extract e-mail addresses from maintainer variables set in Dockerfiles and furthermore derive addresses from Gravatar accounts linked to affected Docker Hub accounts.
In this regard, we identified numemaildisclosure e-mail addresses we contacted to notify about our possible findings.
Already after a few hours, we received >30 answers from owners appreciating our efforts, fixing their images, or informing us that the image at hand is no longer used.
A handful informed us that no secrets were leaked, helping us to refine our filtering.
Moreover, we decided to reach out to the operator of Docker Hub, i.e., Docker Inc., to discuss potential further disclosure to unidentifiable creators.
§.§ Reducing Impact of Measurements
To reduce the impact of our active Internet scans, we follow widely accepted Internet measurement guidelines <cit.>.
Coordination
We coordinate our measurements with our Network Operation Center to reduce the impact on the Internet and to react correspondingly.
Abuse e-mails are handled by informing senders about the intent of our measurements and how to opt out.
As part of this opt-out process, we maintain a blocklist to exclude IPs from our measurements.
External Information
To give external operators information about our research intent, we provide rDNS records for all our scan IPs and transmit contact information in the HTTP header of each request to the registries.
Moreover, we host a webpage on our scan IPs, which gives further information on our project and how to opt-out.
Over time, also due to other measurements, we excluded 5.8 M IP addresses (0.14% of the IPv4 address space).
Limiting Load
To limit load and stress on all systems involved (along the path and the end-host), we deliberately reduce our scan-rate.
Our scans are stretched over the course of one day and use address randomization to spread the load evenly.
We further limit the load on single private registries when downloading available images.
While we paid to increase the existing rate limiting for image downloads on Docker Hub (cf. Appendix <ref>), private registries typically do not implement any rate limiting.
Hence, to prevent our scanner from overloading registries running on resource-constrained hardware or connected via slow or volume-billed Internet connections, we decide to only download image layers randomly until their size sums up to at most 250.
Additionally, we shuffle the downloads of layers of different registries to further distribute the load.
§.§ Overall Considerations
Considered without our goals in mind, the sensitive nature and the impact of our measurements can quickly lead to the conclusion that they are not beneficial.
However, we consider it in the public interest and fundamental for improving security to know about potential security issues and how widespread they are.
The Docker paradigm does not include any mechanisms to prevent image creators from (accidentally) adding security secrets to their images, and no mechanisms exist that warn users relying on already compromised security secrets.
Hence, we consider it essential to know whether secrets are widely included in publicly available Docker images and whether these are in use at scale to steer future decisions for counter-measures.
To answer this question, we carefully weighed the impact of our measurements against their benefit and have taken sensible measures to reduce the risks of building a large archive of leaked security secrets and risks introduced by active Internet measurements.
§ IMAGE DOWNLOAD FROM DOCKER HUB
The limit of image manifest downloads from Docker Hub depends on the booked plan, e.g., free users are allowed to pull only 800 images per day.
Hence, for a faster analysis of images on Docker Hub, we purchased two Pro accounts, that allow 5000 image downloads per day each.
Still, we are required to perform our analysis on a subset of the available images, as downloading one image from each of the 9,321,726 available repositories would require 933 days under the best conditions.
Thus, we decided to limit our analysis to two categories:
(i) a context of standard protocols and frequently used technologies, and
(ii) an (Industrial) IoT context for comparison.
Both categories have communication in common as here security can be affected on an Internet scale.
Standard Context
To generate a wide view on secret leakage in Docker images, we create a list of search queries comprising standard protocols <cit.>, and frequently used technologies <cit.>.
To find related images, we employ Docker Hub's API to perform searches over all available images and retrieve results users would retrieve when using the CLI command or Docker Hub's web interface.
To ensure that different handling of special characters in technology and protocol names does not exclude any images, we include different spelling variants in our query list, i.e., we include terms as they are, but also add variants in which non-alphanumeric characters are removed or replaced by a space.
Table <ref> (top) shows our constructed search queries for the standard context.
(Industrial) IoT Context
We extend our analysis to images in the (Industrial) IoT context, as deployments in this area showed massive security deficits in the past <cit.>, in single cases traced back to security secret leakage via GitHub and Docker images <cit.>.
As search terms, we take (Industrial) IoT protocol names that were subject to recent research <cit.>.
We proceed similarly to the standard context, i.e., we include derived spellings of these terms, and show our constructed search query for this context in Table <ref> (bottom).
§ REGULAR EXPRESSIONS
Following already established procedures to find security secrets in code repositories <cit.>, we build our secret detection in Docker images on regular expressions, i.e., we try to match regular expressions derived from secrets against the content of included files.
Table <ref> shows our composed list of regular expressions covering a variety of secrets, i.e., asymmetric private keys and API keys, as well as accompanying material we use for our analysis, i.e., public keys and certificates.
We orientate our expressions towards related work <cit.> and TruffleHog <cit.>, an established tool to find secrets in various sources, i.e., the local file system, Git repositories, S3 storages, and syslogs.
Specifically, we inherit Meli et al.'s <cit.> regular expressions to allow comparisons between the occurrence of leaked secrets in GitHub repositories at scale and our findings.
Furthermore, they composed their expressions comprehensively, i.e., they included API keys for services selected by the occurrence of their domains in Alexa's Top 50 Global and United States lists, in combination with a list of well-known APIs manually filtered for services with a high risk of key leakage and keys with a distinctive signature (to reduce the number of false positives).
For private keys, they focus on the most prevalent types and storage formats, i.e., RSA, elliptic curve, and PGP keys, as well as general keys in PEM format.
To broaden our analysis and align our expressions with the scope of our search queries (cf. Appendix <ref>), we adapt our expression for private keys to match every type of private key in PEM format and, furthermore, extend the list of expressions to also match private key blocks, keys in PKCS7 format, and keys stored in XML format (due to their unambiguous signature).
Regarding API secrets to match, we extend our list with expressions from TruffleHog <cit.> on the basis of services currently trending among developers <cit.> or having a high risk of misuse, and whose regular expressions include a unique signature (also to reduce the number of false positives).
For some services we found more than one type of secret, i.e., secrets for different API versions (GitHub v1 and v2), or different types of keys (Stripe).
Our final list contains 48 expressions which we match on the content of every file in the images part of our study.
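To make the matching step concrete, the following minimal Python sketch illustrates regex-based secret detection on file contents; the three patterns shown are illustrative examples of well-documented secret signatures, not the exact 48 expressions used in our study.

import re

# Illustrative subset of secret patterns; the study uses 48 expressions.
# These follow widely documented signatures and are examples only.
PATTERNS = {
    "aws_access_key_id": re.compile(rb"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(rb"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "github_token": re.compile(rb"ghp_[0-9A-Za-z]{36}"),
}

def scan_file(path):
    """Return (pattern_name, matched_bytes) pairs found in a single file."""
    matches = []
    try:
        with open(path, "rb") as fh:
            content = fh.read()
    except OSError:
        return matches  # unreadable layer entry, skip
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(content):
            matches.append((name, m.group(0)))
    return matches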
§ FILTERING BASED ON FILEPATHS
After matching our regular expressions on arbitrary file content available in Docker images, extensive filtering is required to exclude false positive matches, i.e., matches that do not contain any secret.
Our File filter is based on file paths derived from matches that our Kompromat filter excluded, i.e., all parent directories under which more than 2/3 of the found keys are test keys known by kompromat <cit.>, and all directories that directly include known test keys.
Additionally, it takes into account manually compiled file paths, e.g., directories where standard libraries reside or where package managers store their downloads, as well as extensions of database files, which we selected after manually revisiting all matches, as these produced a high number of false positives.
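As a rough illustration of this path-based filtering, the sketch below discards matches under excluded prefixes or with database-file extensions; the listed paths and extensions are hypothetical examples, since the actual lists are derived from kompromat statistics and manual inspection.

# Example exclusion rules; the real File filter derives its lists from
# directories dominated by known test keys plus manually compiled paths.
EXCLUDED_PREFIXES = ("/usr/lib/", "/usr/share/doc/", "/var/cache/apt/")
EXCLUDED_SUFFIXES = (".sqlite", ".db")

def keep_match(file_path):
    """Return False for matches in library/package paths or database files."""
    return not (file_path.startswith(EXCLUDED_PREFIXES)
                or file_path.endswith(EXCLUDED_SUFFIXES))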
Figure <ref> shows the seven most prevalent file paths that contain matches excluded by our File filter.
Indeed, most of the exclusions are matches included in folders belonging to package managers and thus most likely test secrets.
The massive filtering of API secret matches is mainly due to the high number of false positives produced by the Twitter regular expressions on database files.
|
http://arxiv.org/abs/2307.05000v1 | 20230711034010 | Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar | [
"Cong Wang",
"Di Kang",
"Yanpei Cao",
"Linchao Bao",
"Ying Shan",
"Song-Hai Zhang"
] | cs.CV | [
"cs.CV"
] |
Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar
Cong Wang^1, Di Kang^2, Yanpei Cao^2, Linchao Bao^2, Ying Shan^2, Song-Hai Zhang^1
^1Tsinghua University, ^2Tencent AI Lab
August 12, 2023
=================================================================================================================================
[Teaser figure caption: NPVA explores a point-based neural representation combined with volume rendering, achieving high-fidelity facial animations (images and depth maps) while maintaining efficiency comparable to mesh-based methods (Tab. <ref>). During training, NPVA can adaptively allocate more points to challenging facial regions, forming a thicker “shell” (i.e., a higher variance of projected distances onto the face in the normal direction) and increasing capacity as needed. In the leftmost image, we show our rendering and the ground truth (GT) side-by-side on the right and left parts, respectively. Observe the close resemblance between our rendering quality and the GT.]
Rendering photorealistic and dynamically moving human heads is crucial for ensuring a pleasant and immersive experience in AR/VR and video conferencing applications.
However, existing methods often struggle to model challenging facial regions (e.g., mouth interior, eyes, hair/beard), resulting in unrealistic and blurry results.
In this paper, we propose the Neural Point-based Volumetric Avatar (NPVA), a method that adopts the neural point representation as well as the neural volume rendering process and discards the predefined connectivity and hard correspondence imposed by mesh-based approaches.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map, achieving increased modeling capacity and more accurate control.
We introduce three technical innovations to improve the rendering and training efficiency: a patch-wise depth-guided (shading point) sampling strategy, a lightweight radiance decoding process, and a Grid-Error-Patch (GEP) ray sampling strategy during training.
By design, our NPVA is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
Experiments conducted on three subjects from the Multiface dataset demonstrate the effectiveness of our designs, outperforming previous state-of-the-art methods, especially in handling challenging facial regions.
§ INTRODUCTION
Realizing photorealistic rendering of an animatable human head is a pivotal goal in computer graphics and vision,
which has broad applications such as AR/VR communications <cit.>, gaming <cit.>, and remote collaboration <cit.>.
However, providing a satisfying and immersive experience in these applications remains immensely challenging due to our innate ability to express and perceive emotions through subtle facial cues <cit.>.
Existing data-driven learning methods often generate noticeable artifacts in the mouth area and unrealistic hair textures <cit.>.
This limitation is primarily attributed to their underlying mesh-based representations, since the predefined mesh has a fixed topology and limited discretization resolution.
To illustrate this limitation, consider the Deep Appearance Model (DAM) <cit.> that decodes a head mesh and the corresponding view-specific texture UV map.
Despite achieving high-quality rendering results for the skin regions, DAM produces conspicuous artifacts in the mouth and hair regions due to inaccurate correspondences across frames (e.g., mouth interior) and the mesh's inability to model thin structures (e.g., hair).
To alleviate these issues, Pixel Codec Avatar (PiCA) <cit.> proposes the use of neural textures, allowing the subsequent neural renderer to address inaccurate shape estimation and topological inconsistencies (e.g., closed/open mouth), attaining moderately improved facial geometry and renditions for the mouth region.
Similarly, Mixture of Volumetric Primitives (MVP) <cit.> attaches volumetric primitives (predicted by a CNN) to mesh vertices, replacing vertex colors with more flexible volumetric primitives.
However, these enhanced “texture”-like representations remain embedded in a predefined topology and tend to produce blurry results if inaccurate correspondences occur.
Therefore, we propose NPVA, which leverages highly flexible neural points <cit.> and versatile neural volume rendering, enabling sharper rendering for both topologically changing geometries (e.g., mouth interior) and translucent thin hair/beard structures.
To create animatable head avatars, another key technical challenge lies in enhancing the controllability of neural points to generate accurate target expressions.
To address this, we reconstruct an intermediate coarse geometry (represented as a UV position map) of the driving signal (i.e., target expression) and constrain the movable neural points close to its surface.
Further, to maximize the benefits of neural volume rendering, we introduce an additional displacement map, which allows the points to move to more optimal positions around the surface.
For instance, after training, more points are located inside the mouth, resulting in a thicker “point shell” and increased modeling capability for volume rendering.
Efficient rendering and training are also indispensable for practical applications.
To this end, we propose three technical innovations.
(1) We introduce a novel depth-guided <cit.> sampling method that incorporates local depth context information (i.e. a patch), achieving more realistic rendering while reducing the rendering time by ∼ 10 × compared to the vanilla NeRF.
(2) We develop a lightweight radiance decoding process that eliminates unnecessary per-point processing used in <cit.>, significantly improving rendering efficiency (∼ 7 ×) and offering better generalization in our dynamic modeling task.
(3) Lastly, to speed up training, we propose a novel Grid-Error-Patch (GEP) ray sampling strategy comprising three stages: a uniform grid-sampling stage for rapid initialization of a coarse result, an error-based importance sampling that delves into more challenging regions, and a patch-based stage to impose high-level perceptual image losses.
Our contributions are summarized below:
* We propose a novel volumetric representation based on neural points that are dynamically allocated around the surface (i.e., the target expression) for animatable head avatar creation.
This representation is inherently capable of better handling thin geometry (e.g., hair) and topological changes (e.g., mouth open and close).
* We introduce three technical innovations to enable efficient rendering and training, including
a patch-wise depth-guided shading point sampling method,
a lightweight radiance decoding process,
and a Grid-Error-Patch training strategy.
* Experiments on the Multiface dataset show that our approach produces higher-quality images for novel expressions and novel views while being ∼ 70 × faster than NeRF.
§ RELATED WORKS
Continuous Neural Representations.
Neural implicit representations, such as ONet <cit.>, DeepSDF <cit.>, and NeRF <cit.>, parameterize a continuous occupancy/SDF/radiance fields with a network (e.g. a multilayer perceptrons, MLP).
The property (e.g. occupancy, color/density) of any location in the scene can be queried by feeding its coordinate (optionally along with a specified viewing direction) to the MLP, which is also referred to as a coordinate network.
Rapid progress has been achieved on shape modeling <cit.> and appearance modeling <cit.>.
Later, these neural implicit representations are extended to handle dynamic scenes (e.g. a talking head) by introducing some deformation techniques.
For example, NerFace <cit.> conditions the NeRF on 3DMM coefficients, making a small step toward animatable avatar creation.
DFA-NeRF <cit.> further conditions NeRF on disentangled face attribute features and enables more detailed control over the talking head.
Methods using neural implicit representations assume no fixed topology and have infinite resolution (in theory), achieving impressive novel view synthesis results for dynamic scenes.
However, they usually suffer from various artifacts (e.g. blur, distortion) and inaccurate control when rendering novel expressions/poses, which is critical for creating high-quality animatable avatars.
In contrast, our approach employs an explicit point-based representation driven by a coarse surface (represented as a UV position map) of the novel expression, achieving better generalization on novel expression and precise expression control.
Discrete Neural Representations.
Common discrete representations in CV/CG include mesh, regular grid/voxel, and irregular point cloud.
Recently, these representations have been extended with neural features to not only reconstruct the shape, but also create photorealistic image renderings thanks to the rapid progress in differentiable neural rendering <cit.>.
For example, neural volumes <cit.> learns a radiance field in the canonical space and introduces another warp field to handle dynamic scenes.
However, regular grid/voxel-based representations usually suffer from the required cubic memory footprint and slow rendering speed due to the processing of empty voxels.
Another line of works use mesh-based neural representations <cit.>.
For example,
PiCA employs a neural UV texture map to render the head avatar in different expressions from any given viewpoint.
Neural head avatar <cit.> introduces additional geometry and texture networks complementing a base FLAME <cit.> head model, resulting in better shape and texture modeling.
However, mesh-based representations require accurate surface geometry and semantically consistent correspondence, which is usually hard to acquire.
Alternatively, MVP combines mesh- and grid-based representations, and uses the volume rendering technique to render images.
But the color and density of MVP's primitives are decoded from 2D CNNs and further aggregated to obtain the radiance of query points, which inevitably causes blurry renderings.
As a result, we propose to explore the more flexible neural point-based representation.
Concurrent to our work, PointAvatar <cit.> also uses a point-based representation.
Different from ours, their points only store color information and use splatting for rendering.
We utilize powerful volume rendering and thus can naturally render hair in higher quality.
Differentiable Neural Rendering.
Rapid progress in differentiable neural rendering <cit.> makes the analysis-by-synthesis pipeline more powerful than before and enables avatar creation from even a short monocular video <cit.>.
For example, deferred neural rendering <cit.> proposes a neural texture map, which is decoded by a neural renderer into high quality renderings from any viewpoint.
Recently, neural volume rendering <cit.> achieves impressive high-quality renderings by introducing volume rendering to neural implicit fields.
What's more, volume rendering can naturally model translucent objects (e.g. hair, smoke) in very high fidelity, which is very suitable for head modeling.
However, a major drawback of neural volume rendering is low rendering efficiency.
Our approach employs volume rendering to ensure high-fidelity visual results, and proposes a series of acceleration methods to achieve a rendering speed matching neural texture rendering.
§ METHODS
An overview of our NPVA is shown in Fig. <ref>.
Given a latent code corresponding to the target facial state, we employ three decoders to generate a position map Ĝ_o, a displacement map Ĝ_d, and a feature map F̂, respectively.
Our representation is constructed from these maps, followed by a point-based neural volume rendering to efficiently produce high-fidelity images and detailed depth maps from any viewpoint.
In Sec. <ref>, we introduce our representation that leverages flexible point clouds for improved modeling of topological changes and thin structures, and utilizes volume rendering to produce high-fidelity images.
In Sec. <ref>, we present our efficient rendering and training strategies, enabling NPVA to render photorealistic images comparable to NeRF while being ∼70× faster.
Sec. <ref> details our training losses.
Sec. <ref> describes our network architectures and implementation details.
§.§ NPVA Representation
§.§.§ Neural Radiance Field
NeRF <cit.> effectively encodes a static scene using a Multi-Layer Perceptron network (MLP), achieving unprecedented high-quality novel view synthesis results.
During rendering, the trained MLP takes the scene coordinates x∈ℝ^3 and viewing direction (θ, ϕ) as input and produces the corresponding density σ_x and view-dependent color c_x.
The final color of an image pixel is obtained via volume rendering (Eq. (<ref>)), which integrates all shading points on the ray that passes through this image pixel.
Mathematically, NeRF uses piece-wise constant density and color as an approximation:
c = ∑^N_i=1 T_i (1 - exp (- σ_i δ_i)) c_i,
T_i = exp(-∑^i-1_j=1σ_j δ_j)
where σ_i and c_i denote density and color, respectively,
and δ_i is the distance between two adjacent shading points.
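For concreteness, a minimal NumPy sketch of this piece-wise constant compositing (Eq. (<ref>)) is given below; variable names are ours, and the routine handles a single ray.

import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Piece-wise constant volume rendering of one ray.

    sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) bin widths.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-bin opacity
    trans = np.exp(-np.cumsum(sigmas * deltas))      # cumulative transmittance
    trans = np.concatenate([[1.0], trans[:-1]])      # shift so that T_1 = 1
    weights = trans * alphas                         # contribution of each bin
    return (weights[:, None] * colors).sum(axis=0)   # final pixel color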
§.§.§ Animatable Neural Points
We employ explicit neural points in our NPVA to achieve more controllable deformation of the underlying implicit neural radiance field.
Leveraging this geometry proxy can also significantly enhance the rendering efficiency (Sec. <ref>).
Inspired by Point-NeRF <cit.>, our representation consists of a set of neural points, denoted as 𝒜 = {(p_i, f_i) | i=1, …, N}, where p_i ∈ℝ^3 denotes the location of point i, and f_i represents its associated feature.
In our network, these features are learned in a variational auto-encoding style, similar to DAM <cit.> and PiCA <cit.>, to create an animatable avatar.
Specifically, the geometry sub-network in the decoder predicts a UV position map Ĝ_o and a UV displacement map Ĝ_d.
Ĝ_o stores the vertex positions of a coarse head mesh. We apply additional intermediate supervision on the 256^2 position map Ĝ_o to obtain better expression control.
To achieve better expressiveness of the point-based neural radiance field, we upsample the position map Ĝ_o to 1024^2 and incorporate the high-resolution displacement map Ĝ_d to compensate for inaccuracy contained in the coarse geometry. By compositing the upsampled Ĝ_o and Ĝ_d, we determine the positions of the neural points.
The final neural points can adjust their positions adaptively around the surface, as we only apply a regularization term to penalize unreasonably large displacements.
§.§.§ Lightweight Radiance Decoding
The radiance (i.e., color c_x and density σ_x) at position x∈ℝ^3 is extracted based on (up to) its K nearest neighboring points and a neural decoding MLP (see Fig. <ref>), inspired by prior methods <cit.>.
To increase efficiency and ensure better generalization on novel expressions, we design a novel lightweight radiance decoding process that directly aggregates the neural point features and passes this “average” feature to a lightweight neural decoding network, achieving ∼7× speedup (compared to Point-NeRF) and better renderings.
Specifically, for a given shading point, we compute a weighted average of the features and relative positions of (up to) its K nearest neural points inside a sphere of radius R as in Point-NeRF:
f_x = ∑_i w_i/∑ w_if_i, v_x = ∑_i w_i/∑ w_iv_i
where v_i ∈ℝ^6 is the position encoding of the displacement vector between neural point p_i and the shading point x,
w_i = 1/|| p_i - x ||_2 is the weight that is inversely proportional to the Euclidean distance between x and p_i.
Then, two shallow MLPs are applied to decode this average feature to density σ_x and view-dependent color c_x:
ℱ_d: (f_x, v_x) →σ_x, ℱ_c: (f_x, v_x, d) →c_x
Positional encoding <cit.> is applied to every dimension of the input vector in ℱ_d and ℱ_c.
Finally, a pixel color is obtained via the integration in Eq. (<ref>).
The primary difference between our radiance decoding and previous methods <cit.> is that we omit the per-point feature processing MLP before the feature aggregation in Eq. (<ref>).
This modification provides two benefits: it results in ∼ 7× speedup and improved generalization to unseen expressions in our dynamic modeling task (Fig. <ref>).
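A minimal sketch of the neighbor aggregation in Eq. (<ref>) is shown below; for brevity it returns the raw weighted displacement instead of its positional encoding, omits the two shallow decoding MLPs, and the radius value is an assumed placeholder.

import numpy as np

def aggregate_features(x, point_pos, point_feat, radius=0.05, K=8):
    """Inverse-distance aggregation over (up to) K neighboring neural points.

    x: (3,) shading point; point_pos: (M, 3); point_feat: (M, F).
    Returns (f_x, v_x), or None if no neural point lies inside the sphere.
    """
    d = np.linalg.norm(point_pos - x, axis=1)
    idx = np.argsort(d)[:K]                 # K nearest candidates
    idx = idx[d[idx] < radius]              # keep those inside the query sphere
    if idx.size == 0:
        return None                         # empty space: treated as zero density
    w = 1.0 / np.maximum(d[idx], 1e-8)      # weights inversely prop. to distance
    w = w / w.sum()
    f_x = (w[:, None] * point_feat[idx]).sum(axis=0)
    v_x = (w[:, None] * (point_pos[idx] - x)).sum(axis=0)
    return f_x, v_x                         # fed to the shallow density/color MLPs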
§.§ Efficient Rendering and Training
§.§.§ Patch-wise Depth-guided Volume Rendering
Since we have some prior knowledge about the scene (i.e., a head and its coarse shape), we can focus on sampling the shading points around the surface, significantly improving the rendering efficiency compared to the original NeRF.
We propose a patch-wise depth-guided sampling strategy, taking into account that the envelope of a head mesh is often not very accurate and the visible facial parts could appear on different depth levels (e.g., jaw and neck).
Specifically, we define a fixed connectivity on the 256^2 position map Ĝ_o so that we can easily rasterize a depth map D_rast.
For a ray passing through [p_x, p_y], we consider a local depth patch centered on it
and obtain the minimum and maximum depth values D_min and D_max.
In our implementation, we consider only nine pixels, i.e., {D_rast (p_x + i, p_y + j) | i, j ∈{-s, 0, s}}, where s is a hyper-parameter set to 16 on 1024×667 images.
If D_max - D_min < δ_d (smaller than a threshold), there is only one depth level (e.g., no sudden depth changes like jaw and neck), and we use d_c = (D_max + D_min) / 2 as our sampling center.
If D_max - D_min≥δ_d, there are likely two depth levels, and we split our budget equally between them and sample shading points around D_min and D_max separately.
For every depth level d_c, we uniformly randomly sample points p(t_i) within evenly spaced bins t_i centered at d_c as follows:
p(t_i) = o + t_i d,
t_i ∼𝒰[d_c + (2(i-1)/N - 1) r, d_c + (2 i/N - 1) r]
where o is the optical center,
d indicates the view direction,
𝒰 represents uniform distribution,
r is the sampling radius and is set to 20 in our implementation,
and N denotes the number of sampling points.
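The following sketch summarizes this patch-wise depth-guided sampling for one ray; s, r and the per-ray budget follow the values stated in the text, while the depth-level threshold delta_d is an assumed placeholder and image-boundary handling is omitted.

import numpy as np

def sample_depths(depth_map, px, py, s=16, delta_d=40.0, r=20.0, n_total=20):
    """Patch-wise depth-guided shading-point depths for the ray through (px, py)."""
    patch = [depth_map[py + j, px + i] for i in (-s, 0, s) for j in (-s, 0, s)]
    d_min, d_max = min(patch), max(patch)
    if d_max - d_min < delta_d:
        centers = [(d_min + d_max) / 2.0]      # single depth level
    else:
        centers = [d_min, d_max]               # split the budget over two levels
    n = n_total // len(centers)
    depths = []
    for d_c in centers:                        # stratified sampling around each level
        depths.append(d_c - r + 2.0 * r * (np.arange(n) + np.random.rand(n)) / n)
    return np.sort(np.concatenate(depths))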
Discussion: E-NeRF <cit.> proposes a similar pixel-wise depth-guided sampling method, reducing the required shading points for volume rendering.
However, the pixel-wise depth-guided sampling method only considers the depth of the current pixel and cannot properly handle facial parts that appear on different depth levels (e.g., beard), yielding suboptimal results like mesh-based methods (see Fig. <ref>).
§.§.§ GEP Training Strategy
For head image rendering, artifacts usually appear in several difficult but small regions (e.g., mouth, eyes). Therefore, uniformly sampling rays covering the entire head region is inefficient.
To address this issue, we propose a three-stage ray sampling strategy that consists of a Grid-based uniform sampling stage to initiate the training, an Error-based importance sampling stage to refine the challenging regions,
and a Patch-based sampling stage to improve perceptual quality.
Grid-Sample Stage (G-Stage)
In this stage, we prioritize full coverage of the images.
The image is split into equal-sized grid cells without overlap, and we randomly sample one ray per cell.
G-Stage ensures uniform sampling across all regions and generates an initial model that produces reasonable results for all regions of the image.
Moreover, we keep track of the error for each grid, obtaining an error map E (8×8) for later error-based importance sampling.
Error-Sample Stage (E-Stage)
During this E-Stage, we adjust the sampling probabilities of the grids based on the grid error map initialized in the previous G-Stage.
Specifically, the sampling probability of a grid region is proportional to its grid error, resulting in an error-based importance sampling similar to <cit.>.
As a result, we allocate more computing budget to difficult facial regions (e.g., mouth interior, hair, and eyes in "Error Sample" of Fig. <ref>), significantly improving image quality within the same number of training epochs (see Fig.<ref>).
Note that we maintain an error map of smaller size in this stage and dynamically update the sampling probability.
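As a rough sketch of the E-Stage, the routine below draws ray coordinates with per-cell probability proportional to the running error map; the bookkeeping that updates the error map after each iteration is omitted.

import numpy as np

def sample_rays_by_error(error_map, n_rays, cell_h, cell_w, rng):
    """Draw pixel coordinates with probability proportional to per-cell error.

    error_map: (G_y, G_x) running error per grid cell; cell_h/cell_w: cell size
    in pixels; rng: np.random.Generator, e.g. np.random.default_rng().
    """
    probs = error_map.ravel() / error_map.sum()
    cells = rng.choice(error_map.size, size=n_rays, p=probs)
    gy, gx = np.unravel_index(cells, error_map.shape)
    ys = gy * cell_h + rng.integers(0, cell_h, size=n_rays)
    xs = gx * cell_w + rng.integers(0, cell_w, size=n_rays)
    return xs, ys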
Patch-Sample Stage (P-Stage)
In this stage, we sample rays belonging to an image patch (instead of individual pixels) so that we can apply patch-based perceptual loss <cit.>.
Using a perceptual loss along with per-pixel losses (e.g., L_2) can help reduce image blur, resulting in sharper images and visually better results <cit.>.
§.§ Training Losses
We have a set of images { I^(i)}, i ∈{1, 2, ..., N},
their corresponding tracked meshes {ℳ^(i)}
, UV position maps {G^(i)_o } converted from {ℳ^(i)}, and reconstructed depth maps {D^(i)} using Metashape software <cit.>.
Our training losses include
per-pixel photometric loss ℒ_pho,
patch-based perceptual loss ℒ_per <cit.>,
coarse mesh loss ℒ_m,
two depth losses ℒ_d and ℒ_rd,
and three regularization losses ℒ_s, ℒ_disp and ℒ_kl.
The appearance losses are defined as:
ℒ_pho = ∑_p ∈𝒫 || I^(i)_p - Î^(i)_p ||_2, ℒ_per = LPIPS (I^(i)_𝒫, Î^(i)_𝒫)
where Î is our rendered image, and 𝒫 is a set of image coordinates used to train our networks.
Note that ℒ_per is only used in the last P-stage due to its patch-based property.
The geometric losses, which help generate more controllable head avatars, are defined as:
ℒ_m = || G_o - Ĝ_o ||_2,
ℒ_rd = || (D^(i) - D_rast^(i)) ⊙ M_D_rast ||_1,
ℒ_d = ∑_p ∈𝒫 || (D^(i)_p - D̂^(i)_p) ⊙ M_D ||_1
where Ĝ_o is the decoded 256^2 position map,
and D_rast is a coarse depth map rasterized using Ĝ_o,
D̂ is our fine depth map obtained using volume rendering.
Note that depth masks M_D and M_D_rast are used to only penalize those pixels whose depth errors are less than a depth threshold δ_D (set to 10mm) to handle outliers.
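As a small illustration, a masked L1 depth loss of this form can be written as follows; reducing to a mean over the valid pixels is our assumption.

import numpy as np

def masked_depth_loss(d_gt, d_pred, delta_D=10.0):
    """L1 depth loss restricted to pixels whose error is below delta_D (10 mm)."""
    err = np.abs(d_gt - d_pred)
    mask = err < delta_D
    return err[mask].mean() if mask.any() else 0.0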
We also include three regularization losses to improve our model's generalization ability.
ℒ_disp = || G_d ⊙ M_G_d ||_2 is a regularization term on the displacement map to constrain the final points close to the surface and prevent overfitting,
where M_G_d is a mask to only penalize the points whose displacement values are larger than δ_disp (set to 10mm).
ℒ_s is a Total Variation (TV) loss applied on the 256^2 UV position map G_o to encourage a smooth surface.
ℒ_kl is the common Kullback-Leibler (KL) divergence prior applied on the latent space in VAE training.
In summary, the complete training loss is the weighted sum of these loss terms:
ℒ = λ_phoℒ_pho + λ_perℒ_per + λ_d ℒ_d + λ_rdℒ_rd
+ λ_m ℒ_m + λ_s ℒ_s + λ_dispℒ_disp + λ_klℒ_kl
§.§ Network Structures & Implementation Details
Our network is trained in a variational auto-encoding fashion <cit.> following DAM and PiCA <cit.>.
The encoder comprises 5 and 7 convolution layers, respectively (with the last 5 layers shared), and encodes a UV position map, converted from a coarse tracked mesh (∼5K vertices), together with an average texture map into a latent code z∈ℝ^8 × 8 × 4 as in PiCA <cit.>.
The average texture map is obtained from an open-mouth expression by averaging the unwrapped textures of all camera views.
Note that the tracked mesh does not contain vertices for tongue and teeth.
The decoder contains 5/7/7 convolution layers and predicts a position map, a displacement map, and a feature map, all of which are used for later radiance decoding.
The position map G_p at 1024^2 represents a coarse (i.e., less detailed) surface as it is upsampled from G_o at 256^2, which is supervised by the input coarse mesh ℳ.
The displacement map G_d at 1024^2 increases details to compensate for lost geometric details of the coarse mesh ℳ during estimation.
The 32-D feature map F at 1024^2 contains local appearance information around the point for radiance decoding.
We set loss weights of {λ_pho, λ_per, λ_d, λ_rd, λ_m, λ_s, λ_disp, λ_kl} in Eq. (<ref>) as {5, 0.1, 0.1, 0.2, 0.2, 1, 0.1, 0.001 } respectively.
And the G/E/P-stages take 10/15/5 epochs, respectively.
§ EXPERIMENTS
We test on the Multiface dataset <cit.>.
It is an open-sourced multi-view human face dataset that captures high-quality facial details from a camera array.
Processed data include calibrated camera parameters, tracked meshes, and unwrapped UV texture maps (1024 × 1024).
We obtain a depth map for each frame individually using Metashape <cit.> based on the provided camera parameters.
Following MVP <cit.>, we use downsampled images (1024 × 667) during training.
Unless stated otherwise, all our experiments are trained on a subset of expressions and tested on held-out expressions (15 randomly chosen and then fixed expressions), resulting in ∼11K frames for training and ∼1K frames for testing.
Following PiCA <cit.>, MSE and PSNR are calculated based on image pixels under the rasterized mask for evaluation.
§.§ Comparisons with State-of-the-Art Methods
We compare with DAM, PiCA, and MVP, to demonstrate the superiority of our approach via the rendered images under novel expressions.
In Tab. <ref>, NPVA achieves the best PSNR (up to 1.13 over the 2^nd best).
Fig. <ref> and Fig. <ref> demonstrate that NPVA produces more realistic facial renditions, especially in challenging facial regions (e.g., eyes, hair, mouth interior).
Refer to our Supp. Mat. for video comparisons.
§.§ Ablation Studies
In this section, we present a series of ablation studies to verify the effectiveness of our major design choices.
Effect of using different numbers of points.
We investigate the impact of using different numbers of points on the rendering quality and inference time in Tab. <ref> (a.1)-(a.3).
The visual results are shown in Fig. <ref>.
When the point number is small (i.e., NPVA-256), the model generates poor results (46.45 MSE) and the renderings contain holes due to insufficient point resolution.
With an increased point number (NPVA-1k), we notice an obvious improvement (from 46.45 to 26.36 MSE) and no longer see holes, indicating that the point resolution is sufficient.
Further increasing the point number (NPVA-2k) yields perceptually very similar results and a slightly worse MSE (27.36 vs. 26.36).
Importance of using an extra displacement map.
Introducing an extra displacement map reduces MSE from 26.36 to 23.70 (Tab. <ref> (a.2)).
A visual comparison is provided in Fig. <ref>.
We can also see using a displacement map is much more effective than using more points.
This is because using a displacement map enables a more flexible arrangement of the neural points so that they can not only move on the surface (i.e., along the tangent plane) but also move along the normal direction, forming a thicker shell with increased capacity (see Fig. <ref>).
Influence of the lightweight radiance decoding process.
Using the proposed lightweight radiance decoding in Sec. <ref> not only greatly reduces the inference time (3129 vs. 482 ms), but also improves the quality (23.70 vs 24.51 MSE).
A visual comparison is provided in Fig. <ref>.
A possible explanation is that using networks with too much capacity may cause overfitting and hinder generalization to novel expressions in dynamic scene modeling tasks.
Different shading point sampling methods.
We investigate the impact of different shading point sampling methods on a given ray.
Tab. <ref> (c.1, c.2) present results obtained using different sampling methods.
Our patch-wise depth-guided sampling method reduces MSE from 24.92 to 23.70
compared to the pixel-wise depth-guided sampling (c.2) proposed in <cit.>.
Using “Pixel-Depth” easily causes inaccuracies, especially for the jaw region that contains two different depth levels, resulting in a mesh-like beard rendering (see Fig. <ref>).
Training with a naive sampling strategy is slower and could not give a similar result (406.49 MSE) compared to the depth-guided sampling methods when using only 20 sample points per ray (∼200 in NeRF).
GEP training strategy.
We compare our GEP ray sampling strategy (Sec. <ref>) with a naive alternative that always uses uniformly sampled rays.
GEP achieves lower MSE (23.70 vs. 30.08) in Tab. <ref> (d).
The training loss curves and visual comparisons are shown in Fig. <ref>.
The model trained with the naive strategy converges to a sub-optimal solution.
Although achieving satisfactory renderings in smooth facial regions (e.g., skins), it struggles to handle challenging facial regions (e.g., eyes and mouth interior).
In contrast, the model trained with our GEP strategy allocates more computing budget to these difficult regions and obtains more realistic facial renditions.
§.§ Analysis on Volumetric Methods
We compare with single-frame NeRF fitting <cit.>, which can be viewed as the upper limit of different volumetric avatars.
The results are shown in Tab. <ref> and Fig. <ref>.
On single-frame fitting, our NPVA generates high-fidelity results comparable to NeRF while being ∼ 70 × faster (524 vs. 38392 ms) during inference.
What's more, our NPVA can handle dynamic scenes (a 49-frame sequence) effectively with only a minor performance drop.
In Fig. <ref>, we notice that NPVA also generates visually better results (e.g., sharper and more realistic reflection effects) than NeRF, possibly due to the help of the coarse geometry prior and the perceptual loss.
§ CONCLUSION & DISCUSSION
In this paper, we present a novel volumetric representation based on movable neural points for animatable avatar creation, focusing on both high-quality rendering and time efficiency.
To ensure controllability and accurate expression control, we guide point locations with decoded coarse meshes of target expressions and constrain the points around the surface, which is supervised with the driving signal.
To further enhance rendering quality, we increase the point number and incorporate an additional displacement map that adaptively adjusts after training.
Moreover, our approach features three technical innovations tailored to improve training and rendering efficiency: lightweight radiance decoding, patch-wise depth-guided sampling, and a GEP training strategy.
Limitation.
We rely on coarse mesh tracking for modeling and optimization, which generally works well but does not account for very long hair or diverse hairstyles, such as those not present in tested female subjects. When relaxing the regularization on the displacement map for these cases, our method tends to produce blurry results for novel expressions (Fig. <ref>b).
|
http://arxiv.org/abs/2307.05649v1 | 20230711132516 | Bayesian Poisson Regression and Tensor Train Decomposition Model for Learning Mortality Pattern Changes during COVID-19 Pandemic | [
"Wei Zhang",
"Antonietta Mira",
"Ernst C. Wit"
] | stat.AP | [
"stat.AP"
] |
Bayesian Poisson Regression and Tensor Train Decomposition Model for Learning Mortality Pattern Changes during COVID-19 Pandemic
Wei Zhang, Antonietta Mira, Ernst C. Wit
August 12, 2023
===========================================================================
COVID-19 has led to excess deaths around the world; however, it remains unclear how the mortality of other causes of death has changed during the pandemic. Aiming at understanding the wider impact of COVID-19 on other death causes, we study an Italian data set that consists of monthly mortality counts of different causes from January 2015 to December 2020. Due to the high-dimensional nature of the data, we develop a model which combines conventional Poisson regression with tensor train decomposition to explore the lower-dimensional residual structure of the data. We take a Bayesian approach and impose priors on the model parameters. Posterior inference is performed using an efficient Metropolis-Hastings within Gibbs algorithm. The validity of our approach is tested in simulation studies. Our method not only identifies differential effects of interventions on cause-specific mortality rates through the Poisson regression component, but also offers informative interpretations of the relationship between COVID-19 and other causes of death, as well as latent classes that underlie demographic characteristics, temporal patterns and causes of death, respectively.
Keywords: COVID-19, mortality, tensor decomposition, Bayesian inference
§ INTRODUCTION
Following the outbreak, COVID-19 has led to far-reaching consequences for various aspects of the world <cit.>. Focusing on its impacts on health and health systems, extensive studies have investigated topics such as health inequality as a result of racial and socio-economic status <cit.>, and the adaptation of health care systems in terms of testing, contact tracing and vaccination campaigns <cit.>. Excess mortality due to the pandemic is also under scrutiny, as it gives an overall picture of the impacts the pandemic has on human health through various channels such as government lockdown interventions, disruptions to non-COVID care and so on. The mortality pattern shift compared to the pre-COVID era depends on the potential joint effects of all these factors <cit.>. Even though excess mortality is sufficient to grasp the general picture, it is also of great importance to examine cause-specific mortality changes in the face of the pandemic so that strategies to mitigate similar impacts in the future can be more targeted. For instance, the pandemic may have indirectly led to increases in causes of death including heart disease, diabetes and Alzheimer's disease, as observed by <cit.>. As for non-natural causes of death, <cit.> found that accidental drug-related fatalities increased substantially, while homicide and suicide rates rose only moderately, and motor vehicle collision fatality rates did not greatly decrease during all stages of the lockdown in Ontario. However, evidence also suggests that suicide rates increased during the pandemic <cit.>, whereas deaths related to traffic accidents decreased significantly according to <cit.> and <cit.>. However, it is usually challenging to collect cause-specific mortality data based on death certificates in a consistent manner <cit.>. We analyze the Italian monthly death counts from 2015 to 2020 categorized according to the International Classification of Diseases 10th Revision (ICD-10); see Section <ref> for more data description.
When the count data are assumed to follow Poisson distributions, the Poisson regression model is a good starting point <cit.>. In practice, we can exploit other properties of the data and develop more sophisticated modeling tools in addition to the Poisson regression so that we learn more from the data. This is particularly important when the dimension of the observations is large, when we are not able to observe or collect all relevant covariates, or when we suspect more complicated relationships between covariates and the outcome variable. The Italian mortality data can be rearranged as a multi-way array, or tensor, which facilitates extracting extra information hidden in the data thanks to its well-studied theoretical properties. In fact, tensors have been shown to be a powerful tool in many disciplines such as political science, biology and economics <cit.>; we therefore utilize those properties and combine the Poisson regression with the tensor perspective in this applied work. Our primary interests lie in understanding the effects of covariates, especially government lockdown policies during the pandemic, on the mortality rates of various causes of death through the Poisson regression specification, as well as uncovering further information in the data by inferring latent spaces via the tensor construction. Inferences are made in a Bayesian framework where we impose trivial priors on model parameters and employ a Metropolis within Gibbs sampler to draw posterior samples.
The rest of the paper is organized as follows. In Section <ref>, we formulate the model and elucidate how to obtain dimension reduction via a tensor train decomposition. In Section <ref>, we describe the prior specification and the Markov chain Monte Carlo (MCMC) algorithm for posterior inferences. Results of the simulation studies as well as the real data application are shown in Section <ref> and Section <ref>, respectively. Finally, Section <ref> provides some concluding remarks and future work.
§ BAYESIAN POISSON REGRESSION AND TENSOR TRAIN DECOMPOSITION MODEL FOR COUNT DATA
When high-dimensional data can be organized as tensors, to achieve dimension reduction and exploit inherent structure embedded in the data, researchers have developed numerous decomposition techniques. In this paper, we introduce the tensor train decomposition which has both theoretical and practical advantages <cit.>. In general, an M-dimensional tensor 𝒜 of size Q_1× Q_2×…× Q_M is said to admit a train decomposition if entries a_q_1,q_2,…,q_M of 𝒜 can be expressed as the sum of R_1R_2⋯ R_M-1 terms such that
a_q_1,q_2,…,q_M = ∑_r_1=1^R_1∑_r_2=1^R_2⋯∑_r_M-1=1^R_M-1g^(1)_q_1,r_1g^(2)_q_2,r_1,r_2⋯ g^(M)_q_M,r_M-1.
We call g^(1)_·,·, g^(2)_·,·,·, ⋯, g^(M)_·,· tensor train cores and R_1,R_2,…,R_M-1 the tensor train ranks. The order of the dimensions in the tensor matters, as the decomposition is performed sequentially from the first dimension g^(1)_q_1,r_1 to the last dimension g^(M)_q_M,r_M-1 by construction, and the tensor train cores of a given dimension always depend on the cores of the previous dimension. Therefore it is important to arrange the data in such a tensor structure that the train decomposition is meaningful; see Section <ref>, where we describe and analyze the Italian monthly cause-specific mortality data. The tensor train decomposition has the theoretical advantage that it encompasses any specific tensor decomposition, such as the canonical polyadic (CP) decomposition and the Tucker decomposition, while remaining one of the most stable and simple approaches to summarize high-dimensional data by a limited number of latent variables, hence enabling straightforward interpretation of the results in applications. We value these merits of the tensor train decomposition and employ it in our proposed model, described as follows.
Suppose that we observe count data that can be arranged as a three-way discrete-valued tensor Y_i,t,k of dimension N× T× K and i=1,…,N, t=1,…,T, k=1,…,K. Additionally, we have information on covariates 𝐱_i,t,k∈ℝ^P and offsets u_i,t,k. Classical Poisson regression model assumes that
Y_i,t,k∼Pois(u_i,t,kexp(𝐱_i,t,k·β)).
For the linear Poisson regression model in (<ref>), it is straightforward to infer the relationship between the covariates and the dependent variable. In practice it is unwise to fit the data with a fully saturated model. A fully saturated model can certainly account for all possible interactions between observed covariates in linear form; however, it requires estimating as many parameters as the data dimension, which creates extra computational burden and hinders any meaningful interpretation of the results when the dimension becomes large. Including only a limited subset of covariates and their interactions is more feasible; however, the regression can then potentially fail to account for residual variation in the observed counts Y_i,t,k. It may also be at risk of bias induced by unobserved confounding variables. To address these issues, we propose to combine the current Poisson regression framework with the tensor train decomposition technique to form a new Bayesian Poisson Regression and Tensor Train Decomposition (BPRTTD) model so that we are able to extract more information from the data.
Y_i,t,k∼Pois(u_i,t,kexp(𝐱_i,t,k·β)λ^*_i,t,k).
We assume that the rate λ^*_i,t,k can be expressed according to tensor train decomposition such that
λ^*_i,t,k = ∑_h_1=1^H_1λ^(1)_i,h_1∑_h_2^H_2λ^(2)_t,h_1,h_2λ^(3)_k,h_2
=λ^(1)'_i Λ^(2)_tλ^(3)_k,
where λ^(1)_i=(λ^(1)_i,1,…, λ^(1)_i,H_1)'∈ℝ_+^H_1, λ^(3)_k=(λ^(3)_k,1,…, λ^(3)_k,H_2)∈ℝ_+^H_2 and
Λ^(2)_t=[ λ^(2)_t,1,1 λ^(2)_t,1,2 … λ^(2)_t,1,H_2; λ^(2)_t,2,1 λ^(2)_t,2,2 … λ^(2)_t,2,H_2; ⋮ ⋮ … ⋮; λ^(2)_t,H_1,1 λ^(2)_t,H_1,2 … λ^(2)_t,H_1,H_2 ].
Here the collections {λ^(1)_i}_i=1,…,N, {Λ_t^(2)}_t=1,…,T and {λ^(3)_k}_k=1,…,K are the tensor train cores.
H_1 and H_2 are tensor train ranks and they control the model complexity. When H_1 and H_2 are small relative to N, T and K, this is a parsimonious representation of the rate tensor {λ^*_i,t,k}_i=1,…,N, t=1,…,T, k=1,…,K. Initially, the tensor has N· T · K parameters whereas the number reduces to N· H_1+T· H_1· H_2+K· H_2 after using the tensor decomposition representation.
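A minimal sketch of this reconstruction, mapping the three train cores back to the full rate tensor, is given below (names and shapes are ours).

import numpy as np

def reconstruct_rates(lam1, lam2, lam3):
    """Rebuild the rate tensor lambda* from its tensor train cores.

    lam1: (N, H1), lam2: (T, H1, H2), lam3: (K, H2), all non-negative.
    Returns an (N, T, K) array with entries
    lambda*_{i,t,k} = sum_{h1,h2} lam1[i,h1] * lam2[t,h1,h2] * lam3[k,h2].
    """
    return np.einsum("ia,tab,kb->itk", lam1, lam2, lam3)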
When the data are Poisson counts and are treated as tensors, <cit.> and <cit.> applied the CP decomposition and the Tucker decomposition to enforce dimension reduction and obtain reliable statistical inferences. More recently, the tensor train decomposition has gained popularity. For instance, <cit.> proposed a content request prediction algorithm that employs tensor train decomposition. Motivated by the existing literature, our method combines the classical Poisson regression model and the tensor train decomposition to fully utilize the information in the data. Furthermore, since we are more oriented toward explanatory analysis than toward the predictive performance of the approach, we carefully specify the priors and choose the set of prior hyperparameters to avoid identifiability issues inherent to general latent factor models.
§ PRIOR SPECIFICATION AND POSTERIOR INFERENCE
Due to the complex nature of the model space, we adopt a Bayesian approach to make inferences. Bayesian methods also provide the necessary uncertainty quantification. We impose gamma priors on {λ^(1)_i}_i=1,…,N, {Λ_t^(2)}_t=1,…,T and {λ^(3)_k}_k=1,…,K to exploit the conjugate property of the Poisson parameters; that is
λ^(1)_i,h_1∼Ga(α_a, α_b), i=1,…,N , h_1=1,…,H_1,
λ^(2)_t,h_1,h_2∼Ga(β_a, β_b), t=1,…,T, h_1=1,…,H_1, h_2=1,…,H_2,
λ^(3)_k,h_2∼Ga(ϵ_a, ϵ_b), k=1,…,K, h_2=1,…,H_2.
Posterior inference on these parameters can be obtained by using Gibbs sampling algorithm conditionally on most recent values of other parameters. As for the Poisson regression coefficients β, we follow the literature and assume zero-mean normal priors such that
β_p∼𝒩(0, σ^2), p=1,…,P.
This completes the prior specification for the BPRTTD model. Figure <ref> illustrates the hierarchical graphical representation of the model together with the imposed priors.
Since the normal priors on β are not conjugate, we sample β in an adaptive Metropolis-Hastings step that learns the posterior correlation between the multivariate parameters <cit.>. We outline the MCMC algorithm below.
§.§ Metropolis within Gibbs sampler
We employ a Gibbs sampler for λ_i,h_1, λ_t,h_1,h_2 and λ_k,h_2 given the Poisson regression coefficients β. The Gibbs sampling algorithm augments the state space with variable Y^h_1,h_2_i,t,k such that
Y^h_1,h_2_i,t,k∼Pois(u_i,t,kexp(𝐱_i,t,k·β)λ^(1)_i,h_1λ^(2)_t,h_1,h_2λ^(3)_k,h_2).
Utilizing the closeness under addition property of Poisson random variables, (<ref>) implies that
Y_i,t,k = ∑_h_1=1^H_1∑_h_2=1^H_2Y^h_1,h_2_i,t,k.
To draw Y_i,t,k^h_1,h_2 conditional on Y_i,t,k and λ^(1)_i,h_1, λ^(2)_t,h_1,h_2, λ^(3)_k,h_2, it suffices to note the relationship between the Poisson random variable and the Multinomial random variable, i.e.
(Y_i,t,k^1,1,Y_i,t,k^1,2,…,Y_i,t,k^H_1,H_2) ∼Multi(Y_i,t,k, (π_i,t,k^1,1,π_i,t,k^1,2,…,π_i,t,k^H_1,H_2))
with π_i,t,k^h_1,h_2=λ^(1)_i,h_1λ^(2)_t,h_1,h_2λ^(3)_k,h_2/∑_h_1=1^H_1∑_h_2=1^H_2λ^(1)_i,h_1λ^(2)_t,h_1,h_2λ^(3)_k,h_2. Other useful latent quantities for Gibbs sampler that follows are
Y^h_1,·_i,·,· = ∑_t=1^T∑_k=1^K∑_h_2=1^H_2Y^h_1,h_2_i,t,k
∼Pois(λ^(1)_i,h_1∑_t=1^T∑_k=1^K∑_h_2=1^H_2 u_i,t,kexp(𝐱_i,t,k·β)λ^(2)_t,h_1,h_2λ^(3)_k,h_2),
Y^h_1,h_2_·,t,· = ∑_i=1^N∑_k=1^K Y^h_1,h_2_i,t,k
∼Pois(λ^(2)_t,h_1,h_2∑_i=1^N∑_k=1^K u_i,t,kexp(𝐱_i,t,k·β)λ^(1)_i,h_1λ^(3)_k,h_2),
Y^·,h_2_·,·,k = ∑_i=1^N∑_t=1^T∑_h_1=1^H_1 Y^h_1,h_2_i,t,k
∼Pois(λ^(3)_k,h_2∑_i=1^N∑_t=1^T∑_h_1=1^H_1 u_i,t,kexp(𝐱_i,t,k·β)λ^(1)_i,h_1λ^(2)_t,h_1,h_2).
With these three auxiliary variables, it is easy to derive the full conditional distributions. To update λ^(1)_i,h_1, we draw samples from
λ^(1)_i,h_1 |·∼Ga(α_a+Y^h_1,·_i,·,·, α_b+∑_t=1^T∑_k=1^K∑_h_2=1^H_2 u_i,t,kexp(𝐱_i,t,k·β)λ^(2)_t,h_1,h_2λ^(3)_k,h_2).
Similarly for λ^(2)_t,h_1,h_2 and λ^(3)_k,h_2, the full conditional distributions are
λ^(2)_t,h_1,h_2 |·∼Ga(β_a+ Y^h_1,h_2_·,t,·, β_b+∑_i=1^N∑_k=1^K u_i,t,kexp(𝐱_i,t,k·β)λ^(1)_i,h_1λ^(3)_k,h_2)
λ^(3)_k,h_2 |·∼Ga(ϵ_a+ Y^·,h_2_·,·,k, ϵ_b+∑_i=1^N∑_t=1^T∑_h_1=1^H_1 u_i,t,kexp(𝐱_i,t,k·β)λ^(1)_i,h_1λ^(2)_t,h_1,h_2).
After updating λ_i,h_1, λ_t,h_1,h_2 and λ_k,h_2 in each iteration, β is sampled in a Metropolis-Hastings step with n-step proposal distribution
Q_n(β,·) = (1-p)𝒩(β, (2.38)^2Σ_n/d ) + p𝒩(β, (0.1)^2Σ/d),
where p is a small constant between 0 and 1, Σ_n is the empirical estimate of the covariance matrix of the target posterior distribution based on the run so far, and d is the dimension of β. Σ is a fixed covariance matrix, which we take to be the GLM estimate of the Poisson regression covariance matrix for efficiency.
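To make the augmentation and the conjugate update concrete, the sketch below implements one Gibbs update of the first train core; the updates for λ^(2), λ^(3) and the adaptive Metropolis-Hastings step for β are analogous and omitted, and the vectorization is our own illustrative choice rather than a description of our actual implementation.

import numpy as np

def gibbs_update_lam1(Y, mu_reg, lam1, lam2, lam3, alpha_a, alpha_b, rng):
    """One Gibbs update of lam1 (N, H1) given the other parameters.

    Y: (N, T, K) counts; mu_reg: (N, T, K) regression part u * exp(x'beta);
    lam2: (T, H1, H2); lam3: (K, H2); rng: np.random.Generator.
    """
    N, T, K = Y.shape
    H1, H2 = lam1.shape[1], lam3.shape[1]
    # Multinomial augmentation: split each count over the (h1, h2) components.
    comp = np.einsum("ia,tab,kb->itkab", lam1, lam2, lam3)
    probs = (comp / comp.sum(axis=(3, 4), keepdims=True)).reshape(-1, H1 * H2)
    Y_split = np.stack([rng.multinomial(y, p) for y, p in zip(Y.ravel(), probs)])
    Y_split = Y_split.reshape(N, T, K, H1, H2)
    # Gamma full conditional: shape alpha_a + Y^{h1,.}_{i,.,.}, rate alpha_b + sum.
    shape = alpha_a + Y_split.sum(axis=(1, 2, 4))                       # (N, H1)
    rate = alpha_b + np.einsum("itk,tab,kb->ia", mu_reg, lam2, lam3)    # (N, H1)
    return rng.gamma(shape, 1.0 / rate)          # numpy parameterizes by scale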
§ SIMULATION STUDIES
We conduct two simulation studies to validate the BPRTTD model and the posterior sampling algorithm. In the first simulation study, we artificially simulate true parameters and use these parameters to generate the Poisson observations. Results are reported in the Appendices. However, the dimension of the simulated data is much smaller than what we encounter in the real data application (see Section <ref> for more detailed data description). The reason for this choice is that we are able to repeat the simulations multiple times. Another limitation is that the true parameters β is sampled from a arbitrary normal distribution, and λ_i^(1), i=1,…,N, Λ_t^(2), t=1,…,T, λ_k^(3), k=1,…,K are simulated from a gamma distribution with certain artificial shape and rate, which may not really reflect the typical real data scenario. To address these drawbacks, we design the second simulation study where true parameters are estimated from the real data under the BPRTTD model specification (see Section <ref> for steps regarding posterior inferences). After obtaining the estimates, which we treat as true parameters, we simulate offset from a gamma distribution with shape equal to 10^6 and rate equal to 1. The Poisson observations are then sampled according to (<ref>). We apply our approach to the simulated data and verify whether we are able to recover the true parameters in this high-dimensional and more realistic scenario. We report the summary statistics of absolute percentage error (APE) between the true parameters and their posterior mean estimates in Table <ref>. At least 75% of parameters β, λ_i,h_1^(1), λ_t,h_1,h_2^(2) and λ_k,h_2^(3) in the BPRTTD model are recovered within 40% deviation from the truth using our approach.
§ DRIVERS OF CAUSES OF DEATH IN ITALY FROM 2015 TO 2020
With the aim of understanding the shifting mortality patterns of COVID-19 as well as of other causes of death prior to and during the pandemic outbreak, we apply our method to Italian official mortality data that record provisional monthly death counts of the K=18 causes of death, based on the analysis of the declarations compiled by doctors for all deaths in Italy from January 2015 until December 2020, i.e., T=72 monthly death counts. Table <ref> shows the 18 causes of death under investigation. Furthermore, the death counts are aggregated into N=420 levels formed by 10 age groups, 2 genders and 21 Italian regions. In summary, we observe Y_i,t,k for i=1,…,N, t=1,…,T, k=1,…,K, in total 544,320 observations arranged in an N× T× K multi-way array. A more comprehensive description of the mortality data can be found on https://www.istat.it/it/archivio/240401.
Along with the death counts Y_i,t,k, we also obtain covariates 𝐱_i,t,k. One important variable is the Italian Stringency Index (ISI) presented by <cit.> in the same spirit as the Oxford Stringency Index (OSI) introduced by <cit.>. The data set measures non-pharmaceutical interventions adopted by Italian authorities to tackle the COVID-19 pandemic at both the national and regional levels. Regional level stringency indices are desirable since mortality counts are collected by region. We look into interactions between the ISI and the various causes of death, as the literature suggests that the pandemic can potentially result in differential consequences for other mortality causes. The other two groups of covariates that we include are interactions between age groups and causes of death as well as interactions between age groups and gender. It is well documented that age and gender are important risk factors for many causes of death. Females and males also demonstrate varying mortality patterns at different ages. These interaction terms in total result in 208-dimensional covariates 𝐱_i,t,k in the model. Lastly, the offsets u_i,t,k we include are the number of days in each month, the reported monthly aggregated cases in each region for the COVID-19 death category and the population in each region for all other causes of death. Specifically for external causes of trauma and poisoning, we consider another offset that reflects the mobility level. The index we adopt is the Google COVID-19 Community Mobility Reports (Google LLC "Google COVID-19 Community Mobility Reports". https://www.google.com/covid19/mobility/). By adding the mobility offset into the Poisson rate, we model the change in the mortality rate of external causes of death per fixed mobility unit. The remaining Poisson rate λ^*_i,t,k unaccounted for by the regression component is assumed to have a latent structure with H_1=6 and H_2=6. This choice is tested over varying combinations of H_1 and H_2 on a grid defined by H_1=5,6,7,8 and H_2=5,6,7,8, and we use H_1=6 and H_2=6 to balance reasonable model fitting against model complexity. The Gamma priors on λ^(1)_i,h_1, λ^(2)_t,h_1,h_2, λ^(3)_k,h_2 have parameters α_2 = 20, α_1 = √(1/(H_1 H_2)) α_2,
β_2 = 20, β_1 = √(1/(H_1 H_2)) β_2, ϵ_1 = 200, ϵ_2 = 200. For the Poisson regression coefficients β, we impose centered normal priors with variance 2. The MCMC is run for 40,000 iterations.
§.§ Improvement of the BPRTTD model over the Poisson regression
First, we highlight what the additional tensor decomposition component contributes to fitting the Poisson rate by showing in Figure <ref> how our method complements the GLM estimates in recovering the observed variations in the death counts Y_i,t,k. In these selected trajectories, we can see that the tensor decomposition component adjusts the naive GLM estimates to better follow the observed trajectories. For instance, the GLM predicted death counts of males who reside in Lombardia and died of tumors between ages 80 and 84 are consistently below the observed ones; this is not surprising since the GLM tends to fit the average of all observations, whereas Lombardia, as the most populated region in Italy, has in general larger death counts. Our method successfully closes the gap between the data and the GLM estimates by amplifying the Poisson rates, as shown in Figure <ref>. In cases such as the one in Figure <ref> where the GLM estimates over-predict, λ^*_i,t,k plays the role of downsizing the Poisson rate. Through the tensor decomposition assumption, such adjustments are done in a parsimonious manner. Recall that the saturated model requires in total 544,320 parameters, whereas now, apart from the 208 coefficients, we add only N× H_1+T× H_1× H_2+K× H_2=5,220 more parameters to achieve a substantial improvement in model fitting. This advantage can also be seen when we calculate and compare the log-likelihoods of the simple Poisson regression and of our BPRTTD model, which are -862910.4 and -731919.9 respectively. Even though our approach provides a closer approximation to the observations, it is still robust to outliers or abnormal records, as the model specification leverages information from other data by introducing common shared latent classes. We demonstrate in Figure <ref> such a scenario where the female mortality counts in age group 0-49 in Lazio in August 2016 show a sudden spike deviating from the normal pattern. The red BPRTTD line is not sensitive to such an outlier.
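The log-likelihood comparison quoted above amounts to evaluating the Poisson log-likelihood at the two sets of fitted rates; a minimal sketch (assuming the observed counts and the fitted rate arrays are available) is:

import numpy as np
from scipy.stats import poisson

def total_loglik(y, rate):
    # Poisson log-likelihood of the observed counts under a fitted rate array;
    # rate = u * exp(x'beta) for the plain GLM, and additionally multiplied by
    # lambda* for the BPRTTD model.
    return float(np.sum(poisson.logpmf(y, rate)))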
§.§ Interpretation of the Poisson regression component
We now conduct an explanatory analysis of the Italian mortality data. We are primarily interested in discovering how other causes of death are affected by the government intervention policies. Three types of responses are inferred, positive, negative and no effect, based on whether the 95% credible interval of each coefficient lies above 0, below 0 or contains 0. Mortality counts are positively associated with the ISI in the following death categories: 4. Diseases of the blood and hematopoietic organs and
some disorders of the immune system, 5. Endocrine, nutritional and metabolic diseases, 6. Psychic and behavioral disorders, 7. Diseases of the nervous system and sense organs, 9. Diseases of the respiratory system, 12. Diseases of the musculoskeletal system and connective tissue, 13. Diseases of the genitourinary system, 17. Symptoms, signs, abnormal results and ill-defined causes, 18. External causes of trauma and poisoning. The positive relationship between psychic and behavioral disorders and the ISI, shown in Figure <ref>, is well documented in the literature, affecting psychiatric patients as well as the healthy population <cit.>. However, while most studies report increasing levels of anxiety, acute stress disorders and so on, we offer new evidence that this actually translates into elevated mortality rates from psychic and behavioral disorders. During the pandemic, individuals with psychiatric and behavioral disorders may face additional challenges due to disruptions in routine care, limited access to mental health services, increased stressors and social isolation. These factors can potentially contribute to adverse outcomes and exacerbate existing conditions. Another positive relationship we would like to comment on is between mortality due to respiratory system diseases and the ISI in Figure <ref>. Although a wide range of studies suggest that people with certain lung diseases appear to have an increased risk at the height of the epidemic and that these risk factors are important clinical predictors of severe COVID-19, enabling risk stratification and optimized resource allocation <cit.>, we discover that, conversely, the mortality rate of respiratory diseases rises during COVID-19 lockdowns despite the common observation that respiratory disease incidence declined due to public precautionary measures <cit.>. Several factors can jointly explain the positive relationship. For instance, lockdown measures can disrupt the routine care and monitoring of respiratory conditions; as a result, a lack of timely interventions and preventive measures can contribute to a higher risk of mortality. Misclassification can also explain the increasing mortality. In the early pandemic, diagnosing the cause of death accurately can be complex, especially when healthcare systems are under strain; limited testing capacity or availability of COVID-19 tests also makes it more likely that deaths are attributed to respiratory diseases without confirming the presence of COVID-19. As for the mortality rate of external causes of trauma and poisoning in Figure <ref>, it may seem contradictory that this also trended up as more intense lockdown measures were enforced. However, since we include the mobility index in the offset, which disentangles the negative effect of lockdown on population mobility from the total effect, we conclude that government intervention policies actually drive up mortality per mobility unit, due to reasons such as delayed or reduced access to healthcare.
Negative correlations appear in 2. Some infectious and parasitic diseases, see Figure <ref>, 3. Tumors, see Figure <ref>, 8. Diseases of the circulatory system, 10. Diseases of the digestive system, 14. Complications of pregnancy, childbirth and the puerperium, 15. Some morbid conditions that originate in the perinatal period, 16. Congenital malformations and chromosomal anomalies. It has been observed that infectious and parasitic diseases caused less mortality when government interventions were stricter <cit.>. Measures such as lockdowns, travel restrictions and social distancing can help limit the spread of infectious diseases. By reducing contact between individuals, these measures can interrupt the transmission of infectious agents, thereby decreasing the overall incidence of infections and subsequent mortality. As for the decrease in the tumor mortality rate, one possible explanation is the harvesting effect, also known as mortality displacement <cit.>. The harvesting effect refers to the phenomenon in which individuals who are already vulnerable, in this case tumor patients, experience accelerated deaths during the COVID-19 lockdown intervention, leading to a temporary decline in tumor mortality rates. However, this decline is expected to be followed by a period of increased mortality as those who would have died during the intervention succumb in the subsequent period. Figure <ref> shows the only category, 11. Diseases of the skin and subcutaneous tissue, that exhibits no statistically significant relationship with the ISI. From Figure <ref>, we can also assess the effect of gender and age on the hazard rates. In general, the older population is associated with higher mortality in almost all types of death causes, and men are more likely to die than women in the same age group. The exception is with tumors, where men from certain younger age groups present higher mortality rates compared to women from older age groups. It may also seem counter-intuitive that the mortality rate due to external causes of trauma and poisoning is positively related to age. Even though the data confirm that the absolute death counts do go down with age, after taking into account the population size of each age group, the mortality rates per unit of population indeed increase with age, indicating that external causes of trauma and poisoning become more threatening as people get older. For detailed coefficient estimates, please refer to the Appendices.
§.§ Interpretation of the latent parameters
Three blocks of latent parameters are introduced in the BPRTTD model and they are arranged in a dependent structure; that is, each latent class λ^(1)_i,h_1, h_1=1,…,H_1 is characterized by different
λ^(2)_t,h_1,h_2, and furthermore h_2-specific λ^(3)_k,h_2, h_2=1,…,H_2. We therefore approach the interpretation of the latent parameters in an orderly manner. We start with the first block of latent parameters λ^(1)_i,h_1, which allocate the demographic groups defined by Italian regions, gender and age groups into H_1 latent classes. Table <ref> in the Appendices shows the posterior mean estimates of λ^(1)_i,h_1, and we highlight in red values above the mean α_1/α_2 of the Gamma prior distribution. Note that, since we already have a Poisson regression component that accounts for global linear relationships between covariates and death rates, what we see in the estimates of λ^(1)_i,h_1 indicates differential local effects of higher-order interactions between covariates on mortality rates unexplained by the linear regression. It is clear that although the latent classes labeled by h_1=1 and h_1=4 represent mainly female and male mortality patterns respectively, they appear to be geographically dependent. For instance, almost all female age groups, except for older females (age group 85+) from southern Italy (Molise, Campania, Apulia, Basilicata, Calabria, Sicily), show elevated weights in latent class h_1=1 in Table <ref>, whereas the same older female southern Italian population shares similar mortality patterns with almost all male groups, excluding those in northern Italy (Piemonte, Valle d’Aosta, Lombardia, Veneto, Friuli-Venezia Giulia, Emilia-Romagna), as shown in Table <ref>. Latent class h_1=6, on the other hand, suggests that older males and younger females share something in common in their causes of death over time, captured by λ^(2)_t,6,h_2 and λ^(3)_k,h_2. The remaining three latent classes indexed by h_1=2,3,5 are less related to age and gender but show more geographical dependence. So before we move on to the analysis of the latent parameters λ^(2)_t,h_1,h_2 and λ^(3)_k,h_2, we make another attempt to decipher the local joint effect of regions, age and gender. To do this, we first rearrange the posterior mean estimates of λ^(1)_i,h_1, i=1,…,N, h_1=1,…,H_1 into a new matrix of dimension 21× (2× 10× H_1), where 21 is the number of Italian regions and 2 and 10 are the numbers of gender and age groups. We then treat the 21 Italian regions as observations, with gender, age groups and the H_1 latent classes as features, and apply the partitioning around medoids (PAM) algorithm to classify the Italian regions based on these features (see the sketch after this paragraph). The optimal number of clusters is 4 according to the elbow method. The clustering algorithm confirms the previous observations. Figure <ref> shows that northern Italy plus Toscana, Umbria and Marche is classified in a different group from southern Italy plus Lazio and excluding Campania, Calabria and Sicily. Although the two clusters have similar weights in latent class h_1=6, the differences mainly exist in latent class h_1=1 for the female population in the northern Italy cluster and h_1=4 for older females in the southern Italy cluster. The PAM algorithm also separates the conventional classification of southern Italy further into two groups that exhibit homogeneous behavior when looking at latent classes h_1=4 and h_1=6, but differ quite substantially in the latent classes indexed by h_2, see Figure <ref>. Lastly, Liguria is singled out to form its own cluster because latent class h_1=2 plays a rather significant role in defining the mortality pattern over time in that region.
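The region clustering described above can be reproduced with a few lines (a sketch; the row ordering of the posterior-mean matrix and the use of the scikit-learn-extra KMedoids implementation are our own assumptions, and any PAM implementation would serve):

import numpy as np
from sklearn_extra.cluster import KMedoids   # any PAM implementation would do

def cluster_regions(lam1_hat, n_regions=21, n_gender=2, n_age=10, H1=6, k=4):
    # lam1_hat: posterior means of lambda^(1), rows assumed ordered by
    # (region, gender, age group); reshape to a 21 x (2*10*H1) feature matrix.
    features = lam1_hat.reshape(n_regions, n_gender * n_age * H1)
    pam = KMedoids(n_clusters=k, random_state=0).fit(features)
    return pam.labels_    # one cluster label per Italian region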
We proceed to analyze together the second-layer latent parameters λ^(2)_t,h_1,h_2 and the third-layer latent parameters λ^(3)_k,h_2, as they jointly identify the corresponding latent classes labeled by h_1. λ^(2)_t,h_1,h_2 is the block of parameters associated with the time indices, so we display the posterior mean estimates of λ^(2)_t,h_1,h_2 as trajectories evolving over time in Figure <ref>; on the other hand, λ^(3)_k,h_2 utilizes H_2 latent structures to summarize the 18 causes of death, as shown in Figure <ref>. We begin with latent class h_1=1 shown in Figure <ref>, which is significant for almost all female age groups except for the older population in southern Italy. Two trajectories are more relevant in this class, and they are characterized by the mortality rates λ^(3)_k,5 in Figure <ref> and λ^(3)_k,6 in Figure <ref>. λ^(3)_k,5 mostly captures COVID-19 mortality, and the trajectory λ^(2)_t,1,5 in Figure <ref> indicates a sudden weight spike of this particular latent class h_2=5 around June 2020. This is when the pandemic situation eased between the first and the second wave, so that the daily new cases were almost in the single digits; the time lag between contracting COVID-19 in the previous wave and dying of COVID-19 potentially results in the spike that we observe. We will see later another type of weight spike with respect to latent class h_2=5. λ^(2)_t,1,5 is also active from January 2015 to July 2016. However, since during this period the new cases offset in the BPRTTD model is exactly 0, the dominating factor is no longer COVID-19 death, but possibly the other cause in λ^(3)_k,5 higher than the prior mean, namely some infectious and parasitic diseases. Trajectory λ^(2)_t,1,6 has the opposite behavior to λ^(2)_t,1,5; that is, it is squeezed out when the latter is high and bounces back when the latter is low. When we inspect λ^(3)_k,6 in Figure <ref>, the following causes of death have rising weights: 2. Some infectious and parasitic diseases, 6. Psychic and behavioral disorders, 7. Diseases of the nervous system and sense organs, 10. Diseases of the digestive system, 11. Diseases of the skin and subcutaneous tissue and 12. Diseases of the musculoskeletal system and connective tissue. In the Poisson regression component, we observed a positive global main effect of COVID lockdown measures on the mortality rate of psychic and behavioral disorders; however, the squeezing phenomenon does not contradict our previous arguments. In fact, since we are discussing latent class h_1=1, which is crucial for the female population except for older women in southern Italy, it actually suggests a local compensation effect specific to this demographic group.
We have commented above that latent class h_1=2 is unique to three southern Italian regions, Campania, Calabria and Sicily, and now we see that the determining trajectory λ^(2)_t,2,1 in Figure <ref> has high estimated rates for the causes of death 5. Endocrine, nutritional and metabolic diseases as well as 17. Symptoms, signs, abnormal results and ill-defined causes in Figure <ref>. It shows strong seasonality with peaks in both winter and summer. Endocrine, nutritional and metabolic diseases have been documented to be related to winter holidays <cit.> and heat exposure <cit.>. On the other hand, the seasonality of symptoms, signs, abnormal results and ill-defined causes in these three regions may consist of misclassified deaths related to seasonal illnesses. Latent class h_1=3 is almost exclusively explanatory for females older than 85 in northern Italy and some male age groups in the south. The class portrays a pattern in which the COVID-19 mortality rate goes through two spikes, in June 2020 and October 2020, in Figure <ref>. As stated in the previous paragraph, the spike at the end of the first wave is possibly due to the lag between contracting COVID-19 and death; the spike in the mortality rate in October anticipates the onset of the second COVID-19 wave. This can be the outcome of many factors: for instance, even though Italy had gone through the first wave, in the face of the second wave testing and reporting of COVID-19 cases were still insufficient, leading to an underestimate of the real case numbers. The health system was also not thoroughly prepared to combat the much more intense comeback of COVID-19 in the coming fall and winter. We distinguish two types of displacement between the case peak and the mortality peak. The first type is usually seen after a previous COVID-19 wave and is due to the time lag between contracting COVID-19 and final death, whereas the second type anticipates the incoming COVID-19 hit, which is particularly true in 2020 when society and the health system were seriously underprepared to tackle the pandemic. Additionally, recall that this represents local effects for females older than 85 in northern Italy and certain male age groups in the south, suggesting that underpreparedness is particularly detrimental to those people. We also notice that the trajectories of all other causes of death are crowded out by λ^(2)_t,3,5 in 2020, offering evidence for the hypothesis that a harvesting effect exists. Next, latent class h_1=4 underlies the mortality composition of the young male population in the north and of the male and female population in the south. The essential feature of this class is the downward trend of trajectory λ^(2)_t,4,3 displayed in Figure <ref>. A closer look at Figure <ref> reveals that 5. Endocrine, nutritional and metabolic diseases, 8. Diseases of the circulatory system and 18. External causes of trauma and poisoning are the three causes that define the mortality structure in λ^(3)_k,3. The trajectories indicate that these three mortality causes tend to be seasonal. Although we have briefly commented on the seasonality of mortality due to endocrine, nutritional and metabolic diseases observed in Campania, Calabria and Sicily, we stress that the seasonality is distinct for the young male population in the north and for the male and female population in the south outside the three regions just mentioned. The spikes are generally less drastic in this second demographic group.
For instance, when a heatwave hit Campania, Calabria and Sicily in the summer of 2017, causing a noticeable increase in the number of people dying of endocrine, nutritional and metabolic diseases, the situation was less severe in the north. Another observation worth pointing out is that endocrine, nutritional and metabolic diseases are more lethal for the older female population, as indicated in Table <ref> and Table <ref> in the Appendices. Diseases of the circulatory system are causes of death whose seasonality has been widely studied as well, and our findings concur with previous findings in the literature <cit.>. Lastly, the seasonality of external causes of trauma and poisoning may largely be attributed to increasing traffic accidents in the winter and outdoor activities in the summer.
Latent class h_1=5 in Figure <ref>, which is primarily significant for both males and females in northern Italy, has two major attributes. One is that the trajectory λ^(2)_t,5,2, representing the mortality rate of 9. Diseases of the respiratory system, shows an abnormal spike around March and April 2020, when the health system was overwhelmed in northern Italy and many COVID-19 deaths were misclassified. A similar argument was made when we interpreted the coefficients of the Poisson regression component of the BPRTTD model. The other attribute that characterizes this latent class is λ^(2)_t,5,5, with its two peaks first in February and then in July 2020. Both types of displacement of the COVID mortality rate appear. Almost all males between the ages of 50 and 89 and females between 70 and 94 in northern Italy, except for Veneto and Friuli-Venezia Giulia, experience the second type of displacement and are subject to an elevated mortality rate of dying of COVID-19 at the beginning of the first wave (February and March 2020). On the contrary, the second type occurs for the older female population in northern Italy and certain male age groups in the south only at the beginning of the second wave, as previously illustrated. We close the analysis by commenting on latent class h_1=6 shown in Figure <ref>, which features a constant trend of λ^(2)_t,6,4, defined mostly by tumors and respiratory diseases as shown in Figure <ref>. Another relevant trajectory, λ^(2)_t,6,2, captures the expected seasonality of respiratory disease deaths. This is the mortality structure shared by the older male population and the female population under 69 across almost all Italian regions.
§ SUMMARY AND FUTURE WORK
In this paper, we propose to model Poisson count data using the BPRTTD model. The model comprises two parts: the first part is the Poisson regression model. In the second part, the data are organized as a tensor and we apply tensor train decomposition to estimate a latent parameter space for explanatory purposes. The Bayesian inference framework is validated in two simulation studies and then applied to the Italian monthly cause-specific mortality data from January 2015 to December 2020. The regression component leverages the information in the covariates, and we are able to identify causes of death that are positively, negatively and not related to government interventions during the COVID-19 pandemic. We also discover the joint effects of age, gender and causes of death on the mortality rate via the tensor decomposition component, which compensates for what the Poisson regression fails to account for. It enables a further stratification of demographic profiles, characterized jointly by geographical location, gender and age, based on their unique dynamic mortality structures over the time span. Regional classifications are made and the results coincide with conventional conceptions. COVID-19 related consequences are also revealed in the latent parameters. Several causes of death, including infectious and parasitic diseases and psychic and behavioral disorders, compete with COVID-19 mortality among specific demographic groups.
In the BPRTTD model, we have not fully exploited the spatio-temporal information in the data. For instance, instead of applying clustering algorithms to the posterior estimates, one can introduce reasonable distance measures and utilize the geographic locations encoded in λ_i,h_1^(1) when specifying the model. λ_t,h_1,h_2^(2) can also be modeled in a time series framework so that the temporal dependence can be inferred. Another possible future direction concerns the proper choice of the tensor train ranks in the BPRTTD model, which plays an important role in controlling the model complexity. The model selection can be accomplished by calculating marginal likelihoods over a pre-specified grid of tensor train ranks. Due to the increased computational burden this solution would require, we leave its exploration to future work.
§ APPENDICES
§ SIMULATION STUDY WHERE PARAMETERS ARE ARTIFICIAL GENERATED
In the first experiment, we simulate count data from a Poisson distribution with rate parameters generated according to the BPRTTD model. In this step, we fix N=20, T=20 and K=20 and the tensor train decomposition ranks H_1=H_2=5. We include one intercept plus P=5 covariates whose regression coefficients β are sampled from a normal distribution with mean 0 and variance 0.1. λ^(1)_i,h_1 are generated from a Gamma distribution with α_a=1 and α_b=2.8. λ^(2)_t,h_1,h_2 and λ^(3)_k,h_2 are simulated from the same Gamma distribution as well. Then, with fixed parameter values, we generate covariates 𝐱_i,t,k from a standard normal distribution and offsets u_i,t,k from a Gamma distribution with shape and rate equal to 5 and 1. We repeat the simulation 100 times. In each repetition, the observed data are simulated according to (<ref>). Finally, we apply the BPRTTD model to the simulated Y_i,t,k. In this step, we assume that the true latent dimensions H_1 and H_2 are known, and we set the parameters of the prior distributions to the following values: α_a=1, α_b=1, β_a=1, β_b=2, ϵ_a=1, ϵ_b=1. The prior variance of β is 0.1. The probability p of proposing from the fixed-covariance component of the adaptive Metropolis-Hastings proposal used to update the regression coefficients is set to p=0.05. We run the MCMC for 10,000 iterations and discard the first 3,000 iterations. Results of the comparison between the true parameter values and the estimated ones are shown in Table <ref> and Tables <ref>-<ref>.
In Table <ref>, the coefficients associated with the simulated covariates 𝐱_i,t,k are estimated accurately, with small standard deviations over the 100 repetitions of the simulation study. The estimated intercept has a higher standard deviation. This is due to the identifiability issue between the intercept and λ^(1)_i,h_1, λ^(2)_t,h_1,h_2, λ^(3)_k,h_2 inherent to the BPRTTD model, as these parameters multiply each other and contribute jointly to the Poisson rate. A careful choice of the prior parameters helps overcome the identifiability problem and facilitates our goal of interpreting the factors λ^(1)_i,h_1, λ^(2)_t,h_1,h_2, λ^(3)_k,h_2. In fact, Tables <ref>-<ref> show how these parameters are recovered using our method. For λ^(1)_i,h_1, the difference between the 100 true values and their estimates by posterior means has mean 0.0148. This number is 0.0562 and -0.0675 for λ^(2)_t,h_1,h_2 and λ^(3)_k,h_2 respectively, validating our approach's ability to recover parameters for further analysis.
§ POISSON REGRESSION COEFFICIENTS
The following table displays the posterior mean estimates of the Poisson regression coefficients in the BPRTTD model as well as the 95% credible intervals.
Coefficients Posterior mean 95% CI lower bound 95% CI upper bound Coefficients Posterior mean 95% CI lower bound 95% CI upper bound Coefficients Posterior mean 95% CI lower bound 95% CI upper bound Coefficients Posterior mean 95% CI lower bound 95% CI upper bound
Intercept -16.6495 -16.6527 -16.6462 causes8:age50-59 0.8084 0.8063 0.8110 causes9:age70-74 1.9028 1.9003 1.9054 causes10:age85-89 1.9912 1.9871 1.9961
isi 0.0622 0.0622 0.0623 causes9:age50-59 0.6622 0.6597 0.6653 causes10:age70-74 1.0739 1.0697 1.0788 causes11:age85-89 3.5340 3.5208 3.5482
causes2 -0.6237 -0.6263 -0.6212 causes10:age50-59 0.8108 0.8066 0.8155 causes11:age70-74 1.8440 1.8273 1.8612 causes12:age85-89 2.2547 2.2510 2.2590
causes3 1.8678 1.8639 1.8717 causes11:age50-59 0.9046 0.8903 0.9187 causes12:age70-74 1.1411 1.1357 1.1473 causes13:age85-89 3.8594 3.8573 3.8618
causes4 -2.0146 -2.0214 -2.0086 causes12:age50-59 0.5281 0.5240 0.5331 causes13:age70-74 2.0869 2.0844 2.0897 causes14:age85-89 -3.1789 -4.1169 -2.3403
causes5 -0.5788 -0.5821 -0.5758 causes13:age50-59 0.9430 0.9406 0.9456 causes14:age70-74 -5.2264 -6.4161 -4.2635 causes15:age85-89 -6.4308 -6.4654 -6.3910
causes6 -1.4547 -1.4584 -1.4512 causes14:age50-59 -3.1612 -3.1970 -3.1241 causes15:age70-74 -6.3054 -6.3257 -6.2844 causes16:age85-89 -2.1175 -2.1227 -2.1124
causes7 -0.2957 -0.2997 -0.2917 causes15:age50-59 -4.1632 -4.1697 -4.1555 causes16:age70-74 -2.0003 -2.0094 -1.9908 causes17:age85-89 1.3042 1.3003 1.3087
causes8 1.0847 1.0827 1.0866 causes16:age50-59 -0.8207 -0.8271 -0.8139 causes17:age70-74 -0.4288 -0.4344 -0.4225 causes18:age85-89 0.2321 0.2288 0.2358
causes9 -0.6687 -0.6706 -0.6667 causes17:age50-59 -0.2131 -0.2152 -0.2105 causes18:age70-74 -1.0744 -1.0783 -1.0699 causes2:age90-94 2.8457 2.8421 2.8501
causes10 -0.2185 -0.2217 -0.2157 causes18:age50-59 -0.6158 -0.6190 -0.6121 causes2:age75-79 1.1387 1.1349 1.1432 causes3:age90-94 2.1375 2.1334 2.1422
causes11 -4.3441 -4.3581 -4.3305 causes2:age60-64 0.7662 0.7627 0.7700 causes3:age75-79 1.4602 1.4556 1.4656 causes4:age90-94 3.0483 3.0421 3.0555
causes12 -2.3376 -2.3408 -2.3349 causes3:age60-64 1.5958 1.5922 1.5999 causes4:age75-79 0.8348 0.8273 0.8435 causes5:age90-94 3.6021 3.5989 3.6061
causes13 -2.4258 -2.4266 -2.4250 causes4:age60-64 0.4112 0.4077 0.4152 causes5:age75-79 1.8020 1.7975 1.8074 causes6:age90-94 4.7759 4.7716 4.7809
causes14 -4.3635 -4.3723 -4.3543 causes5:age60-64 1.4999 1.4961 1.5044 causes6:age75-79 1.8753 1.8706 1.8808 causes7:age90-94 3.1840 3.1801 3.1885
causes15 -0.2488 -0.2517 -0.2457 causes6:age60-64 0.8541 0.8488 0.8599 causes7:age75-79 1.6761 1.6709 1.6823 causes8:age90-94 4.2905 4.2882 4.2933
causes16 -0.5248 -0.5302 -0.5194 causes7:age60-64 0.9691 0.9661 0.9725 causes8:age75-79 1.9453 1.9419 1.9492 causes9:age90-94 4.4775 4.4752 4.4803
causes17 -0.0979 -0.0999 -0.0958 causes8:age60-64 1.4578 1.4562 1.4597 causes9:age75-79 2.3313 2.3286 2.3347 causes10:age90-94 2.8103 2.8061 2.8152
causes18 -3.1404 -3.1431 -3.1377 causes9:age60-64 1.5201 1.5192 1.5213 causes10:age75-79 1.2609 1.2573 1.2653 causes11:age90-94 4.6064 4.5920 4.6218
age50-59 1.1601 1.1568 1.1628 causes10:age60-64 1.2630 1.2594 1.2670 causes11:age75-79 2.2776 2.2641 2.2915 causes12:age90-94 3.2598 3.2566 3.2637
age60-64 1.2391 1.2361 1.2417 causes11:age60-64 1.6019 1.5821 1.6238 causes12:age75-79 1.4373 1.4326 1.4431 causes13:age90-94 4.8899 4.8881 4.8919
age65-69 1.6878 1.6830 1.6921 causes12:age60-64 1.1331 1.1290 1.1377 causes13:age75-79 2.5700 2.5670 2.5734 causes14:age90-94 -11.6281 -12.6193 -10.6308
age70-74 2.2840 2.2790 2.2883 causes13:age60-64 1.7587 1.7559 1.7619 causes14:age75-79 -2.6592 -3.4425 -2.0250 causes15:age90-94 -7.1391 -8.4802 -5.9674
age75-79 2.6251 2.6202 2.6292 causes14:age60-64 -3.9336 -5.1843 -2.8401 causes15:age75-79 -6.8405 -6.8979 -6.7823 causes16:age90-94 -1.4710 -1.4774 -1.4636
age80-84 3.0926 3.0875 3.0970 causes15:age60-64 -4.5475 -4.5634 -4.5321 causes16:age75-79 -2.3272 -2.3337 -2.3197 causes17:age90-94 2.8321 2.8292 2.8355
age85-89 3.1920 3.1873 3.1962 causes16:age60-64 -0.5220 -0.5274 -0.5170 causes17:age75-79 -0.2369 -0.2406 -0.2326 causes18:age90-94 1.2644 1.2612 1.2682
age90-94 2.9043 2.9002 2.9077 causes17:age60-64 -0.0207 -0.0212 -0.0202 causes18:age75-79 -0.8145 -0.8179 -0.8105 causes2:age95+ 4.0183 4.0125 4.0251
age95+ 2.0729 2.0680 2.0772 causes18:age60-64 -0.5913 -0.5937 -0.5886 causes2:age80-84 1.4811 1.4771 1.4859 causes3:age95+ 3.0521 3.0477 3.0572
sexmale 0.5431 0.5423 0.5437 causes2:age65-69 0.7445 0.7424 0.7470 causes3:age80-84 1.4177 1.4130 1.4231 causes4:age95+ 4.5687 4.5607 4.5781
isi:causes2 -0.0642 -0.0642 -0.0642 causes3:age65-69 1.5623 1.5575 1.5677 causes4:age80-84 1.1926 1.1850 1.2016 causes5:age95+ 4.9195 4.9142 4.9259
isi:causes3 -0.0634 -0.0634 -0.0634 causes4:age65-69 0.4822 0.4714 0.4948 causes5:age80-84 2.0797 2.0753 2.0849 causes6:age95+ 6.3040 6.2993 6.3093
isi:causes4 -0.0618 -0.0619 -0.0618 causes5:age65-69 1.5700 1.5653 1.5754 causes6:age80-84 2.6113 2.6070 2.6165 causes7:age95+ 4.2865 4.2815 4.2923
isi:causes5 -0.0607 -0.0607 -0.0607 causes6:age65-69 0.8514 0.8467 0.8570 causes7:age80-84 1.9817 1.9770 1.9872 causes8:age95+ 5.8169 5.8139 5.8205
isi:causes6 -0.0614 -0.0614 -0.0613 causes7:age65-69 1.1071 1.1011 1.1139 causes8:age80-84 2.3790 2.3758 2.3828 causes9:age95+ 5.9212 5.9184 5.9248
isi:causes7 -0.0613 -0.0613 -0.0612 causes8:age65-69 1.5204 1.5172 1.5241 causes9:age80-84 2.7303 2.7271 2.7341 causes10:age95+ 4.0414 4.0375 4.0459
isi:causes8 -0.0634 -0.0635 -0.0634 causes9:age65-69 1.7369 1.7336 1.7406 causes10:age80-84 1.4668 1.4625 1.4719 causes11:age95+ 6.2110 6.1931 6.2306
isi:causes9 -0.0611 -0.0611 -0.0610 causes10:age65-69 1.1631 1.1590 1.1679 causes11:age80-84 2.7364 2.7226 2.7509 causes12:age95+ 4.7972 4.7923 4.8030
isi:causes10 -0.0636 -0.0636 -0.0636 causes11:age65-69 1.7110 1.7003 1.7220 causes12:age80-84 1.7016 1.6972 1.7068 causes13:age95+ 6.2951 6.2933 6.2972
isi:causes11 -0.0622 -0.0623 -0.0622 causes12:age65-69 1.2160 1.2120 1.2207 causes13:age80-84 3.0390 3.0357 3.0426 causes14:age95+ -1.3673 -2.8763 -0.0799
isi:causes12 -0.0613 -0.0613 -0.0613 causes13:age65-69 1.9535 1.9495 1.9580 causes14:age80-84 -7.7412 -8.3177 -7.1522 causes15:age95+ -4.7655 -4.7879 -4.7441
isi:causes13 -0.0613 -0.0613 -0.0612 causes14:age65-69 -2.9935 -4.1278 -2.0717 causes15:age80-84 -6.4995 -6.5073 -6.4923 causes16:age95+ -0.3515 -0.3545 -0.3483
isi:causes14 -0.0676 -0.0678 -0.0673 causes15:age65-69 -5.0895 -5.1082 -5.0735 causes16:age80-84 -2.4258 -2.4336 -2.4168 causes17:age95+ 4.9397 4.9366 4.9433
isi:causes15 -0.0648 -0.0648 -0.0648 causes16:age65-69 -1.3104 -1.3187 -1.3012 causes17:age80-84 0.2797 0.2759 0.2843 causes18:age95+ 2.6959 2.6925 2.6996
isi:causes16 -0.0628 -0.0629 -0.0628 causes17:age65-69 -0.2727 -0.2760 -0.2689 causes18:age80-84 -0.4742 -0.4781 -0.4696 age50-59:sexmale 0.0042 0.0036 0.0049
isi:causes17 -0.0547 -0.0547 -0.0547 causes18:age65-69 -0.9068 -0.9110 -0.9019 causes2:age85-89 2.0728 2.0693 2.0770 age60-64:sexmale 0.0656 0.0653 0.0660
isi:causes18 -0.0598 -0.0598 -0.0598 causes2:age70-74 0.7922 0.7892 0.7956 causes3:age85-89 1.6626 1.6577 1.6679 age65-69:sexmale 0.0853 0.0850 0.0857
causes2:age50-59 0.5114 0.5091 0.5141 causes3:age70-74 1.4096 1.4046 1.4152 causes4:age85-89 1.9703 1.9619 1.9802 age70-74:sexmale 0.0579 0.0575 0.0584
causes3:age50-59 0.9742 0.9713 0.9776 causes4:age70-74 0.4786 0.4725 0.4857 causes5:age85-89 2.6874 2.6836 2.6918 age75-79:sexmale -0.0080 -0.0085 -0.0074
causes4:age50-59 0.0313 0.0236 0.0400 causes5:age70-74 1.5377 1.5339 1.5421 causes6:age85-89 3.5980 3.5931 3.6036 age80-84:sexmale -0.1175 -0.1184 -0.1164
causes5:age50-59 0.7903 0.7860 0.7953 causes6:age70-74 1.1036 1.0975 1.1105 causes7:age85-89 2.5000 2.4955 2.5050 age85-89:sexmale -0.2113 -0.2119 -0.2106
causes6:age50-59 0.3711 0.3687 0.3741 causes7:age70-74 1.2139 1.2093 1.2192 causes8:age85-89 3.1934 3.1904 3.1970 age90-94:sexmale -0.3004 -0.3014 -0.2994
causes7:age50-59 0.3307 0.3275 0.3344 causes8:age70-74 1.5491 1.5456 1.5532 causes9:age85-89 3.4762 3.4733 3.4795 age95+:sexmale -0.4041 -0.4049 -0.4031
Posterior mean estimates β̂ of the BPRTTD model with 95% credible intervals.
§ LATENT PARAMETERS
The following table displays the posterior mean estimates of the tensor train decomposition parameters λ^(1)_i,h_1 in the BPRTTD model.
http://arxiv.org/abs/2307.05542v2 | 20230708193421 | Geometric parametrization of $SO(D+1)$ phase space of all dimensional loop quantum gravity: II. Beyond the simplicity constraint surface | [
"Gaoping Long"
] | gr-qc | [
"gr-qc"
] |
The regularization of the scalar constraint and the Fermion coupling problem indicate that it is necessary to consider some kind of gauge fixing method to deal with the simplicity constraint in all dimensional SO(D+1) loop quantum gravity. A coherent state with well-behaved peakedness properties is an essential ingredient for carrying out such a gauge fixing method. To provide the basic tool for constructing this kind of coherent state, we generalize the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space of (1+D)-dimensional loop quantum gravity from the edge simplicity constraint surface to a dense subspace in the SO(D+1) holonomy-flux phase space. The symplectic structure on the twisted geometric parameter space and the Poisson structure in terms of the twisted geometric variables are analyzed. Besides, we discuss the relation between the two twisted geometry parametrizations constructed respectively on the
edge simplicity constraint surface and the dense subspace of the SO(D+1) holonomy-flux phase space. Our results show that these two types of parametrizations are equivalent to each other upon carrying out the gauge reduction with respect to the edge simplicity constraint.
§ INTRODUCTION
As a non-perturbative and background-independent approach to unify general relativity (GR) and quantum
mechanics, loop quantum gravity (LQG) has made remarkable progress in several aspects <cit.><cit.><cit.><cit.>. For instance, various symmetry-reduced models are established in the framework of LQG to give the resolution of singularities <cit.>, and various attempts are made in the framework of
LQG to account for the BH entropy <cit.>.
Loop quantum gravity in all-dimensional spacetime is also of interest because of its potential for absorbing valuable ideas (e.g. supersymmetry and extra dimensions <cit.>) from other gravity theories into the loop quantization framework of GR. The loop quantization approach for GR in all dimensions was first developed by Bodendorfer, Thiemann and Thurn <cit.><cit.><cit.>. In detail, the all dimensional LQG is based on the connection formulation of (1+D)-dimensional GR in the form of an SO(D+1) Yang-Mills theory, with the kinematic phase space coordinatized by the canonical pairs (A_aIJ,π^bKL), consisting of the spatial SO(D+1) connection fields A_aIJ and the vector fields π^bKL. In this formulation, the theory is governed by the first-class system of the SO(D+1) Gaussian constraints, the (D+1)-dimensional ADM constraints and the additional simplicity constraints. Similar to the Gaussian constraints, the simplicity constraints, taking the form S^ab_IJKL:=π^a[IJπ^|b|KL]≈0, generate extra gauge symmetries in the SO(D+1) Yang-Mills phase space. It has been shown that the connection phase space correctly reduces to the familiar ADM phase space by carrying out the symplectic reductions with respect to the Gaussian and simplicity constraints. Similar to the case of SU(2) LQG, the loop quantization of the SO(D+1) Yang-Mills theory leads to the spin-network states of SO(D+1) holonomies on graphs, which carry the quanta of the flux operators representing the fluxes of π^bKL over (D-1)-dimensional faces. The Hilbert space composed of the spin-network states indicates the holonomy-flux phase space associated to each graph, with the Poisson algebras among holonomies and fluxes in the holonomy-flux phase space being isomorphic to the quantum algebras among them in the quantum Hilbert space. To look for the all-dimensional Regge ADM data encoded in the SO(D+1) spin-network states, it is necessary to find the degrees of freedom of discrete geometries encoded in the SO(D+1) holonomy-flux variables, by considering a gauge reduction procedure with respect to both the SO(D+1) Gaussian constraints and the simplicity constraints in the holonomy-flux phase space.
A series of studies in this direction was first carried out in the SU(2) formulation of (1+3)-dimensional LQG <cit.><cit.><cit.><cit.><cit.>, and these constructions were then generalized to the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. Specifically, since the simplicity constraints become anomalous at the vertices of the graphs, the reductions with respect to the Gaussian and simplicity constraints are guided by the twisted geometry parametrization of the edge simplicity constraint surface in the holonomy-flux phase space of SO(D+1) LQG.
Especially, the twisted geometry interpretation of the holonomy-flux variables suggests that the Gaussian and edge simplicity constraints should be imposed strongly, since they generate true gauge transformations, while the vertex simplicity constraints should be imposed weakly. The reduced space parametrized by the twisted geometric parameters gives a discrete Regge geometry picture, which can be regarded as the discrete version of the ADM phase space of GR.
An important application of the twisted geometry parametrization is the construction of the twisted geometry coherent state. This kind of coherent state was first established in SU(2) LQG <cit.>, and then generalized to SO(D+1) LQG with the restriction to simple representations <cit.>. Specifically, based on the twisted geometry parameters, the simple twisted geometry coherent state in the strong solution space of the quantum edge simplicity constraints is established by selecting the dominant terms (which are referred to as Perelomov type coherent states <cit.>) with simple representations of SO(D+1) in the decomposition of the heat-kernel coherent state of SO(D+1) <cit.>. It has been shown that the simple twisted geometry coherent states take Gaussian superposition formulations. Especially, the simple twisted geometry coherent states provide an over-complete basis of the strong solution space of the quantum edge simplicity constraints, and their wave functions have well-behaved peakedness and Ehrenfest properties in the reduced phase space with respect to the edge simplicity constraints <cit.>.
In fact, the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space discussed in Ref.<cit.> concerns the issues on the constraint surface of the edge simplicity constraint, and the resulting twisted geometry variables only parametrize the reduced phase space with respect to the edge simplicity constraint. Correspondingly, the simple twisted geometry coherent states constructed based on the twisted geometry parametrization of the reduced phase space are gauge (with respect to the edge simplicity constraint) invariant coherent states <cit.>. In other words, the wave functions of these gauge (with respect to the edge simplicity constraint) invariant coherent states are constant along the corresponding gauge orbits, so that each of them peaks at a gauge orbit instead of a point in the phase space <cit.>.
As we have mentioned above, the edge simplicity constraint should be imposed strongly following the twisted geometry interpretation of the holonomy-flux variables. Thus, it seems that all of the studies of all dimensional SO(D+1) LQG could be completed in the strong solution space of the quantum edge simplicity constraint, which is the gauge (with respect to the simplicity constraint) invariant subspace of the full Hilbert space of all dimensional SO(D+1) LQG. Nevertheless, several discussions have shown that it is necessary to consider some kind of gauge-fixed solution space with respect to the simplicity constraint, to deal with some of the issues appearing in all dimensional SO(D+1) LQG.
Let us introduce two issues to explain this necessity. First, the regularization of the scalar constraint can be carried out by following the standard loop regularization method <cit.><cit.><cit.>. The resulting regularized scalar constraint contains the Euclidean term, which is given by the antisymmetric contraction of the holonomies along some closed loops and the fluxes at the beginning and target points of these loops. Classically, this Euclidean term captures the information of both the intrinsic and extrinsic curvature along these closed loops. However, it has been shown that the Euclidean term in the quantized scalar constraint cannot capture the information of the intrinsic and extrinsic curvature in the strong solution space of the quantum edge simplicity constraint, since the strong imposition of the quantum edge simplicity constraint leads to a gauge averaging which eliminates some critical ingredients of the holonomies <cit.>. Thus, the standard loop regularization method conflicts with the strong imposition of the edge simplicity constraint. To deal with this issue, one may consider gauge-fixed solutions of the edge simplicity constraint to avoid the gauge averaging, so that the scalar constraint operator given by the standard loop regularization method captures the information of the intrinsic and extrinsic curvature correctly. This is the first issue which points out the necessity to consider a gauge-fixed solution space with respect to the simplicity constraint. The second issue which points out this necessity is the Fermion coupling problem in all dimensional LQG <cit.>. Specifically, the strong imposition of the quantum edge simplicity constraint restricts the holonomies in all dimensional LQG to be represented only in the simple representation spaces of SO(D+1), which implies that the holonomies cannot transform the Fermions, which take values in the spinor representation space of SO(D+1), for D≥4. An alternative scheme to deal with this issue is to consider gauge-fixed solutions of the quantum edge simplicity constraint based on coherent states, which ensures that the holonomies can take matrices in the spinor representation space of SO(D+1), so that they are able to describe the transformation of Fermions along edges.
Usually, in the classical theory, the gauge fixing can be realized by restricting the physical considerations to a section of the gauge orbits on the constraint surface of the edge simplicity constraint. However, this is not valid in the quantum theory, since the wave functions of the quantum states which sharply converge to the constraint surface of the edge simplicity constraint are always dispersed along the gauge orbits.
To overcome this problem, it is reasonable to consider a coherent state whose wave function peaks at a point in the phase space, so that one can have a state whose wave function converges to both the constraint surface of the edge simplicity constraint and a section of the gauge orbits, with this convergence being controlled by the width of the wave function of the coherent state.
Such a coherent state, whose wave function peaks at a point in the SO(D+1) holonomy-flux phase space, could be constructed by following a procedure similar to the construction of the simple twisted geometry coherent state in the strong solution space of the quantum edge simplicity constraint <cit.>. More specifically, one needs to consider a more general twisted geometry parametrization, which is able to coordinatize the (almost) whole SO(D+1) holonomy-flux phase space instead of the reduced phase space. Then, based on this more general twisted geometry parametrization, one could decompose the heat-kernel coherent state of SO(D+1) and select some dominant terms to formulate the twisted geometry coherent state involving the non-simple representations of SO(D+1), which will be referred to as the non-simple twisted geometry coherent state in all dimensional LQG.
As the first step to establish the non-simple twisted geometry coherent state in all dimensional LQG, it is necessary to extend the twisted geometry parametrization to the full SO(D+1) holonomy-flux phase space. In this article, we will establish the twisted geometry parametrization of a dense subspace of the full SO(D+1) holonomy-flux phase space, and extend this parametrization as a symplectic-morphism. Besides, we will show that the twisted geometry parametrization of the
edge simplicity constraint surface introduced in our previous work <cit.> can be regarded as a special case of the construction in this article.
This article is organized as follows. In our brief review of the classical connection formulation of all dimensional GR in Section <ref>, we will also introduce the SO(D+1) holonomy-flux phase space and the discretized formulation of the kinematical constraints. In Section <ref> and Section <ref> we will introduce the twisted geometry parametrization for a dense subspace of the SO(D+1) phase space, and analyze the Poisson structures among the new geometric parametrization variables. Then, in Section <ref> we will discuss the relation between the twisted geometry parametrizations of the edge simplicity constraint surface and the dense subspace of the SO(D+1) holonomy-flux phase space. Finally, we will conclude with an outlook on possible next steps for future research.
§ PHASE SPACE OF ALL DIMENSIONAL LOOP QUANTUM GRAVITY
§.§ Connection phase space
The classical connection formulation of GR with arbitrary spacetime dimensionality (1+D) was first developed by Bodendorfer, Thiemann and Thurn in Ref.<cit.>. This continuum connection phase space is coordinatized by an so(D+1) valued 1-form field A_aIJ and a vector field π^bKL on the D-dimensional spatial manifold Σ, with the non-trivial Poisson brackets between them being given by
{A_aIJ(x), π^bKL(y)}=2κβδ_a^bδ_[I^Kδ_J]^Lδ^(D)(x-y),
where β is the Barbero-Immirzi parameter and κ is the gravitational constant. It is known that this connection phase space correctly reduces to the familiar ADM phase space after the standard symplectic reduction procedure with respect to the first-class constraint system composed by the Gauss
constraints 𝒢^IJ≈0 and simplicity constraints S^ab_IJKL:=π^a[IJπ^|b|KL]≈0. Specifically, the simplicity constraint can be solved as π^aIJ=2√(q)n^[Ie^|a|J], where e^a_I is a dual D-bein field, n^I satisfying n^In_I=1 is determined by e^a_I with n^Ie_aI=0, and q is the determinant of the spatial metric q_ab which is determined by π^aIJ with q^ab=e^aIe^b_I on the simplicity constraint surface. One can split A_aIJ as
A_aIJ≡Γ_aIJ(π)+β K_aIJ
where Γ_aIJ(π) is a functional of π^aIJ and it satisfies Γ_aIJ(π)=Γ_aIJ(e) on the simplicity constraint surface, with Γ_aIJ(e) being the unique torsionless spin connection compatible with the D-bein e_aI. Then, the densitized extrinsic curvature can be given by K̃_a^ b=K_aIJπ^bIJ on the constraint surface of both Gaussian and simplicity constraint surface.
It is easy to check that the Gaussian constraint generate the standard SO(D+1) gauge transformation of the connection field and its conjugate momentum. Now, let us consider the simplicity constraints from the perspectives of the corresponding gauge transformations. First, the solutions π^aIJ=2√(q)n^[Ie^|a|J] to the simplicity constraint introduced above defines the constraint surface of the simplicity constraints. Then, one can verify that the infinitesimal gauge transformations induced by simplicity constraints are given by <cit.>
δ K_c^PQ={∫_Σd^Dxf_ab^IJKLπ^a_[IJπ^b_KL](x), K_c^PQ(y)}=4κ f_cb^[PQKL]π^b_KL(y).
Notice that, on the simplicity constraint surface we have π^aIJ=2√(q)n^[Ie^|a|J] so that
δ K_c^IJn_I=0. Further, by introducing the decomposition
K_aIJ≡ 2n_[IK_|a|J]+K̅_aIJ,
where K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL with η̅^I_J=δ^I_J-n^I n_J and K̅_aIJn^I=0, we immediately find that K̅_aIJ is the pure gauge component, while the components 2n_[IK_|a|J] are gauge invariant with respect to the transformations given in (<ref>). From the expressions of the ADM variables qq^ab=1/2π^aIJπ^b_IJ and K̃_a^ b=K_aIJπ^bIJ, it is easy to see that these variables are indeed gauge invariant with respect to the simplicity constraints on the constraint surface. Thus, through the symplectic gauge reduction procedure, the simplicity constraints eliminate two parts of the degrees of freedom: restricting π̅^aIJ:=π^aIJ-2√(q)n^[Ie^|a|J]=0 by the constraint equation and removing the pure-gauge components K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL. Following these results, the geometric variables constructed from the ADM variables (q_ab,K̃^cd) can be extended as functionals on the connection phase space, with their original geometric interpretation being retained on the constraint surface.
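As a short consistency check (an elementary remark added here for orientation, using only the definitions above), one can see directly why π^aIJ=2√(q)n^[Ie^|a|J] solves the simplicity constraints: substituting this form gives
S^ab_IJKL=π^a[IJπ^|b|KL]=4q n^[Ie^|a|Jn^Ke^|b|L]=0,
since the normal n^I appears twice under the total antisymmetrization over the internal indices [IJKL], and the antisymmetrization of a repeated vector vanishes identically.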
§.§ Holonomy-flux phase space
The quantization of the connection formulation of (1+D)-dimensional GR can be carried out by following the standard loop quantization procedures, which leads to a Hilbert space ℋ given by the completion of the space of cylindrical functions on the quantum configuration space <cit.>. This Hilbert space ℋ can be regarded as a union of the spaces ℋ_γ=L^2((SO(D+1))^|E(γ)|,dμ_Haar^|E(γ)|) on all possible graphs γ, where E(γ) denotes the set of edges of γ and dμ_Haar^|E(γ)| denotes the product of the Haar measures on SO(D+1). The Gaussian constraint and the simplicity constraint can be promoted to constraint operators in this Hilbert space. However, it turns out that the quantum brackets among these constraints give an open and anomalous quantum algebra, which is distinct from the corresponding first-class constraint algebra in the connection phase space <cit.>. Hence, it is necessary to propose a proper treatment of these quantum constraints, to reduce the gauge degrees of freedom and retain the physical degrees of freedom correctly. A reasonable method to reach this goal is to construct the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space. More specifically, since the classical constraint algebras in the holonomy-flux phase space are isomorphic to the quantum constraint algebras in the quantum theory, one can treat the Gaussian and simplicity constraints in the holonomy-flux phase space and in the quantum theory on the same footing. Then, the degrees of freedom reduced in the procedure of imposing the quantum constraint operators can be reflected in the procedure of the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space. Through these gauge reductions, one can clarify the gauge degrees of freedom and verify whether the treatment of these constraints retains the correct physical degrees of freedom. Now, let us first give a brief review of the holonomy-flux phase space.
The quantum geometry of loop quantum gravity is described based on the spatially smeared variables, namely the D-bein fluxes over (D-1)-dimensional faces and the connection holonomies over paths, for the conjugate pairs of elementary variables. We will focus on the holonomies and fluxes based on one specific graph in the following. The edges of the given graph naturally provide the set of paths for a fixed set of holonomies, and the cell decomposition dual to the graph provides the set of (D-1)-faces specifying a fixed set of fluxes. In this setting, the holonomy over one of the edges is naturally conjugate to the flux over the (D-1)-face traversed by the edge, with this pair satisfying the smeared version of the Poisson algebra (<ref>) and thus forming a new phase space. More precisely, given the graph γ embedded in the spatial manifold, we consider a new algebra given by the holonomy-flux variables (h_e, X_e)∈ SO(D+1)× so(D+1) over all edges e of γ. These pairs of variables represent the discretized version of the connection A_aIJ and its conjugate momentum π^bKL. Specifically, the holonomy of A_aIJ along an edge e∈γ is defined by
h_e[A]:=𝒫exp(∫_eA)=1+∑_n=1^∞∫_0^1dt_n∫_0^t_ndt_n-1...∫_0^t_2 dt_1A(t_1)...A(t_n),
where A(t):=1/2ė^aA_aIJτ^IJ, ė^a is the tangent vector field of e, τ^IJ is a basis of so(D+1) given by (τ^IJ)^def._KL=2δ^[I_Kδ^J]_L in definition representation space of SO(D+1), and 𝒫 denotes the path-ordered product.
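To make the path-ordered product concrete, the following minimal numerical sketch (in Python; it is an illustration added here rather than part of the original construction, and the step number and the sample connection profile are arbitrary assumptions) approximates h_e by an ordered product of matrix exponentials of so(D+1)-valued increments along the edge parameter, with factors at earlier parameter values standing to the left as in the expansion above.

import numpy as np
from scipy.linalg import expm

def so_generator(D, I, J):
    # Basis element tau^{IJ} of so(D+1) in the defining representation,
    # (tau^{IJ})_{KL} = 2 delta^{[I}_K delta^{J]}_L.
    tau = np.zeros((D + 1, D + 1))
    tau[I, J], tau[J, I] = 1.0, -1.0
    return tau

def holonomy(A_of_t, D, n_steps=200):
    # Approximate h_e = P exp(int_e A) by an ordered product of exponentials
    # over small steps of the edge parameter t in [0, 1]; A_of_t(t) must return
    # an antisymmetric (D+1)x(D+1) matrix, i.e. A(t) already contracted with
    # the tangent vector of the edge.
    h = np.eye(D + 1)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = (k + 0.5) * dt                 # midpoint of the k-th step
        h = h @ expm(dt * A_of_t(t))       # earlier factors act from the left
    return h

# Illustrative connection profile along one edge (a pure assumption for the demo):
D = 3
A_demo = lambda t: 0.3 * np.sin(np.pi * t) * so_generator(D, 0, 1)
h_e = holonomy(A_demo, D)
print(np.allclose(h_e @ h_e.T, np.eye(D + 1)))   # h_e lies in SO(D+1)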
The flux X^IJ_e of π^aIJ through the (D-1)-dimensional face dual to edge e is defined by
X^IJ_e:=-1/4β a^D-1tr(τ^IJ∫_e^⋆ϵ_aa_1...a_D-1h(ρ^s_e(σ)) π^aKL(σ)τ_KLh(ρ^s_e(σ)^-1)),
where a is an arbitrary but fixed constant with the dimension of length, e^⋆ is the (D-1)-face traversed by e in the dual lattice of γ, ρ_e^s(σ): [0,1]→Σ is a path connecting the source point s_e∈ e to σ∈ e^⋆ such that ρ_e^s(σ): [0,1/2]→ e and ρ_e^s(σ): [1/2, 1]→ e^⋆. The Poisson algebra between the holonomy-flux variables can be induced from the Poisson bracket (<ref>) between the connection variables, which reads
{h_e, h_e'}=0, {h_e, X^IJ_e'}=δ_e,e'κ/a^D-1d/dλ(e^λτ^IJh_e)|_λ=0,
{X^IJ_e, X^KL_e'}=δ_e,e'κ/2a^D-1(-δ^IKX_e^JL-δ^JL X^IK_e+δ^ILX_e^JK+δ^JKX_e^ IL).
Notice that, since h_e∈ SO(D+1), X_e^IJ∈ so(D+1) and SO(D+1)× so(D+1)≅ T^∗ SO(D+1), the new discrete phase space, called the holonomy-flux phase space of SO(D+1) loop quantum gravity on a fixed graph, is a direct product of SO(D+1) cotangent bundles. Finally, the complete phase space of the theory is given by taking the union over the holonomy-flux phase spaces of all possible graphs. Similar to the SU(2) case, the phase space coordinatized by the holonomy-flux variables (h_e, X_e) of SO(D+1) loop quantum gravity can be regarded as the discretized version of the continuum phase space.
The (discretized) Gaussian and simplicity constraints in the holonomy-flux phase space are constructed in agreement with the corresponding quantum constraints. With X_-e=-h_e^-1X_eh_e≡X̃_e, the (discretized) Gaussian constraints G_v^IJ≈0 for each vertex v∈γ of the graph take the form <cit.>
G_v^IJ=∑_e|s(e)=vX_e^IJ+∑_e|t(e)=vX̃_e^IJ≈0,
where s(e) and t(e) denote the source and target points of the oriented edge e respectively. The (discretized) simplicity constraints consist of the edge simplicity constraints S^IJKL_e≈0 and vertex simplicity constraints S^IJKL_v,e,e'≈0, which take the forms <cit.>
S_e^IJKL≡ X^[IJ_e X^KL]_e≈0, ∀ e∈γ, S_v,e,e'^IJKL≡ X^[IJ_e X^KL]_e'≈0, ∀ e,e'∈γ, s(e)=s(e')=v.
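For intuition, the edge simplicity constraint selects the decomposable ("simple") bivectors of the form X^IJ = u^I v^J - u^J v^I. The toy numerical check below (an illustration for a single 4-dimensional flux only, not part of the text) verifies that such a bivector satisfies X^[IJ X^KL] = 0, while a generic sum of two decomposable bivectors does not.

import numpy as np
from itertools import permutations

def perm_sign(p):
    # Parity of a permutation of (0, 1, 2, 3).
    s, p = 1, list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def simplicity(X):
    # S^{IJKL} = X^{[IJ} X^{KL]}: total antisymmetrisation over all four indices.
    T = np.einsum('ij,kl->ijkl', X, X)
    return sum(perm_sign(p) * np.transpose(T, p) for p in permutations(range(4))) / 24.0

rng = np.random.default_rng(0)
u, v, w, z = rng.normal(size=(4, 4))
simple_flux = np.outer(u, v) - np.outer(v, u)               # X = u wedge v
generic_flux = simple_flux + np.outer(w, z) - np.outer(z, w)
print(np.allclose(simplicity(simple_flux), 0.0))            # True: edge simplicity holds
print(np.allclose(simplicity(generic_flux), 0.0))           # generically False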
It has been shown that, since the commutative Poisson algebra of the conjugate momentum variables {π^bKL} becomes a non-commutative Poisson algebra of the flux variables {X^KL_e} after the smearing, the Poisson algebra among the discrete versions of the simplicity constraints is non-closed and thus anomalous, which makes the symplectic reduction in the holonomy-flux phase space difficult to implement <cit.>. To deal with this issue, the twisted geometry parametrization of the holonomy-flux phase space is constructed, which ensures that the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space can be carried out with the guidance of the twisted geometric interpretation of the holonomy-flux variables <cit.>.
The twisted geometry parametrization for the SU(2) holonomy-flux variables of (1+3)-dimensional LQG was first introduced in a series of studies following the original works of Freidel and Speziale <cit.><cit.>. The space of the twisted geometry for SU(2) LQG can undergo a symplectic reduction with respect to the discretized Gauss constraints, giving rise to a reduced phase space containing the discretized ADM data of a polyhedral Regge hypersurface. Following a similar procedure, the twisted geometry parametrization of all-dimensional SO(D+1) LQG has been constructed on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. It has been shown that the gauge reductions with respect to the simplicity and Gaussian constraints in SO(D+1) LQG can be carried out properly in the twisted geometry parametrization space, which leads to a clear correspondence between the original holonomy-flux variables (h_e, X_e) on the edge simplicity constraint surface and the D-hypersurface discrete geometry data of the Regge geometry formulation. Nevertheless, it is not enough to construct the twisted geometric parametrization only on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space.
As we have mentioned in the introduction, several explorations in the quantum theory of SO(D+1) LQG require us to consider quantum states whose wave functions are dispersed beyond the edge simplicity constraint surface. Hence, it is necessary to extend the twisted geometry parametrization to interpret the phase space points which are not located on the edge simplicity constraint surface.
§ GEOMETRIC PARAMETRIZATION OF SO(D+1) HOLONOMY-FLUX PHASE SPACE
To make our statements and notation clearer, we will first generalize the twisted geometry parametrization to a dense subspace of T^∗ SO(D+1) in this section. Then, the relation between the twisted geometry parametrizations constructed in this article and in previous works <cit.> will be discussed in section 5.
§.§ Beyond the edge-simplicity constraint surface
Recall the SO(D+1) holonomy-flux phase space ×_e∈γT^∗ SO(D+1)_e associated to the given graph γ. Let us focus on the holonomy-flux phase space T^∗ SO(D+1) associated to a single edge without loss of generality. Notice that the semi-simple elements in so(D+1) compose a dense subset so(D+1)_ss⊂ so(D+1) and we have T^∗ SO(D+1)≅ SO(D+1)× so(D+1). Then, we can define a dense subspace of T^∗ SO(D+1) as
T_ss^∗ SO(D+1):={(h, X)| h∈ SO(D+1), X is a semi-simple element of so(D+1)}.
To give the explicit formulation of the twisted geometric parametrization of T_ss^∗ SO(D+1), let us first introduce some new notation. Considering the orthonormal basis {δ_1^I,δ_2^I,...,δ_D+1^I} of ℝ^D+1, one has the basis {τ_IJ} of so(D+1) given by τ_IJ=(τ_IJ)^KL_def.:=2δ_I^[Kδ_J^L] in the defining representation of SO(D+1), where (τ_IJ)^KL_def. is the generator of the infinitesimal rotation in the 2-dimensional vector space spanned by the two vectors δ_I^K and δ_J^L.
Then, let us introduce the maximum commutative sub-Lie algebra of so(D+1) spanned by {τ_1, τ_2,...,τ_m} with m=[D+1/2], where we define
τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D,D+1
for D+1 being even, and
τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D-1,D
for D+1 being odd.
This maximal commutative sub-Lie algebra of so(D+1) generates the maximal commutative subgroup 𝕋^m:=×_=1^m SO(2)_, m=[D+1/2]. Then, SO(D+1) can be regarded as a fiber bundle with fibers 𝕋^m over the base manifold ℚ_m:=SO(D+1)/𝕋^m, which can also be given by ℚ_m={𝕍:=(V_1,...,V_m)|V_=gτ_ g^-1, ∈{1,...,m}, g∈ SO(D+1)}. One can choose a Hopf section n: ℚ_m↦ SO(D+1), 𝕍↦ n(𝕍)
and another Hopf section ñ: ℚ̃_m↦ SO(D+1), 𝕍̃↦ñ(𝕍̃) for the copy ℚ̃_m of ℚ_m, which satisfy
V_1=nτ_1n^-1,...,V_m=nτ_mn^-1,
and
Ṽ_1=-ñτ_1ñ^-1,...,Ṽ_m=-ñτ_mñ^-1
with ℚ_m∋𝕍:=(V_1,...,V_m) and ℚ̃_m∋𝕍̃:=(Ṽ_1,...,Ṽ_m).
Observe that the choice for the Hopf sections is clearly non-unique, and from now on our parametrization will be given under one fixed choice of {n_e,ñ_e} for each edge e.
Then, in the subspace T_ss^∗ SO(D+1)_e associated to each edge e, the generalized twisted geometry parametrization can be given by the map
(𝕍_e,𝕍̃_e,η⃗_e,ξ⃗_e)↦(h_e, X_e)∈ T_ss^∗ SO(D+1)_e: X_e=1/2n_e(η_e^1 τ_1+...+η_e^m τ_m)n_e^-1
h_e=n_ee^ξ_e^1τ_1...e^ξ_e^mτ_mñ_e^-1,
where we defined η⃗_e:=(η_e^1,...,η_e^m), η_e^1,η_e^2,...,η_e^m∈ℝ with η_e^1≥η_e^2≥...≥η_e^m≥0 and ξ⃗_e:=(ξ_e^1,...,ξ_e^m) with ξ_e^1,...,ξ_e^m∈(-π,π]. By defining η_e^1=:χ_e^1+...+χ_e^m, η_e^2 =:χ_e^2+...+χ_e^m, ..., η_e^m-1=:χ_e^m-1+χ_e^m, η_e^m=:χ_e^m with χ_e^1,...,χ_e^m≥ 0, one can replace η⃗_e by χ⃗_e:=(χ_e^1,...,χ_e^m) in the parametrization (<ref>).
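To make the parametrization above concrete, the following toy sketch (our own illustration for D=3, m=2, with random SO(4) matrices standing in for the section values n_e and ñ_e) reconstructs a pair (h_e, X_e) from given twisted parameters and checks that h_e is orthogonal and X_e antisymmetric.

import numpy as np
from scipy.linalg import expm

def tau(I, J, dim=4):
    m = np.zeros((dim, dim))
    m[I, J], m[J, I] = 1.0, -1.0
    return m

def random_rotation(rng, dim=4):
    a = rng.normal(size=(dim, dim))
    return expm(a - a.T)                           # exponential of an antisymmetric matrix

rng = np.random.default_rng(1)
tau1, tau2 = tau(0, 1), tau(2, 3)                  # the commuting pair tau_1, tau_2 for SO(4)
n, n_tilde = random_rotation(rng), random_rotation(rng)   # stand-ins for the Hopf sections
eta1, eta2 = 3.0, 1.0                              # eta_e^1 >= eta_e^2 >= 0
xi1, xi2 = 0.7, -2.1                               # xi_e^1, xi_e^2 in (-pi, pi]

X = 0.5 * n @ (eta1 * tau1 + eta2 * tau2) @ n.T    # n^{-1} = n^T for n in SO(4)
h = n @ expm(xi1 * tau1) @ expm(xi2 * tau2) @ n_tilde.T
assert np.allclose(X, -X.T)                        # X_e lies in so(4)
assert np.allclose(h.T @ h, np.eye(4))             # h_e lies in SO(4)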
The twisted geometry parametrization (<ref>) of T_ss^∗ SO(D+1)_e associated to a single edge can be directly extended to the whole graph γ.
Correspondingly, one can introduce the Levi-Civita holonomies {h^Γ_e|e∈γ} determined by the fluxes {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}, which takes the form
h^Γ_e≡ n_ee^ζ_e^1τ_1...e^ζ_e^mτ_mñ_e^-1.
Note that the variables (ζ_e^1,...,ζ_e^m) are well-defined via the given h^Γ_e and the chosen Hopf sections; thus (ζ_e^1,...,ζ_e^m) are already fixed by the given {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}. Then, one can factor out h^Γ_e from h_e through the expressions
h_e= (e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) h^Γ_e =h^Γ_e(e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1)
in the perspectives of the source point and target point of e respectively.
The above decomposition with twisted geometry parameters can be adapted to the splitting of the Ashtekar connection as A_a=Γ_a+β K_a on a given graph. Specifically, one can consider the integral of A_a=Γ_a+β K_a∈ so(D+1) along an infinitesimal edge direction ℓ^a_e, which leads to A_e≡ A_aℓ^a_e, Γ_e≡Γ_aℓ^a_e and K_e≡ K_aℓ^a_e. Clearly, we can establish the correspondence
h_e= e^A_e and h^Γ_e= e^Γ_e.
The remaining factor should account for the K_e. According to the above discussion, the value of K_e may thus be expressed in the perspectives of the source point and target point of e, respectively as
(e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) =e^β K_e
or
(e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1)= e^β K_e .
Further, we have
K_e =1/βn_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)n_e^-1
or
K_e =1/βñ_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)ñ_e^-1
when it is expressed in the perspectives of the source point and target point of e respectively.
The set of variables ((η^e_1,...,η^e_m), (ξ^e_1,...,ξ^e_m),𝕍_e, 𝕍̃_e) gives the generalization of the twisted geometry parametrization for the SO(D+1) holonomy-flux phase space. Compared with the twisted geometry parametrization for the edge-simplicity constraint surface in the SO(D+1) holonomy-flux phase space introduced in our companion paper <cit.>, this generalized parametrization scheme covers a dense subset of the SO(D+1) holonomy-flux phase space, which extends far beyond the edge-simplicity constraint surface. We will now carry out an analysis of the symplectic structure of the SO(D+1) holonomy-flux phase space based on the variables ((η^e_1,...,η^e_m), (ξ^e_1,...,ξ^e_m),𝕍_e, 𝕍̃_e), before coming back to the relation between the generalized parametrization scheme of this paper and the one restricted to the edge simplicity constraint surface given in our companion paper <cit.>.
§ SYMPLECTIC ANALYSIS OF SO(D+1) HOLONOMY-FLUX PHASE SPACE
Notice that the discussions in this section only depend on each single edge of the graph. To simplify our notations, we will focus on the analysis on a single edge and omit the label e without loss of generality.
§.§ Symplectic structure of SO(D+1) holonomy-flux phase space
The symplectic structure of the SO(D+1) holonomy-flux phase space has been discussed in our companion paper <cit.>; let us give a brief review of the main notations as follows.
Recall that the SO(D+1) holonomy-flux phase space associated with each edge of a given graph is given by the group cotangent space T^*SO(D+1); as a phase space, it enjoys the natural symplectic structure of T^*SO(D+1). To give the explicit formulation of this symplectic structure, let us introduce the function f(h) on SO(D+1)∋ h, and the element p_X∈ so(D+1)^∗ which is a linear function of Y∈ so(D+1) defined by
p_X(Y)≡ X^KLY_KL,
where X=X^KL∈ so(D+1).
A right-invariant vector field X̂ associated to the Lie algebra element X∈ so(D+1), acts on a function f(h) via the right derivative ∇_X^R as
∇_X^Rf(h)≡d/dtf(e^-tXh)|_t=0;
under the adjoint transformation X↦ -hXh^-1, we obtain the corresponding left derivative
∇_X^Lf(h)≡d/dtf(he^tX)|_t=0=-∇^R_hXh^-1f(h).
One can straightforwardly show that the map from the right invariant vector fields X̂ to the corresponding elements X∈ so(D+1) is given by the algebra-valued, right-invariant 1-form dhh^-1, which reads
i_X̂(dhh^-1)=(ℒ_X̂h)h^-1=-X,
where i denotes the interior product, and ℒ_Ŷ≡ i_Ŷd+di_Ŷ denotes the Lie derivative.
Now, the natural symplectic potential for T^∗ SO(D+1) can be expressed as
Θ≡ X^IJ(dhh^-1)_IJ≡Tr(Xdhh^-1).
The symplectic 2-form then follows as
Ω≡ -dΘ=- dTr(Xdhh^-1)=1/2Tr(dX̃∧ h^-1dh-dX∧ dhh^-1)
where we have introduced X̃≡-h^-1Xh. From the symplectic 2-form, the Poisson brackets among the phase space functions of interest, f≡ f(h) and p_Y≡ p_Y(X)=Y^IJX_IJ, are given by <cit.>
{p_Y,p_Z}=p_[Y,Z], {p_Y,f(h)}=∇^R_Yf(h), {f(h),f'(h)}=0.
One can see from the brackets (<ref>) that the Poisson action of p_Y(X) generates left derivatives. Similarly, it is easy to check that the action of p̃_Y(X)≡ Y^IJX̃_IJ with X̃=-h^-1Xh generates the right derivative {p̃_Y,f(h)}=∇^L_Yf(h). Moreover, one can check that {p_Y,p̃_Z}=0. Finally, it is easy to verify that, by setting 2κ/a^D-1=1, the Poisson brackets (<ref>) given by the natural symplectic potential (<ref>) for T^∗ SO(D+1) are identical to those (<ref>) induced by the symplectic structure (<ref>) in the SO(D+1) connection phase space <cit.>. In the following part of this article, we will analyze the symplectic structure on T^∗ SO(D+1) based on the symplectic potential Θ without loss of generality.
§.§ Symplectomorphism between SO(D+1) holonomy-flux phase space and generalized twisted geometry parameter space
From now on, let us focus on the analysis of one single edge e of the given graph γ, and omit the label e from all of the notations.
Denote by B:=ℚ_m×ℚ̃_m × (×_=1^m ℝ^_+)×(× _=1^m S^1_) the collection of the generalized twisted geometric parameters (𝕍,𝕍̃,χ⃗,ξ⃗). It is easy to see that the map (<ref>) is not a one to one mapping. More explicitly, one can decompose B=B_0∪Ḃ with
Ḃ:= B|_η_m> 0
and
B_0:= B∖Ḃ.
Then, one can find that the map (<ref>) is a one to one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1), while it is a many to one mapping between
B_0 and its image B_0^∗⊂ T_ss^∗ SO(D+1). We will first focus on the symplectic structure on B in this subsection, and then go back to consider the many to one mapping between
B_0 and its image B_0^∗ in section <ref>.
The one to one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1) is also an isomorphism
Ḃ→Ḃ^∗⊂ T_ss^∗ SO(D+1).
Based on the isomorphism (<ref>), we may use the generalized twisted geometric parameters to express the induced symplectic structure of Ḃ^∗⊂ T_ss^∗ SO(D+1) inherited from the phase space T^*SO(D+1). First, the induced symplectic potential can be expressed as
Θ_Ḃ^∗ = Tr(Xdhh^-1)|_Ḃ^∗⊂ T_ss^∗ SO(D+1)⊂ T^∗ SO(D+1)
= 1/2∑_'=1^mη_'Tr(nτ_'n^-1 (dnn^-1+n(∑_dξ^τ_)n^-1-ne^∑_ξ^τ_ñ^-1dññ^-1ñe^-∑_ξ^τ_ n^-1))
= 1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1).
In the space B, one can extend the potential Θ_Ḃ=Θ_Ḃ^∗ in the limit η_m→0 and define
Θ_B≡1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1)
as the symplectic potential on B. This potential gives the symplectic form Ω_B as
Ω_B=-dΘ_B = 1/2∑_=1^mη_Tr(V_ dnn^-1∧ dnn^-1)-1/2∑_=1^mη_Tr(Ṽ_ dññ^-1∧ dññ^-1)
-∑_=1^mdη_∧ (dξ_+1/2Tr(V_ dnn^-1)-1/2Tr(Ṽ_ dññ^-1)).
It is clear that in the η_m=0 region the above (pre-)symplectic structure is degenerate, as expected due to the degeneracy of the parametrization itself in the η_m=0 region of T_ss^∗ SO(D+1).
We are interested in the Poisson brackets between these twisted-geometry variables with respect to the presymplectic form Ω_B. In order to give the explicit Poisson brackets, in the following section we will study the Hopf sections n(𝕍) and ñ(𝕍̃) from the perspective of their contributions to the Hamiltonian vector fields on B defined by Ω_B.
§.§ Geometric action on the Hopf section and its decomposition
§.§.§ Geometric action on the Hopf section
The Hopf map is defined as a special projection map π: SO(D+1)↦ℚ_m with ℚ_m:=SO(D+1)/𝕋^m, such that every element in ℚ_m comes from an orbit generated by the maximal subgroup 𝕋^m of SO(D+1) that fixes all of the elements in the set {τ_1,τ_2,...,τ_m}. In the defining representation of SO(D+1) the Hopf map reads
π: SO(D+1) → ℚ_m
g → 𝕍(g)=(gτ_1g^-1, gτ_2g^-1,...).
Note that 𝕍(g) is invariant under g↦ g^α_1,α_2,...,α_m=ge^α_1τ_1+α_2τ_2+...α_mτ_m, thus it is a function of D(D+1)/2-[D+1/2] variables only. This result shows that SO(D+1) can be seen as a bundle (which is referred to as Hopf bundle) over ℚ_m with the 𝕋^m
fibers. On this bundle we can introduce the Hopf sections, each as an inverse map to the above projection
n: ℚ_m → SO(D+1)
𝕍 ↦ n(𝕍),
such that π(n(𝕍))=𝕍. This section assigns a specific SO(D+1) element n to each member of the ℚ_m, and it is easy to see that any given section n is related to all other sections via n^α_1,α_2,...,α_m≡ ne^α_1τ_1+α_2τ_2+...α_mτ_m; hence the free angles {α_1,α_2,...,α_m} parametrize the set of all possible Hopf sections.
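The fiber-wise invariance of 𝕍(g) described above can also be checked with a short numerical sketch (ours, for D=3, m=2, and not part of the text): right multiplication of g by a torus element leaves the projection unchanged.

import numpy as np
from scipy.linalg import expm

def tau(I, J, dim=4):
    m = np.zeros((dim, dim))
    m[I, J], m[J, I] = 1.0, -1.0
    return m

def hopf_projection(g):
    # pi(g) = (g tau_1 g^{-1}, g tau_2 g^{-1}) for SO(4), i.e. m = 2
    return (g @ tau(0, 1) @ g.T, g @ tau(2, 3) @ g.T)

rng = np.random.default_rng(2)
a = rng.normal(size=(4, 4))
g = expm(a - a.T)                                   # a random SO(4) element
torus = expm(0.4 * tau(0, 1) + 1.3 * tau(2, 3))     # e^{alpha_1 tau_1 + alpha_2 tau_2}
V1, V2 = hopf_projection(g)
W1, W2 = hopf_projection(g @ torus)
assert np.allclose(V1, W1) and np.allclose(V2, W2)  # V(g) is invariant along the fiber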
Notice that each algebra element X∈ so(D+1) can be associated to a vector field X̂ on ℚ_m, which acts on a function f(𝕍) of ℚ_m as
ℒ_X̂f(𝕍):=d/dtf(e^-tX𝕍e^tX)|_t=0,
where g𝕍g^-1:=(gV_1g^-1, gV_2g^-1,...,gV_mg^-1) with g∈ SO(D+1). Similarly, a so(D+1)-valued function S=S(𝕍) on ℚ_m can also be associated with a vector field Ŝ on ℚ_m, which acts on the function f(𝕍) of ℚ_m as
ℒ_Ŝf(𝕍):=d/dtf(e^-tS𝕍e^tS)|_t=0.
Specifically, for the linear functions we have
ℒ_X̂𝕍:=(ℒ_X̂V_1,..., ℒ_X̂V_m)=(-[X,V_1],...,-[X,V_m])=:-[X,𝕍].
Especially, we are interested in the action of the vector fields on the Hopf section n. Notice that we have
ℒ_X̂V_(n)=(ℒ_X̂n)τ_ n^-1 +nτ_(ℒ_X̂n^-1)=[(ℒ_X̂n)n^-1, V_], ∀∈{1,...,m}.
Comparing this result with (<ref>), we deduce that
(ℒ_X̂n)n^-1=-X+∑_V_ F^_X(𝕍),
where F^_X(𝕍) are functions on ℚ_m, so that V_ F^_X(𝕍) commuting with the element 𝕍 for all .
Lemma.
The solution functions L_^IJ≡ L^: ℚ_m↦ so(D+1) of the equations
Tr(L^ dnn^-1)=0, L_^IJV_',IJ=δ_,',
appear in the Lie derivative of the Hopf section n(𝕍) as
L^_X:=L^IJ_ X_IJ=F^_X
and it satisfies the key coherence identity
ℒ_X̂L^_Y-ℒ_ŶL^_X=L^_[X,Y].
Finally, the general solution to this identity satisfying the conditions L_^IJV_',IJ=δ_,' is given by
L'^_X=L^_X+ℒ_X̂α^
where α^ is a function on ℚ_m.
Proof.
To prove Eq.(<ref>), let us take the interior product of an arbitrary vector field X̂ with the definition Tr(L^ dnn^-1)=0 and consider (ℒ_X̂n)n^-1=i_X̂(dnn^-1) given by the definition of Lie derivative, we have
0=i_X̂Tr(L^ dnn^-1)=Tr(L^(ℒ_X̂n)n^-1) =-Tr(L^ X)+∑_'=1^mF^'_XTr(L^ V_')=-L^_X+F^_X,
where we used Tr(L^ V_')=L_^IJV_',IJ=δ_,' and (<ref>). Thus, we proved F^_X=L^_X.
To prove Eq.(<ref>), we first consider that
ℒ_X̂(dnn^-1) = i_X̂(dnn^-1∧ dnn^-1)+d[(ℒ_X̂n)n^-1]
= [-X+∑_V_ L^_X,dnn^-1]+d(-X+∑_V_ L^_X)
= ∑_V_ dL^_X-[X,dnn^-1],
where we used the definition of Lie derivative in the first equality, Eq.(<ref>) in the second and dV_=[dnn^-1,V_] in the third. Then, the above equation leads to
0=ℒ_X̂Tr(L^ dnn^-1) =Tr((ℒ_X̂L^-[L^,X])dnn^-1) +dL^_X
by using the equalities Tr(L^ V_')=δ_,'.
Further, let us take the interior product of Eq.(<ref>) with Ŷ and we get
ℒ_ŶL^_X = Tr((ℒ_X̂L^-[L^,X] )(Y-∑_'V_' L^'_Y))
= ℒ_X̂L^_Y-L^_[X,Y]-∑_'L^'_Y(Tr((ℒ_X̂L^)V_') -Tr(L^[X,V_']))
= ℒ_X̂L^_Y-L^_[X,Y]-∑_'L^'_Yℒ_X̂(Tr(L^ V_') ),
where the last term vanishes, thus we obtain the coherence identity (<ref>).
To show Eq.(<ref>), let us suppose that we have another solution L'^ to the coherence identity and also the condition Tr(L'^ V_')=L'^IJ_ V_',IJ=δ_,'. Considering the 1-form ϕ^≡ -Tr(L'^ dnn^-1), one can see that its contraction with X̂
ϕ^_X≡ i_X̂ϕ^=-Tr(L'^ (ℒ_X̂n)n^-1)=L'^ _X-L^_X
is the difference between the two solutions L'^ _X and L^_X. Thus, ϕ^_X is also a solution to the coherence identity (<ref>). This result together with the
definition of the differential i_X̂i_Ŷdϕ^=ℒ_Ŷϕ^_X -ℒ_X̂ϕ^_Y+ϕ^_[X,Y] implies that dϕ^=0, which means that there exists a function α^ locally at least, such that ϕ^=dα^ and thus L'^_X=L^_X+ℒ_X̂α^. This proves the Eq. (<ref>).
□
Finally, let us recall that the freedom in choosing the Hopf section lies in the function parameters α^(𝕍) in the expression n'(𝕍)≡ n(𝕍)e^∑_α^(𝕍)τ_ for all possible choices of the sections. By applying Eq.(<ref>) to this n', we immediately get L'^_X= L^_X+ i_X̂dα^. Referring to (<ref>), we can conclude that the function L^ is exactly the function coefficient for the component of (dn)n^-1 in the V_ direction, which is determined by a choice of the Hopf section n.
§.§.§ Decomposition and sequence of the Hopf section
As we will see in the following part of this article, the Hopf section n and the geometric action on it are closely related to the symplectic structure and the symplectic reduction on B. To analyze the Hopf section more explicitly, let us consider its decomposition. Recalling the definition ℚ_m:=SO(D+1)/𝕋^m, one can decompose ℚ_m as
ℚ_m=𝔻_1×𝔻_2×...×𝔻_m
with
𝔻_1:=SO(D+1)/(SO(2)_τ_1× SO(D-1)_[τ_1]),
𝔻_2:=SO(D-1)_[τ_1]/(SO(2)_τ_2× SO(D-3)_[τ_2]),
...
𝔻_m:=SO(D+3-2m)_[τ_(m-1)]/SO(2)_τ_m,
where SO(2)_τ_ is the group generated by τ_ and SO(D+1-2)_[τ_] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_) and has the Cartan subalgebra spanned by (τ_(+1),...,τ_m). Here one should notice that both of SO(2)_τ_ and SO(D+1-2)_[τ_] preserve (τ_1,...,τ_). Then, the
Hopf section n can be decomposed as
n=n_1n_2...n_m.
This decomposition gives a sequence of the Hopf sections, which reads
n_1, n_1n_2, n_1n_2n_3, ..., n_1...n_m.
For a specific one n_1...n_ with ∈{1,...,m}, it gives
n_1...n_: 𝔻_1×...×𝔻_→ SO(D+1)
(V_1,...,V_)↦ n_1(V_1)n_2(V_1,V_2)...n_(V_1,...,V_),
where
V_1=n_1n_2...n_τ_1 n_^-1...n_2^-1n_1^-1=n_1τ_1 n_1^-1,
V_2=n_1n_2...n_τ_2 n_^-1...n_2^-1n_1^-1=n_1n_2τ_2n_2^-1n_1^-1,
...,
V_=n_1n_2...n_τ_ n_^-1...n_2^-1n_1^-1.
Here one should notice that the decomposition n=n_1...n_m is not unique. For instance, one can carry out the transformation
n_→ n_ g, n_+1→ g^-1n_+1
with g∈ SO(D+1) being an arbitrary element which preserves (τ_1,...,τ_); it is easy to verify that the transformation (<ref>) preserves the Hopf section n but changes n_ and n_+1 in the decomposition n=n_1...n_m. We can also establish the geometric action on the Hopf section n_1. Specifically, one can give
(ℒ_X̂n_1)n_1^-1=-X+V_1L̅^1_X (V_1)+∑_μV̅^μ_1 L̅^μ_X(V_1)
based on Eqs.(<ref>), (<ref>) and V_1=n_1τ_1n_1^-1, where V̅^μ_1=n_1τ̅^μ n_1^-1 with {τ̅^μ} being a basis of so(D-1)_τ_1, L̅^1_X (V_1)=L̅^1_IJ(V_1)X^IJ and L̅^μ_X(V_1)=L̅^μ_IJ(V_1) X^IJ are functions of V_1∈𝔻_1 <cit.>. It has been shown that
L̅^1_IJ(V_1) is the solution of the equations <cit.>
Tr(L̅^1 dn_1 n_1^-1)=0, Tr(L̅^1V_1)=1, and Tr(L̅^1 V̅^μ_1)=0, ∀μ.
By comparing Eq.(<ref>) and Eq.(<ref>), it is easy to see that L^1=L̅^1 is a solution of L^1 in Eq.(<ref>). This result will be a key ingredient in discussions in the next section.
Now, by applying the results of this section to the presymplectic form Ω_B, we will identify the Hamiltonian fields in B and compute the Poisson brackets.
§.§ Computation of Hamiltonian vector fields in pre-symplectic manifold B
Let us recall the pre-symplectic potential Θ_B≡1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1) induced from the SO(D+1) holonomy-flux phase space, which defines the pre-symplectic form Ω_B as
Ω_B=-dΘ_B = 1/2∑_=1^mη_Tr(V_ dnn^-1∧ dnn^-1)-1/2∑_=1^mη_Tr(Ṽ_ dññ^-1∧ dññ^-1)
-∑_=1^mdη_∧(dξ_+1/2Tr(V_ dnn^-1)-1/2Tr(Ṽ_ dññ^-1)).
The associated Poisson brackets can be calculated by considering the Hamiltonian vector fields on B. Let us denote the Hamiltonian vector field for the function f as ψ_f , where f∈{η_, ξ_, p_X≡1/2∑_η_ V^_X=1/2∑_η_ V^_IJX^IJ, p̃_X≡1/2∑_η_Ṽ^_X=1/2∑_η_Ṽ^_IJX^IJ}. Then, using i_ψ_fΩ_B=-df, the vector fields could be checked to be given by
ψ_p_X = X̂-∑_L^_X(𝕍)∂_ξ_, ψ_p̃_X = - X̂̃̂-∑_L^_X(𝕍̃)∂_ξ_, ψ_η_= -∂_ξ_.
Here X̂ are the vector fields generating the adjoint action on ℚ_m labelled by 𝕍, associated to the algebra elements X. Similarly, X̂̃̂ are the vector fields generating the adjoint action on ℚ_m labelled by 𝕍̃, associated to the algebra elements X.
Proof. The first equation of (<ref>) can be checked by considering
i_X̂Ω_B=-1/2∑_Tr(d(η_ V_)X)+∑_dη_ L^_X(𝕍).
Noticing that we have i_∂_ξ_Ω_B=dη_, the first equation of (<ref>) follows immediately. The computation for ψ_p̃_X can be carried out similarly, with an opposite sign due to the reversal of the orientation.
□
§.§ Reduction of the pre-symplectic manifold B
Recall that in the η_m=0 region Ω_B is degenerate, as expected due to the degeneracy of the parametrization (<ref>) in the η_m= 0 region.
Let us now address this degeneracy to get a true symplectic manifold. We can reduce the pre-symplectic manifold B with respect to the vector fields Ê in the kernel of Ω_B, i.e. to consider the quotient manifold B̅≡ B/Ker(Ω_B). The result would be a symplectic manifold with non-degenerate 2-form given by the quotient projection of Ω_B.
In obtaining the space B̅, we can introduce the equivalence classes under the equivalence relation p∼ p' whenever p'=e^Êp, with Ê∈Ker(Ω_B) and p, p'∈ B. The operation is thus determined by the vector fields in the kernel of Ω_B. Since it is obvious that the vector fields Ê∈Ker(Ω_B) appear in the region with η_m=0, we look for the vector fields preserving the region while having the interior products with Ω_B proportional to η_. Let us first consider the vector fields
Ê_X≡ψ_p_X-ψ_p̃_Y,
where X∈ so(D+1), Y=-h^-1Xh with h being a group element rotating V^ to Ṽ^=-h^-1V^ h. Indeed, using the fact that V^_X=Ṽ^_Y, the interior product of the field Ê_X with the symplectic 2-form is
i_Ê_XΩ_B=-1/2∑_d(η_ V^_X-η_Ṽ^_Y)-1/2∑_η_Tr(Ṽ^ dY) =-1/2∑_η_Tr([V^,X]dnn^-1).
Now, let us analyze the degeneracy of i_Ê_XΩ_B. Denote by K^ the subspace of B defined by η_=η_+1=...=η_m=0. Consider the so(D+1)-valued functions F(V_1,...,V_(-1)) on K^ which satisfy
n_(-1)^-1...n_2^-1n_1^-1F(V_1,...,V_(-1))n_1n_2...n_(-1)∈ so(D+3-2)_[τ_(-1)],
where n_1n_2...n_(-1) determined by (V_1,...,V_(-1)) is from the sequence of the Hopf sections (<ref>), SO(D+1-2)_[τ_] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_) and has the Cartan subalgebra spanned by (τ_(+1),...,τ_m).
Then, we can define the vector fields Ê^_F by
Ê^_F:=Ê_X|_X=F(V_1,...,V_(-1)),
and one can verify i_Ê^_FΩ_B=0 on K^ by using Eq.(<ref>). Thus, noticing the relation K^1⊂ K^2⊂...⊂ K^m, we have
Ker(Ω_B)≡{Ê^_F| ∈{1,...,m}}
on K^m.
Next, to find the equivalence class generated by the vector fields Ê^_F on K^, we note that the actions of the fields jointly rotate the vectors (V_,..,V_m) and (Ṽ_,...,Ṽ_m); that is, we have Ê^_F (V_')=-[F(V_1,...,V_(-1)),V_'], Ê^_F(Ṽ_')=-h^-1[F(V_1,...,V_(-1)),V_']h. Further, the actions preserve the group element h, since
Ê_X(h)=-Xh-hY=0
which ensures that Ê^_F(h)=0.
Therefore, given p and p' on K^, we have p'∼ p if and only if the two are related by a joint rotation in (V_,..,V_m) and (Ṽ_,...,Ṽ_m) and an h-preserving translation in (ξ_1,...,ξ_m). It is easy to see that the parametrization (<ref>) maps p and p'∼ p to the same image in T^∗_ssSO(D+1), as expected, since the equivalence class generated by the vector fields Ê^_F on K^ also describes the degeneracy of the parametrization (<ref>). After the quotient with respect to Ê^_F on each K^, we are left with a manifold K̅^ parametrized by only (η_1,...,η_(-1)), (V_1,...,V_m), (Ṽ_1,...,Ṽ_(-1)) and (ξ_1,...,ξ_m). Recalling that B≡ B|_η_m>0∪ K^m and K^1⊂ K^2⊂...⊂ K^m, let us define
K̇^m:=K^m/Ker(Ω_B)
and then the quotient space B̅≡ B|_η_m>0∪K̇^m. Finally, we conclude that the parametrization (<ref>) gives a one-to-one map between B̅ and its image in T^∗_ssSO(D+1), and it can be extended to a symplectomorphism with B̅ equipped with the symplectic structure Ω_B.
§.§ Poisson algebra among the twisted geometry parameters
Based on the Hamiltonian vector fields given by the pre-symplectic potential Θ_B, the Poisson brackets between the twisted geometry parameters can be given by
{ξ_,η_}=δ_,,
{p_X, p_Y}=p_[X,Y], {p̃_X, p̃_Y}=p̃_[X,Y]
{V^,η_}= {Ṽ^,η_}=0,
and
{V^,Ṽ^}=0.
Moreover, one can show that the Poisson brackets given by Θ_B between ξ_ and p_X, or the ones between ξ_ and p̃_X are non-trivial, and they are given by the function L^: ℚ_m→ so(D+1) in the form
{ξ_,p_X}= L^_X(𝕍), {ξ_,p̃_X}= L^_X(𝕍̃),
where L^_X≡Tr(L^ X) is the component of L^ along the algebra element X.
In fact, Eqs. (<ref>), taken as the defining equations of the functions L^, together with the Poisson brackets (<ref>), already determine L^ to be exactly the result of the brackets {ξ_,p_X} and {ξ_,p̃_X} given by the potential Θ_B corresponding to our choice of the Hopf sections. This can be shown by the fact that the function L^ defined by Eqs.(<ref>) is constrained by two conditions given by the above Poisson brackets (<ref>), and these two conditions are exactly the defining properties of L^ in the Lemma of section <ref>. Let us illustrate the details of this fact as follows. The first of the two conditions comes from the equation
p_IJL_^IJ=p_IJ{ξ_,p^IJ}=1/2{ξ_,p^IJp_IJ}= 1/4{ξ_,∑_η^2_} =1/2η_,
with p_IJ:=1/2∑_(η_ V^_IJ),
which gives the normalization condition L_^IJV^_IJ=δ_^ in Lemma in section <ref>. The second one of the two conditions just comes from the Jacobi identity
{ξ_,{p_X,p_Y}}+{p_X,{p_Y,ξ_}}+{p_Y,{ξ_,p_X}}=0,
from which we get
L^_[X,Y]-{p_X,L_Y^}+{p_Y,L_X^}=0,
By using
{p_X,L_Y^}=i_ψ_p_XdL_Y^=ℒ_X̂L_Y^,
one can write the identity (<ref>) as an identity involving
Lie derivatives and we get
ℒ_X̂L^_Y-ℒ_ŶL^_X=L^_[X,Y],
which is just the coherence identity in Lemma in section <ref>.
Now, it is easy to see that these two conditions make the Lemma of section <ref> applicable, and we can verify the result given at the beginning of this paragraph.
§ RELATION WITH THE TWISTED GEOMETRY PARAMETRIZATIONS ON EDGE SIMPLICITY CONSTRAINT SURFACE
The twisted geometry parametrization introduced in this article is constructed on the space ×_e∈γT^∗_ssSO(D+1)_e, and we have also introduced the twisted geometry parametrization of the edge simplicity constraint surface ×_e∈γT^∗_esSO(D+1)_e in our companion paper <cit.>. Thus, it is worth discussing the relation between these two types of parametrizations.
We also focus on the twisted geometry parametrizations of the space T^∗_ssSO(D+1) on a single edge without loss of generality. Then, by setting η_2=...=η_m=0 in Eq.(<ref>), we get
X=1/2η_1nτ_1n^-1
which parametrizes all of the simple fluxes satisfying X^[IJX^KL]=0 in so(D+1). Besides, recalling the decomposition n=n_1...n_m of the Hopf section n, we get
X = 1/2η_1n_1τ_1n_1^-1
h = n_1e^ξ^1τ_1n̅ñ_1^-1
with n̅=n_2...n_me^ξ^2τ_2...e^ξ^mτ_m(ñ_2...ñ_m)^-1. Recall the edge simplicity constraint surface T_es^∗ SO(D+1) defined by
T_es^∗ SO(D+1)={(h,X)∈ T^∗ SO(D+1)|X^[IJX^KL]=0},
it is easy to see that T_es^∗ SO(D+1)⊂ T_ss^∗ SO(D+1) is parametrized by (η_1,ξ_1, V_1, Ṽ_1, n̅) based on Eq.(<ref>), where V_1=n_1τ_1n_1^-1, Ṽ_1=ñ_1τ_1ñ_1^-1 with the Hopf sections n_1 and ñ_1 being given by the decompositions
n=n_1...n_m and ñ=ñ_1...ñ_m respectively.
Thus, by restricting the consideration to the edge simplicity constraint surface, the parametrization (<ref>) reproduces the twisted geometry parametrization introduced in our companion paper <cit.>.
We can further consider the symplectic reduction with respect to the edge simplicity constraint, which can be expressed as 𝒮_IJKL≡ p_[IJp_KL]=0 with p_IJ:=1/2∑_η_ V^_IJ in twisted geometry parameters. Notice that the Hamiltonian vector field of edge simplicity constraint is spanned by
ψ^𝒮_IJKL=2p_[IJ(X̂_KL]-∑_L^_KL]∂ _ξ_),
where X̂_KL is the vector field generating the adjoint action of X_KL on ℚ_m labelled by 𝕍, with X_KL is the so(D+1) algebra element given by
X_KL≡ X^IJ_KL=δ^I_[Kδ^J_L]. It is easy to verify that the vector field (<ref>) only induces the transformation of holonomy on the edge simplicity constraint surface, which reads
ℒ_α^IJKLψ^𝒮_IJKLh= 1/2η_1 α^IJKLV^1_[IJτ_KL]h= 1/2η_1 α̅^KLn_1(τ̅_KLn̅)e^ξ^1τ_1n_1^-1,
where α^IJKL is an arbitrary tensor satisfying α^IJKL=α^[IJKL] and α̅^KLτ̅_KL≡α^IJKLV^1_[IJ(n^-1_1τ_KL]n_1)∈ so(D-1)_τ_1. Thus, the component n̅ is just the gauge component with respect to edge simplicity constraint. By reducing the edge simplicity constraint surface with respect to the gauge orbit generated by ψ^𝒮_IJKL, we get the simplicity reduced phase space B_es given by
B_es≡ℝ_+× S^1×𝔻_1×𝔻̃_1 ≡{(η_1,ξ_1,V_1, Ṽ_1)},
where
η_1∈ [0,+∞), ξ_1∈[-π,π), V_1∈𝔻_1, Ṽ_1∈𝔻̃_1 with 𝔻_1 and 𝔻̃_1 are defined by Eq.(<ref>).
Correspondingly, the reduced symplectic structure on B_es
gives the Poisson brackets
{p̅_X, p̅_Y}= p̅_[X,Y], {p̃̅̃_X, p̃̅̃_Y}=p̃̅̃_[X,Y], {ξ_1,η_1}=1,
where p̅_X≡1/2η_1V^1_X=1/2η_1 V^1_IJX^IJ and p̃̅̃_X≡1/2η_1Ṽ^1_X=1/2η_1Ṽ^1_IJX^IJ. Specifically, the Poisson bracket between ξ_1 and (p̅_X, p̃̅̃_X) are given by
{ξ_1, p̅_X}=L^1_X(𝕍), {ξ_1, p̃̅̃_X}=L^1_X(𝕍̃).
Notice that these Poisson brackets are not independent of (V_2,..,V_m) and (Ṽ_2,...,Ṽ_m), since ξ_1 contains the information of the choices of the Hopf sections n and ñ, which depend on 𝕍 and 𝕍̃. Recalling the result of section <ref>, by using the decompositions n=n_1...n_m and ñ=ñ_1...ñ_m, one can choose the Hopf sections n and ñ to ensure that
L^1(𝕍)=L̅^1(V_1), and L^1(𝕍̃)=L̅^1(Ṽ_1).
Then, the symplectic structure on the reduced phase space B_es is given by Eqs.(<ref>), (<ref>) and (<ref>), which is identical to that given in our companion paper <cit.>. Further, the gauge reduction with respect to the Gaussian constraint and the treatment of the vertex simplicity constraint can be carried out following the same procedures as in <cit.>.
§ CONCLUSION AND OUTLOOK
The realization of gauge fixing in the quantum gauge reduction and the Fermion coupling in all-dimensional LQG require us to construct the coherent state in the full Hilbert space, which involves the non-simple representations of SO(D+1). Following previous experience, it is reasonable to consider the generalized twisted geometry coherent state, and thus it is necessary to establish the twisted geometry parametrization of the full SO(D+1) holonomy-flux phase space.
We established the generalized twisted geometry parametrization for a dense subspace of the full SO(D+1) holonomy-flux phase space. In particular, the twisted geometry parameters are adapted to the splitting of the Ashtekar connection, so as to capture the degrees of freedom of the intrinsic and extrinsic parts of the spatial geometry respectively. Moreover, the symplectic structure on the SO(D+1) holonomy-flux phase space is re-expressed in terms of the twisted geometry parameters.
Through studying the properties of the Hopf sections in the SO(D+1) Hopf fibre bundle, we obtained the Poisson algebra among the twisted geometry parameters. In particular, the relation between the twisted geometry parametrizations for the edge simplicity constraint surface and for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e is discussed. We pointed out that the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e reduces to that for the edge simplicity constraint surface upon carrying out the gauge reduction with respect to the edge simplicity constraint, which ensures that the treatment of the anomalous vertex simplicity constraint proposed in our companion paper <cit.> is still valid for the more general case considered in this article.
The twisted geometry parametrization for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e provides us with the tools necessary to construct the twisted geometry coherent state in the full Hilbert space of all-dimensional LQG. More explicitly, similar to the construction of the twisted geometry coherent state in the solution space of the edge simplicity constraint, one could decompose the heat-kernel coherent state of SO(D+1) based on the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e, and then select the terms dominated by the highest and lowest weights in each representation of SO(D+1), to form the twisted geometry coherent state in the full Hilbert space of all-dimensional LQG. This will be the subject of a follow-up work <cit.>.
It should be remarked that the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space is also valid for general SO(D+1) Yang-Mills gauge theory. Though the “geometry” may be meaningless outside the framework of gravity theory, the twisted geometry parameters provide a new perspective from which to analyze the Poisson structure of the SO(D+1) holonomy-flux phase space, which could help us to understand the quantum aspects of the corresponding SO(D+1) Yang-Mills gauge theory.
§ ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (NSFC) with Grants No. 12047519, No. 11775082, No. 11875006 and No. 11961131013.
|
http://arxiv.org/abs/2307.04132v2 | 20230709090426 | Reasoning over the Behaviour of Objects in Video-Clips for Adverb-Type Recognition | [
"Amrit Diggavi Seshadri",
"Alessandra Russo"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.SC"
] |
In this work, following the intuition that adverbs describing scene-sequences are best identified by reasoning over high-level concepts of object-behavior, we propose the design of a new framework that reasons over object-behaviours extracted from raw-video-clips to recognize the clip's corresponding adverb-types. Importantly, while previous works for general scene adverb-recognition assume knowledge of the clip's underlying action-types, our method is directly applicable in the more general problem setting where the action-type of a video-clip is unknown. Specifically, we propose a novel pipeline that extracts human-interpretable object-behaviour-facts from raw video clips and propose novel symbolic and transformer-based reasoning methods that operate over these extracted facts to identify adverb-types. Experimental results demonstrate that our proposed methods perform favourably against the previous state-of-the-art. Additionally, to support efforts in symbolic video-processing, we release two new datasets of object-behaviour-facts extracted from raw video clips - the MSR-VTT-ASP and ActivityNet-ASP datasets.
§ INTRODUCTION
In recent years, the task of recognizing the type of actions being performed in video-clips has gained much of the vision community's attention <cit.>. It is a relatively well studied problem, with practical applications in smart-home systems and robotics. Current state-of-the-art methods for action-type recognition fuse predictions from two-streams of convolutional neural networks (CNNs). One stream predicts action-type probabilities from stacked image-frames of the input video-clip while another stream predicts probabilities from stacked frames of the clip's optical-flow. The output from these two streams are fused together for inference. In particular, the Inflated 3D Convolutional Network (I3D) architecture <cit.> has demonstrated much success for action-type recognition by employing this two-stream paradigm with 3D-convolutional operations.
In contrast to action-type recognition - for which numerous architectures have been proposed, the problem of adverb-type recognition is less well explored. Adverbs further describe the nature and execution of generic action types, providing additional detail regarding intent, meaning and consequences. A device recording recipes in a kitchen for example might deem a cook in the action of “stirring" a pot slowly and completely to be performing a required and delicate step. The same action performed fast or partially on the other hand, might be of less consequence. Interestingly, adverbs can also prove useful even without any knowledge of the underlying action-type. We might for example deem all recordings that are performed slowly and completely to take precedence over partially executed work.
To our knowledge, there have been two architectures proposed to solve the task of adverb-type recognition in general-scene video clips <cit.>. However, these previous methods both assume the availability of ground-truth action-types as a prerequisite for adverb-type recognition and follow the trend set by previous action-recognition systems - encoding video clips using an I3D backbone. These practices make them unsatisfactory for two important reasons.
Firstly, ground-truth action-types are not usually known for raw video clips, making the previous methods inapplicable in many scenarios. One might attempt to compensate for this by using a pretrained action-type predictor to feed into the adverb-recognition model. However, doing so is a non-trivial and complex task - as previous methods <cit.> assume knowledge of over 100 distinct ground-truth action-type categories. Using predictions for more than 100 action-type categories invariably leads to incorrect and noisy input data - especially if the number of training samples are limited. Additionally, under such a bootstrapped framework, one would be forced to re-train both action-type and adverb-type predictors when new video-clips with new or unseen action-types emerge. Ideally, we would instead prefer to have a framework and model wherein adverb-type predictions are made without requiring any knowledge of the video-clip's action-types.
Secondly, while end-to-end black-box CNN models such as the I3D architecture have proved successful for action-type recognition, a key reason for this success has been the fact that CNNs excel at object recognition, and a video-clip's action-type is greatly constrained by the type of objects present within scenes. The same is not true for adverb-type recognition. Object-type may vary widely across different instances of the same adverb. A person cooking slowly for example presents a very different scene from a dog running slowly in a park. And while the use of optical-flow input does mitigate some of this problem by providing motion-related information, end-to-end CNN models fail to generalize well over such diverse scenes.
However, despite this added complexity of not being constrained by object-type, humans are usually able to easily identify adverb-types by reasoning over high-level concepts of object behaviour. Something happening slowly for example might be identified to mean that objects change very little between frames. Properties of other adverbs such as partially or completely are less straightforward to define, but again seem easier to identify by reasoning over higher-level concepts of object-behaviour than they are to identify by pattern-matching over diverse scenes that vary widely.
In this work, following the intuition that adverbs are best described by reasoning over higher-level concepts, we propose a novel framework that (1)Extracts discrete facts of object-behaviour from raw video clips (2)Reasons over those extracted facts to produce high-level summaries of object-behaviour (3)Predicts and aggregates adverb-types using down-stream models over those high-level summaries.
Importantly, unlike previous work for general scene adverb-recognition <cit.>, our framework does not assume any knowledge of the video-clip's action-type during training or inference - making it directly applicable to the more general problem setting wherein the action-type of a video clip is unknown. Our main contributions are summarized as follows:
* We propose the design of a novel action-free framework for adverb-type recognition in video clips - that extracts object-facts from raw video clips; reasons over those facts to learn high-level behaviour summaries; and makes predictions of adverb-types from those summaries.
* We propose a novel extraction phase for our framework that converts raw video clips to discrete Answer Set Programs (ASP) of facts - capturing information regarding objects moving within each clip. Using this new extraction phase, we release two new datasets of object-behaviour-facts - the `MSR-VTT-ASP' and the `ActivityNet-ASP' datasets.
* For the reasoning phase of our framework, we propose novel symbolic and transformer-based reasoning methods over our extracted ASP-facts to obtain higher-level summary vectors of object-behaviour.
* Finally, we evaluate the performances of the different symbolic and transformer-based-architectures that we propose within our framework, and make a comparison against the previous state-of-the-art.
Experiment results demonstrate that our new methods for adverb-type recognition perform favourably against the previous state of the art on video-clips from the MSR-VTT and ActivityNet datasets, providing a new means for adverb-type recognition when the action-type of a video-clip is unknown.
§ RELATED WORK
Action-Type Recognition: Simonyan et al. <cit.> was first to propose a two-stream 2D CNN network for action-type recognition - that employs a separate stream to process image-frames and a separate stream to process their optical flow. This two-stream method outperformed the previous method of predicting actions from features pooled across video-frame snips <cit.> by a large margin. Subsequently, 3D CNNs <cit.> were shown to outperform their 2D counterparts by better preserving temporal information across input frame sequences, and building on these ideas, the Two-Stream Inflated 3D CNN (I3D) network was proposed <cit.> - using two streams of 3D convolutional networks over stacked frames of a video clip's image frames and optical flow. This I3D model significantly outperformed the previous methods and is employed as the backbone of a number of state-of-the-art action-type recognition systems <cit.>. However, as pointed out earlier, the two-stream and 3D CNN paradigms operate end-to-end, directly over raw pixel maps of image frames or optical flow and fail to cleanly separate out and reason over individual object-behaviours across time-steps.
To reason about objects in scenes or possible next-actions, Rueda et al. <cit.> proposed the use of a Computational Causal Behaviour Model (CCBM) alongside a two-stream 3D CNN for action-type detection. While this method does involve reasoning components, it is very different from our proposed approach. While they maintain a state-space-model alongside a conventional two-stream network for better interpretability, our framework automatedly learns summary-representations of object-behaviour as a means to adverb-type recognition. Further, their state-space-model learning methodology requires the availability of predefined action precondition and effect templates. Adverb-types on the other hand are not generally associated with preconditions or post-conditions, and our system does not require any such knowledge.
Adverb-Type Recognition: Pang et al. <cit.> was first to explore the problem of adverb-type recognition in video clips, introducing the “Adverbs Describing Human Actions" (ADHA) dataset and employing a hybrid two-stream CNN along with expression detectors and human pose-estimates. However, their work addresses a problem setting different from the one that we are interested in. The ADHA dataset is focused on adverbs for human subjects, and places special focus on human pose and expression informed adverbs. We are interested in scenes comprising more general content that may not be human. Doughty et al. <cit.> scaled up the problem of adverb-type recognition to general-scene video clips, and released adverb-annotations for subsets of video-clips from the HowTo100M<cit.>, VATEX<cit.>, MSR-VTT<cit.> and ActivityNet<cit.> datasets while proposing new architectures for the task. However, as mentioned earlier, both of these prior works <cit.> assume knowledge of the video-clip's underlying action-types as a prerequisite for adverb-type recognition (with over 100 distinct ground-truth action-type categories), and they encode video-clips using an I3D backbone without attempting to reason over individual object-behaviours. In our work, we use the adverb-annotations released by Doughty et al. <cit.> for experiments on video-clips from the MSR-VTT and ActivityNet datasets - datasets for which raw video files are publicly available.
§ METHOD
Our adverb-type recognition framework (Figure <ref>) comprises three phases - an Extraction phase, a Reasoning phase and a Prediction phase. The Extraction phase extracts separate discrete, and human-interpretable object-behaviour-facts for each object detected to be of interest within the video clip. The Reasoning Phase summarizes those facts across time-steps into summary vectors for each object. The Prediction Phase makes downstream classifications, using separate SVMs to classify between
each adverb and its antonym. Finally, as we obtain separate SVM predictions for each object detected to be of interest in a clip, we aggregate results by majority-voting.
§.§ Extraction Phase
Figure <ref> shows a depiction of our extraction pipeline. Given a raw video clip, we first employ MaskRCNN <cit.> over delayed-captures of static frames from the video clip's image sequence - considering every fifth frame of the original clip. In doing so, we avoid processing successive frames between which very little changes. MaskRCNN gives us a collection of predicted object-types and their corresponding bounding boxes with confidence scores between 0 and 1. We ignore all detections made with a confidence score less than 0.3, and flag all detected patches with low confidence scores between 0.3 and 0.5 as `unknown' object-types[ To simplify our explorations and to reduce noise in the input data, in this work, we ignore `unknown' type object-behaviours detected by our extraction phase - leaving reasoning over those less-confident object-facts as scope for future work.]. Patches detected with a confidence score above 0.5 are recorded along with their predicted object-types.
To capture properties of motion for each of these detected object patches, we compute the pixel-wise Gunnar-Farneback optical flow <cit.> between consecutive delayed capture frames. These per-pixel optical-flow values are averaged within each detected object-bounding-box to give us a single average numeric value of optical-flow magnitude and a single average numeric value of optical-flow angle for each detected object-patch. To filter these numerous detections, we then slide a non-overlapping sliding window over the delayed capture frames (each window detection corresponding to a single time-step), and we assume that (1)Of the objects detected in a frame, only objects moving faster than the frame's average are of interest for adverb-type recognition, (2)Of those filtered cases, only objects detected consistently in at least half the frames of the sliding window can be considered important enough for adverb-recognition. It is necessary for us to make these assumptions/choices to reduce the complexity of the problem faced by subsequent phases. Automatedly learning optimal property-extractions for adverb-type recognition is scope for future work and poses a significantly more challenging task.
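A minimal sketch of the per-box flow statistics described above is given below. It assumes OpenCV's dense Farneback implementation and hypothetical bounding boxes supplied by the detector, and it omits the sliding-window filtering and thresholding steps.

import cv2
import numpy as np

def box_flow_stats(prev_frame, next_frame, boxes):
    # Average optical-flow magnitude and angle inside each detected box.
    # boxes: list of (x1, y1, x2, y2) pixel coordinates from the detector (assumed format).
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1], angleInDegrees=True)
    stats = []
    for (x1, y1, x2, y2) in boxes:
        stats.append((float(mag[y1:y2, x1:x2].mean()),
                      float(ang[y1:y2, x1:x2].mean())))
    return stats  # one (avg_magnitude, avg_angle) pair per box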
A consistently detected object of interest has its properties averaged across a time-step's window, and these properties are recorded as Answer Set Programming (ASP) <cit.> facts as shown in Figure <ref>, where “detected(person, 2)" means that an object of type `person' is detected at time-step 2. We also capture local temporal properties of objects such as `operation-area' and `movement-in-place' at each time step, and record the region of the frame `cell_occupancy' that the object occupies for that time-step. To simplify processing, optical-flow angles are bucketed into discrete directions north(n), north-east(ne), east(e), etc.; while numeric-values besides optical-flow magnitude are thresholded to very_small, small, medium, large and very_large. The implementation details of these extracted predicate properties are discussed in greater detail in Appendix Section 6.1.
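The exact predicate signatures follow the example facts shown above and Appendix Section 6.1; purely to give a flavour of the output, a simplified emitter might look as follows. The argument orders, the angle-bucketing rule, and the cell identifiers used here are assumptions of this sketch, not the released format.

def emit_asp_facts(obj_type, timestep, avg_mag, avg_ang_deg, cells):
    # Map an averaged flow angle (in degrees) to one of eight compass buckets.
    buckets = ["e", "ne", "n", "nw", "w", "sw", "s", "se"]
    direction = buckets[int(((avg_ang_deg + 22.5) % 360) // 45)]
    facts = [
        f"detected({obj_type}, {timestep}).",
        f"magnitude({obj_type}, {int(round(avg_mag))}, {timestep}).",
        f"angle({obj_type}, {direction}, {timestep}).",
    ]
    facts += [f"cell_occupancy({obj_type}, {c}, {timestep})." for c in cells]
    return facts

print("\n".join(emit_asp_facts("person", 2, 17.3, 95.0, ["c12", "c13"])))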
Our video-to-ASP extraction pipeline can be used to convert raw video-clips to ASP-programs in an online fashion. However, to simplify our training and evaluation procedures and to further research efforts in symbolic and neuro-symbolic video-processing, we instead preprocess video-clips taken from the MSR-VTT and ActivityNet datasets using our proposed pipeline in an offline manner and release new `MSR-VTT-ASP' and `ActivityNet-ASP' datasets consisting of extracted object-behaviour-facts and relevant background knowledge over predicate properties. Details of these new datasets are discussed further in Sections <ref>.
§.§ Reasoning Phase
§.§.§ Symbolic-Based Reasoning
In this work, we first consider employing the FastLAS inductive learning method <cit.> to automatedly learn indicator rules that plausibly define adverb-types. However, automatedly learning symbolic rules that reason over more than one time-step is an extremely challenging task (owing to the large number of possible variable groundings of rules). Rather than attempting to overcome the multi-time-step challenge, in this initial exploration, we consider learning a large number of simple single-time-step rules, that compositionally might inform overall adverb-type. In particular, for magnitude, angle, and operation_area predicates, we focus on learning range-rules that define upper or lower bounds of an object's predicate-properties at single time-steps, and for the cell-occupancy predicate, we focus on learning single-time step rules that outline rough left/right or up/down placement of an object within the frame.
As an example, Figures <ref> and <ref> depict FastLAS-learnt ASP range-rules that classify between categories of `adverb_A' and `antonym_A' motion for some collection of object-behaviour facts. According to these indicator rules, objects moving with an optical-flow magnitude between five and twenty at some time-step are considered to exhibit `adverb_A' behavior, while those objects moving with optical-flow magnitude outside those limits are considered to exhibit `antonym_A' behavior. These rules might not hold universally; however, they are identified by FastLAS as plausible explanations for some given batch of input behaviour-examples.
Similarly, from the training data, we learn indicator rules classifying between each (adverb, antonym) pair. To do so we sample small-balanced-batches of object-behaviour-facts from the training data - choosing 10 randomly sampled object-behaviours for each adverb, and 10 random sampled object-behaviours for it's antonym. We then run FastLAS separately over each balanced-batch along with common background-knowledge to obtain a large number of batch-wise plausible adverb/antonym indicator-rules over predicate-properties (such as those rules shown in Figure <ref>).
After all such single-time-step batch-plausible indicator-rules have been learnt for each adverb vs antonym task across the training data, we use those symbolic ASP rules to summarize object behaviours. Specifically, for an object's collection of behaviour-facts, we assign a 1 for an indicator-rule if that rule logically-fires for the given object's behaviour-facts, and we assign a 0 otherwise - so that from our collection of indicator-rules we obtain a vector of 0s and 1s (such as [1,1,1,0,1,1,...]) for each object-behaviour. All object-behaviours are converted in this manner for each adverb vs antonym task, and those vectors are used as rough behaviour-summaries for downstream adverb-type recognition. (Implementation details are further discussed in Appendix Section 6.3).
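Once the batch-wise indicator-rules have been learnt, turning an object-behaviour into its binary summary vector amounts to evaluating each rule against the behaviour's facts. The sketch below replaces the ASP machinery with plain Python predicates purely for illustration; the bounds shown are hypothetical, and in the actual pipeline each rule is a FastLAS-learnt ASP rule whose firing is checked with an ASP solver.

# Each learnt rule is represented here as a function over one time-step's properties;
# a rule "fires" for a behaviour if it holds at any of its time-steps.
hypothetical_rules = [
    lambda step: 5 <= step["magnitude"] <= 20,          # a magnitude range-rule
    lambda step: step["operation_area"] == "small",     # an operation-area rule
    lambda step: step["cell_x"] < 3,                    # rough left-of-frame placement
]

def behaviour_to_vector(behaviour_steps, rules=hypothetical_rules):
    return [int(any(rule(step) for step in behaviour_steps)) for rule in rules]

behaviour = [{"magnitude": 17, "operation_area": "small", "cell_x": 5},
             {"magnitude": 42, "operation_area": "medium", "cell_x": 1}]
print(behaviour_to_vector(behaviour))   # -> [1, 1, 1]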
§.§.§ Transformer-Based Reasoning
As an alternative to our single-time-step based symbolic-reasoning, we also propose multi-time-step transformer-based reasoning. We start by flattening the ASP-format object-behaviour properties detected by our Extraction Phase (as shown in Figure <ref>). We get rid of unnecessary syntactical detail and special characters that might otherwise confuse a sentence tokenizer, and record object-type only once per time-step to avoid redundancy. We also eliminate the explicit time-stamps (1,2,3...) associated with each logical fact. We are able to do this, provided that we maintain the correct chronological ordering of detected object-properties since transformer models already have provisions allowing them to recognize and reason over the positional-ordering of words in sentences.
Next, we consider Masked Language Modeling (MLM) <cit.> over object-behaviours to learn useful object-behaviour representations (Figure <ref>). In conventional MLM, some of the words of a natural-language sentence are masked out and a transformer model fitted with a shallow prediction-layer is trained to predict those masked words from the rest of the unmasked sentence - forcing the transformer to learn to encode sentence-structure and overall sentence meaning. Features output by the last transformer-layer are then typically extracted and used for related down-stream tasks such as text-classification. In the context of our object-behaviours problem setting, we directly extend this idea, by masking out some of the `value-words' that correspond to each object's particular behaviour (that might be a value of magnitude/angle/operation-area/etc. at some time-steps). We then train an MLM transformer model to predict those masked values from the rest of the unmasked object-behaviour (as shown in Figure <ref> [Specifically, we mask value-words with a probability of 20% and do not mask-out prompt-words such as `magnitude' and `angle' that occur in every example. Importantly, we also make sure not to mask object-types as they can be difficult to infer from object-motion, and forcing a model to predict them would detract from learning other behavioural-properties.]). In doing so, we force the transformer to learn to encode some overall meaning or dynamics of object-behaviour. The features output by the last transformer-layer are then used for down-stream adverb-type recognition.
In particular, we do not train the transformer model from scratch, but rather fine-tune a model that has been pretrained for natural-language MLM. We do this transfer-learning in order to exploit complex network reasoning properties that have already been learnt over very large datasets of natural language[Note: to limit the computational costs of fine-tuning, we truncate flat object behaviour inputs (Figure <ref>) at 512 tokens.].
Once we have fine-tuned our MLM model to reason over and unmask object-behaviours, we then use it to extract object-behaviour summary vectors. For each input object-behavior snippet in the dataset, we feed that flattened object-behavior to our trained transformer-model and extract the word-level vectors output by the transformer's final layer. Those word-level features are then averaged across the entire flattened sentence to give us a single summary vector - that encodes some overall multi-time-step object-behaviour information.
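A sketch of the summary-vector extraction is shown below, assuming a fine-tuned encoder is available (the checkpoint name stands in for the fine-tuned weights); mean-pooling the final hidden layer over non-padding tokens approximates the averaging described above.

```python
# Sketch: mean-pool the final hidden layer of the (fine-tuned) encoder over
# non-padding tokens to obtain one behaviour summary vector per snippet.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

@torch.no_grad()
def behaviour_summary(flat_behaviour: str) -> torch.Tensor:
    enc = tokenizer(flat_behaviour, return_tensors="pt", truncation=True, max_length=512)
    hidden = encoder(**enc).last_hidden_state          # (1, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)         # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)        # (1, dim) summary vector
```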
§.§ Prediction Phase
Finally, each summary object-behaviour feature vector (output by either the single-time-step symbolic-reasoning approach or the multi-time-step transformer-reasoning approach) is then fed into a separate Support-Vector Machine (SVM) with rbf kernel for binary classification between each adverb-type and its antonym. At test time, the adverb-vs-antonym predictions from multiple object-behaviours detected to be of interest in a single clip are aggregated into a single decision by a majority-vote.
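A minimal sketch of this prediction phase with scikit-learn, assuming labels of 1 for the adverb and 0 for its antonym, is:

```python
# Minimal sketch (scikit-learn assumed); one such classifier per adverb/antonym pair.
import numpy as np
from sklearn.svm import SVC

def train_adverb_classifier(train_vectors, train_labels) -> SVC:
    clf = SVC(kernel="rbf")
    clf.fit(np.asarray(train_vectors), np.asarray(train_labels))
    return clf

def predict_clip(clf: SVC, object_vectors) -> int:
    """Majority vote over the per-object predictions for one clip."""
    votes = clf.predict(np.asarray(object_vectors))
    return int(np.sum(votes == 1) >= np.sum(votes == 0))
```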
§ EXPERIMENTS
Datasets: We evaluate our method on subsets of the MSR-VTT <cit.> and ActivityNet <cit.> datasets, using adverb-annotations by Doughty et al. <cit.>. We process clips where both raw-footage and adverb-annotations are available using our Extraction Phase (Section <ref>), to obtain 1309 ASP-programs for our new MSR-VTT-ASP dataset and 1751 ASP-programs for our new ActivityNet-ASP dataset - where each program contains facts of multiple object-behaviours detected to be of interest within the corresponding video-clip, along with background knowledge of predicate properties (Appendix 6.1). Each program is labeled with one or more of 22 adverb-types (11 adverb/antonym pairs) according to the source clip's labels[We drop the loudly/quietly category since neither our method nor the previous work uses a clip's audio.]: (1) upwards/downwards, (2) forwards/backwards, (3) outdoor/indoor, (4) slowly/quickly, (5) gently/firmly, (6) out/in, (7) partially/completely, (8) properly/improperly, (9) periodically/continuously, (10) instantly/gradually, (11) off/on. This leaves us with 1674 unique (asp-program, adverb) pairs from MSR-VTT and 1824 unique (asp-program, adverb) pairs from ActivityNet. We randomly split these datasets into training and testing sets using 70/30 stratified splits (stratified by adverb-type) to obtain 1171 training and 503 testing samples for MSR-VTT-ASP and 1276 training and 548 testing samples for ActivityNet-ASP. Finally, having created these two new ASP-datasets and splits, we turn to the requirements of our adverb-type recognition framework. We require snippets of individual object-behaviours to reason over for adverb-type prediction. So, for each ASP-program, we cut out behaviour snippets for separate detected object-types - so that one snippet corresponds to one object-type's behaviour over the course of a video (as shown in Figure <ref>). Each object-behaviour snippet is annotated with the adverb-type of its source program.[Note: As each video-clip contains multiple objects, the number of object-behaviour snippets is much larger than the number of video-clips.] These snippets of object-behaviour are then repeated within each adverb-category to balance out the number of samples used for training and testing in each category.
§.§ Symbolic Based Reasoning
As mentioned in Section <ref>, we learn a large number of batch-wise plausible indicator rules using FastLAS over balanced batches of object-behaviours from the training-set within each adverb-vs-antonym category, and use those learnt indicator-rules to extract summary vectors of object behaviours for each adverb-vs-antonym classification task. Those summary vectors from the training-set are then used to train separate SVM classifiers to distinguish between each adverb and its antonym. At test time, SVM predictions from multiple object-behaviours within individual source-video-clips from the test-set are aggregated by majority-vote to distinguish between adverbs and antonyms in each category. As shown in Figure <ref>, the prediction accuracy of this single-time-step symbolic method is highest for both the MSR-VTT and ActivityNet datasets when distinguishing between forwards-and-backwards type adverbs - which might plausibly be inferred from a grouping of single-time-step behaviour-properties. Performance is worst (zero) for more complex adverb-types: periodically-continuously, instantly-gradually and off-on - for which no single-time-step batch-wise-plausible range rules are found. Table <ref> shows averaged accuracies across all adverb/antonym categories in each dataset.
§.§ Transformer Based Reasoning
For our experiments, we consider two light-weight versions of the landmark BERT <cit.> transformer model - namely the ALBERT <cit.> and DistilBERT <cit.> architectures. As outlined in Section <ref>, we fine-tune each pre-trained transformer model by flattening object-behaviour snippets and making them unmask randomly masked `value-words'. We then obtain behaviour summary-vectors for each object-snippet by feeding their flat representations to the trained transformers without masking and averaging the word-level features output by the last hidden layer across the entire sentence. As in the symbolic case, a separate SVM with rbf kernel is used over these extracted summary-vectors, along with majority-voting, to distinguish between each adverb and its antonym. We find that both ALBERT and DistilBERT achieve comparable average performance, while out-performing our symbolic approach by a wide margin (Table <ref>). However, when each adverb-vs-antonym recognition task is viewed separately (Figure <ref>), results are mixed. The symbolic approach performs best in the `forward/backward' category, while one or the other of our transformer-based methods works best for other adverbs. The general superiority of the transformer approach is largely to be expected, given that it jointly reasons over multiple time-steps and multiple predicate-properties, while our symbolic approach composes single-time-step, single-predicate properties. It can be difficult to interpret why one reasoning method outperforms another within a given category. However, it is encouraging that not all reasoning models exhibit the same performance, since we can achieve higher overall accuracy by separately using the most appropriate reasoning method for each category - as shown in Table <ref>.
§.§ Comparison with State-of-the-Art
We next make a strict comparison between our approaches and the action-dependent previous state-of-the-art <cit.>. For the previous methods, we randomly flip action-type labels for 5% of action-categories in the train and test sets, so that we obtain `imperfect-actions' that represent an action-type prediction accuracy of 95% (which one might approach if a state-of-the-art action-type predictor <cit.> were trained on full versions of the two datasets and used for prediction). In the case of MSR-VTT-ASP, our joint Symbolic and Transformer action-free reasoning method is highly competitive, making a 3.71% improvement over PseudoAdverbs <cit.> in the imperfect-actions case and outperforming previous works even in the scenario where all ground-truth actions are explicitly known (Table <ref>). In the case of ActivityNet-ASP, our joint Symbolic and Transformer action-free reasoning method achieves 1.86% lower accuracy than PseudoAdverbs in the imperfect-actions case, but is still highly useful, as it offers an action-free alternative to previous methods at a relatively small drop in performance - with no requirement to train or maintain action-type predictors.
Broader Impact: Similar to action-type recognition methods, we reflect that one may attempt to use adverb-type recognition to maliciously interpret and monitor video-footage. However, we also note that improved adverb-type recognition, when used ethically for improved video-interpretation, offers significant benefits to human computer interaction and robotics.
Limitations and Scope for Future Work: To our knowledge, we are the first to propose action-free methods and first to reason-over object behaviours for adverb-type recognition. As such there are several possible directions for further investigation. Primarily, for transformers we explored MLM-modeling using light-weight transformer models, and our symbolic-reasoning method is limited to single-time step and single-predicate type rules. Scope for future work then includes exploring alternative transformer-modeling/architectures (such as causal modeling using GPT-3 <cit.>), and reasoning over multiple-time steps and multiple-predicate-properties using symbolic-reasoning.
§ CONCLUSION
In this work, we proposed the design of a new framework that reasons over object-behaviours to recognize a video-clip's adverb-types. Importantly, unlike previous work, our method is action-free and is directly applicable when the action-type of a video-clip is unknown. We proposed a novel pipeline to extract human-interpretable object-behaviour-facts from raw video clips and used that pipeline to create two new datasets of object-behaviour-facts - the MSR-VTT-ASP and ActivityNet-ASP datasets. Finally, we proposed novel symbolic and transformer based reasoning methods that reason over those extracted facts to distinguish between adverb/antonym types. Experiment results demonstrate that our proposed methods perform favourably against the previous state-of-the-art.
§ APPENDIX
§.§ Implementation Details of the Extraction Phase
In this section, we describe the design of our extraction-phase in greater detail. Figure <ref> shows a depiction of this pipeline.
As mentioned earlier, given a raw video clip, we first employ MaskRCNN <cit.> over delayed-captures of static frames from the video clip's image sequence - the delay is added so that we only consider every fifth frame of the original clip and avoid processing immediately successive frames (between which very little changes). MaskRCNN gives us a collection of predicted object-types and bounding boxes along with their corresponding confidence scores between 0 and 1. We ignore all detections made with a confidence score less than 0.3, and flag all detected patches with low confidence scores between 0.3 and 0.5 as `unknown' object-types. Patches detected with a confidence score above 0.5 are recorded along with their predicted object-types.
To capture properties of motion for each of these detected object patches, we compute the pixel-wise Gunnar-Farneback optical flow <cit.> between consecutive delayed capture frames. These per-pixel optical-flow values are averaged within each detected object bounding box to give us a single numeric value of optical-flow magnitude and a single numeric value of optical-flow angle for each detected object-patch.
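A sketch of this per-box flow computation with OpenCV is shown below; the Farneback parameters are the common documentation defaults, not necessarily the exact values used here.

```python
# Sketch: Farneback optical flow between two delayed-capture frames, averaged
# inside one detected bounding box (OpenCV assumed).
import cv2

def box_flow(prev_bgr, next_bgr, box):
    """box = (x1, y1, x2, y2); returns (mean magnitude, mean angle in degrees)."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_g = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_g, next_g, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1], angleInDegrees=True)
    x1, y1, x2, y2 = box
    return float(mag[y1:y2, x1:x2].mean()), float(ang[y1:y2, x1:x2].mean())
```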
To filter these numerous detections for the most adverb-relevant information, we make two important assumptions.
* First, we assume that of the objects detected in a scene, faster-moving objects are of more interest for adverb-type recognition than slower-moving objects within the same scene. This is a reasonable assumption to make since we are trying to design a system that mimics human judgement of adverb recognition in video clips, and to humans, faster-moving objects are usually more eye-catching and take precedence over slower-moving objects.
* Next, we make the assumption that objects whose behaviour determine the video clip's overall adverb-type must be detected to be of interest (moving faster than other objects within the same scene) with some level of consistency. If an object is deemed to be of interest only fleetingly, then it is unlikely to determine the overall categorization of a video clip.
Acting on these two assumptions, after computing averaged optical-flow properties for each bounding box as described above, we then run a non-overlapping sliding window over the delayed capture frames and filter out objects that (1) Do not have optical-flow magnitude above the average of all objects detected in the same frame (2) Do not pass the first filtering step for at least half the delayed-capture frames encompassed by the sliding window. In our implementation, we use a sliding window of size five -within which period, the types of objects being portrayed do not usually change much.
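The sketch below illustrates the two filtering steps on a toy window of five frames; the data layout (one dictionary per frame, mapping object-type to mean optical-flow magnitude) is an assumption made for clarity.

```python
# Toy sketch of the two-step filter over one non-overlapping sliding window.
def objects_of_interest(window, min_frames=None):
    min_frames = min_frames or (len(window) + 1) // 2      # at least half the window
    counts = {}
    for frame in window:
        if not frame:
            continue
        frame_avg = sum(frame.values()) / len(frame)
        for obj, mag in frame.items():
            if mag > frame_avg:                            # step 1: above per-frame average
                counts[obj] = counts.get(obj, 0) + 1
    return {obj for obj, c in counts.items() if c >= min_frames}  # step 2: consistency

window = [{"person": 4.0, "car": 1.0}, {"person": 3.5, "car": 0.8},
          {"person": 2.9, "car": 1.1}, {"person": 3.0}, {"person": 2.7, "car": 0.5}]
print(objects_of_interest(window))  # -> {'person'}
```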
Finally, to simplify the tracking of object behaviour between frames, we ignore duplicate object-types detected within each delayed frame and record only the object with the highest optical-flow magnitude in a contest between two or more objects of the same type. Once we have filtered our MaskRCNN detections this way, the per-frame properties of optical-flow magnitude, optical-flow angle and bounding-box size for objects are averaged across all detections of the same object-type within each window. Since scenes do not usually change much within the span of a window, these object-properties that we are averaging for a given object-type usually pertain to the same physical object.
Properties recorded from each window correspond to a separate time-step and are time-stamped as ASP facts accordingly. As shown in Figure <ref>, “detected(person, 3)" means that an object of type `person' is detected at time-step 3 - corresponding to the third window scan.
In addition to these averaged per-frame properties, we also capture the local temporal properties of `operation-area' and `movement-in-place' for each object within a window. Operation-area captures the size of the area within which a single object-type `lives' for the span of a window, i.e., it is the product of (xmax-xmin) and (ymax-ymin) computed over all detected bounding-box coordinates. Movement-in-place, on the other hand, is the ratio of the operation-area to the average bounding-box size of the detected object within a window. The more an object moves around within a given window, the larger this ratio will be. To simplify the down-stream reasoning process, all numeric properties besides optical-flow magnitude are placed into discrete buckets, such as `small', `very-small', `medium', etc.; while angles are categorized into discrete sectors `north(n)', `north-east(ne)', `east(e)' and so on. The exact numerical range of each discrete bucket/sector is specified in the accompanying code implementation.
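A sketch of these two window-level properties and the bucketing step is given below; the bucket edges and names are illustrative, since the exact ranges are only specified in the released implementation.

```python
# Sketch: window-level temporal properties plus discretisation (edges illustrative).
def window_properties(boxes):
    """boxes: list of (x1, y1, x2, y2) detections of one object-type within a window."""
    xs = [x for b in boxes for x in (b[0], b[2])]
    ys = [y for b in boxes for y in (b[1], b[3])]
    operation_area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    mean_box_area = sum((b[2] - b[0]) * (b[3] - b[1]) for b in boxes) / len(boxes)
    movement_in_place = operation_area / mean_box_area
    return operation_area, movement_in_place

def bucket(value, edges=(1, 2, 4, 8),
           names=("very_small", "small", "medium", "large", "very_large")):
    """Map a numeric property onto a discrete bucket name."""
    for edge, name in zip(edges, names):
        if value < edge:
            return name
    return names[-1]
```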
Besides the object properties already mentioned, we also record the region of the video frame within which each object's operation-area is maximum (i.e., the section of the frame within which the object `lives'). Rather than mapping each frame's area using a series of flat grid-cell locations such as C_0,C_1,C_2,C_3... and assigning cell-ids to each object, in order to allow a more intuitive reasoning process, we employ a hierarchy of relative placement.
As shown in Figure <ref>, at the highest level of this hierarchy (level 0), the frame is split into 4 regions, (top, left), (top, right), (bottom, left) and (bottom, right). At the second level (level 1), each of these regions is further split into another 4 regions, and again at level 2, the process is repeated. Describing the location of an object's operation-area using this hierarchy of relative placement then allows us to make fairly simple inferences. For example, if level 1 placement stays unchanged and the level 2 toggles from left to right, we can easily determine that the object has moved right by a small amount, and if instead the level 1 placement toggles from top to bottom, then we can say that the object has moved downward by a significant amount. As shown in Figure <ref>, cell-occupancy predicates map the operation-area of detected objects in each window using this hierarchy of relative-placement.
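The following sketch maps a point (for example, the centre of an object's operation-area) through this hierarchy of relative placement; the three-level depth matches the description above, while the normalisation of coordinates to [0, 1) is an assumption.

```python
# Sketch: hierarchical relative placement of a normalised point.
def relative_placement(x, y, levels=3):
    cells = []
    for _ in range(levels):
        horiz = "left" if x < 0.5 else "right"
        vert = "top" if y < 0.5 else "bottom"
        cells.append((vert, horiz))
        # zoom into the chosen quadrant before descending to the next level
        x = x * 2 if x < 0.5 else (x - 0.5) * 2
        y = y * 2 if y < 0.5 else (y - 0.5) * 2
    return cells

print(relative_placement(0.30, 0.70))
# -> [('bottom', 'left'), ('top', 'right'), ('bottom', 'left')]
```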
Finally, for clarity, Figure <ref> shows a trimmed example of an ASP-program, computed from a video-clip using this pipeline. The figure shows some selected background knowledge and object-properties detected over a single time step. Importantly, we highlight that our background knowledge includes information on the `opposites' of relative directions, as well as `less-than', `clockwise' and `anticlockwise' orderings over bucketed numeric values - so that we can reason about ranges in symbolic approaches. Importantly, these ordering predicates are formulated with a measure of distance to allow for range-reasoning. For example, `clockwise(n, ne, 1)' indicates that northeast is clockwise of north by one-tick while `clockwise(n, e, 2)' indicates that east is clockwise from north by two-ticks. Similarly `less-than(very-small, small, 1)' indicates that small is larger than very-small by one step, whereas `less-than(very-small, medium, 2)' indicates that medium is larger than very-small by two steps.
§.§ MSR-VTT-ASP and ActivityNet-ASP datasets
As discussed in Section <ref>, in this work, using our extraction-phase pipeline, we process clips where both raw-footage and adverb-annotations (from Doughty et al. <cit.>) are available for the MSR-VTT <cit.> and ActivityNet <cit.> video-datasets to create the new MSR-VTT-ASP and ActivityNet-ASP datasets of ASP-programs. Each dataset contains facts of multiple object-behaviours detected to be of interest within the corresponding video clip, along with background knowledge of predicate properties - as shown in Figure <ref> (the full background knowledge of predicate-properties is specified in the ASP-files of the new datasets).
As discussed in Section <ref>, each program is labeled with one or more of 22 adverb-types (11 adverb/antonym pairs) according to the source clip's labels: (1) upwards/downwards, (2) forwards/backwards, (3) outdoor/indoor, (4) slowly/quickly, (5) gently/firmly, (6) out/in, (7) partially/completely, (8) properly/improperly, (9) periodically/continuously, (10) instantly/gradually, (11) off/on, and we split these datasets into 70/30 train/test stratified splits (stratified by adverb-type) to obtain 1171 training and 503 testing samples for MSR-VTT-ASP and 1276 training and 548 testing samples for ActivityNet-ASP.
Table <ref> shows summary properties of these two new datasets.
§.§ Implementation Details of Symbolic-Based Reasoning
As mentioned in Section <ref>, we consider employing the FastLAS inductive learning method <cit.> to automatedly learn some governing rules over these extracted facts - so as to explain the overall adverb-type categorization of each video clip.
In using FastLAS to learn such rules, we are primarily constrained by the number and type of variables that each rule can use. The more variables a rule allows, the more possible groundings it can take, and the longer rule learning takes to complete. And while each extracted instance of object behaviour possesses multiple facts of the same predicates across different time-steps (such as magnitude at time-step 1, magnitude at time-step 2, etc.), owing to the large number of possible groundings, automatedly learning rules that reason over more than one time-step is especially challenging for this task.
In this work, rather than attempting to overcome these challenges and learn a few very complex rules over multiple time-steps and multiple predicate-properties, we instead consider learning a large number of simpler rules, that compositionally might inform overall adverb-type.
To limit the number of free variables that FastLAS has to deal with, we focus on learning range-rules that define upper or lower bounds on an object's predicate-properties at single time-steps. To illustrate this idea, Figure <ref> shows a toy example that we feed into FastLAS. The example specifies the behaviour of four objects: a car, a plane, a person, and a cat. Each of these behaviours has a corresponding optical-flow magnitude for an arbitrary time-step, and each object-behaviour (which is a positive example for our rule-learning problem) is also associated with a particular class type - either `strange' or `not-strange'. The first class-type mentioned in a #pos header in the figure is the one that we wish to associate with the object-behaviour. The second class type mentioned in the header is what the object-behaviour is not.
As shown in this toy example, in order to reduce the number of variable-values that FastLAS deals with, we also use discretized versions of optical-flow magnitude such as `five-to-ten', `ten-to-fifteen', etc.
The language-bias shown in the figure specifies that the head atom of any learnt rule must be a class type (`strange' or `not-strange'), and must generalize over variable objects-types. The language-bias also specifies that body atoms (if used) must capture some range property over magnitude. As FastLAS can learn to use one or none of each of the specified `less-than' body atoms, a learnt-rule might enforce an upper bound on magnitude value, a lower bound on magnitude value, or neither. Additionally, as the `number-of-steps' field in the less-than predicate is specified as a FastLAS numeric-variable (num-var), FastLAS is allowed to learn numeric-constraints that further explain range-rules.
For this particular toy example, we can explain all of the provided object-behaviours by deeming magnitudes between 5 and 20 to correspond to the class `strange' and other magnitude values to correspond to the class `not-strange'. FastLAS does in fact discover such corresponding rules, as shown in Figure <ref>. Figure <ref> shows a depiction of these learnt ranges for better clarity.
We can similarly employ these range-style language-biases for other predicate properties such as optical-flow angle, and operation area. Figure <ref> shows how we might do so. As also shown in Figure <ref>, for cell-occupancy we consider using a slightly different language bias - we allow for rule-body conditions that consist of: (A)A variable relative direction along the vertical (top/bottom) and a constant horizontal direction (left/right), (B)A variable direction along the horizontal (left/right) and a constant vertical direction (top/bottom) or (C)Both horizontal and vertical relative directions specified as constants. We also use a numeric-variable (num-var) value for the level of cell-hierarchy used by a rule, so that we can learn rules that apply to different levels.
An important facet of this type of observational-predicate rule learning is that it specifically requires numeric rule learning (either to learn constraints over the number-of-steps range property or the level-of-hierarchy as described), and we have chosen to employ FastLAS as it is the only framework that allows this type of automated numeric-rule learning over ASP programs.
Generalizing our toy example for the more complex problem-setting of recognizing adverb-types from object-behaviours is straightforward. We use recorded object behaviours from our video-to-ASP pipeline as positive examples in the rule-learning setup, wherein the class associated with each object-behaviour is the ground-truth adverb-type of the overall video clip that an object hails from. Naturally, the class not to be associated with each object behaviour is the antonym of its adverb-type. Figure <ref> shows a truncated example of a detected object's behaviour formatted for FastLAS rule-learning. To simplify our explorations and to reduce noise in the training data, we ignore `unknown' object-behaviours detected by our pipeline.
However, problems arise in using this learning methodology as-is over sets of object-behaviours for given (adverb, antonym) pairs. Firstly, if the set of object-behaviours is not balanced to have an equal number of objects for both adverb and antonym, then we might get the best coverage by just predicting a single class rule without any body conditions. This problem of unbalanced data is easily solved by repeating object-behaviour examples in the training data to balance out the adverb/antonym classes. The next problem is more important - since the adverb-recognition setting is quite noisy (with many of our detected object-behaviours not necessarily impacting a video's overall adverb-type), for large sets of object-behaviours, even after balancing, we find that rules learnt from our simple language biases (Figure <ref>) are not able to cover more than 50% of behaviours. Then, directly predicting one or another adverb/antonym head with no body conditions again becomes the best strategy for maximum coverage.
As mentioned earlier, increasing the complexity of possible rules to abate this problem comes with its own set of challenges (namely rule-learning can slow down to the point of becoming computationally impractical). So, rather than increasing the complexity of our language biases, we consider smaller-batches of balanced subsets of the training data - which pose a less noisy and less complex problem setting when each batch is viewed separately. We run FastLAS separately over each balanced-batch of object behaviours to get batch-wise plausible rules for each of our language-biases. When the rules returned by FastLAS for a batch and language bias are non-trivial (possess body-conditions), we then record them as indicators of adverb-type.
After all such indicator-rules have been extracted from a stream of balanced-batches of the full training set for each (adverb, antonym) pair using our language-biases (Figure <ref>), we then consider composing their results together using Support Vector Machines (SVMs). Specifically, given an input object-behaviour and an adverb vs antonym task, we assign a 1 to each corresponding indicator-rule if the rule fires for the given object's behaviour, and assign a 0 otherwise - so that from our collection of indicator-rules for the adverb/antonym task, we obtain a feature vector of 1s and 0s (such as [1,1,1,0,1,1,...]) for the object-behaviour. The entire balanced training-set is converted in this manner for each adverb vs antonym task. A separate binary SVM with rbf kernel is trained over these extracted features to classify between every adverb and its antonym.
At inference time, a raw video clip is converted to an ASP program of object behaviors, and all the indicator-rules are checked to obtain vectors of zeros and ones for each object. All the SVMs make their adverb/antonym predictions (over features from their corresponding indicator-rules) for each detected object, and predictions from multiple objects detected within a single-clip are aggregated by a simple voting mechanism in each adverb vs antonym category.
§.§ Compute Requirements for Experiments
All experiments presented in this work can be reproduced using a single P5000 GPU device. Using this resource, ASP-program facts were extracted for the full MSR-VTT-ASP and ActivityNet-ASP datasets sequentially over 2 days, while transformer-based fine-tuning completes in under 1 hour per dataset for both the DistilBERT and ALBERT architectures. Learning rules from balanced-batches of object-behaviours using FastLAS and a single CPU requires roughly 20 hours to complete for each dataset. All SVM training and inference completes in under 5 minutes.
entry_id: http://arxiv.org/abs/2307.04157v2
published: 20230709121343
title: DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer
authors: Dan Ruta, Gemma Canet Tarrés, Andrew Gilbert, Eli Shechtman, Nicholas Kolkin, John Collomosse
primary_category: cs.CV
categories: cs.CV
Figure 1. Deformable style transfer using DIFF-NST, compared to baselines: NNST <cit.>, CAST <cit.>, NeAT <cit.>, and PARASOL <cit.>. Our DIFF-NST method performs style transfer with much stronger style-based form alteration - matching the shapes and structures to those in the style image, not just the colors and textures. More in Fig <ref>. Zoom for details.
Neural Style Transfer (NST) is the field of study applying neural techniques to modify the artistic appearance of a content image to match the style of a reference style image.
Traditionally, NST methods have focused on texture-based image edits, affecting mostly low level information and keeping most image structures the same. However, style-based deformation of the content is desirable for some styles, especially in cases where the style is abstract or the primary concept of the style is in its deformed rendition of some content.
With the recent introduction of diffusion models, such as Stable Diffusion, we can access far more powerful image generation techniques, enabling new possibilities.
In our work, we propose using this new class of models to perform style transfer with deformable content - a capability that has remained elusive in previous models.
We show how leveraging the priors of these models can expose new artistic controls at inference time, and we document our findings in exploring this new direction for the field of style transfer.
§ INTRODUCTION
Neural Style Transfer (NST) aims at re-rendering the content of one image with the distinctive visual appearance of a second style image, typically an artwork. Most prior work has focused on low level style, represented as colors and textures. However, artistic style covers a broader gamut of visual properties, including purposeful geometric alterations to the depicted content, often called form <cit.>.
We introduce a novel NST approach that considers not only low-level color and texture changes but also higher-level style-based geometric alterations to the depicted content. We aim to maintain the object structure so that it resembles the original content image and remains identifiable as such, but with style-based deformations of the content reflecting the artist's original intent as they depicted their subject matter in the exemplar artwork image. Such content deformations have been more challenging to achieve, given the need for a higher-level spatial semantic understanding of subject and/or scene information <cit.>.
Learning priors regarding the interplay of artistic style, semantics, and intentional deviations from photo-realistic geometry is non-trivial and not generally a part of NST pipelines. However, recent diffusion-based image generation literature has made impressive progress in modeling various visual concepts <cit.>, accurately modeling how objects fit into the world around them.
We leverage these extensively learned priors in our work, adapting them to NST. We adapt them in our DIFF-NST model to function without text prompts in an exemplar-based setting, similar to more traditional NST. A text-less, exemplar-based approach is desirable for some stylistic edits, as textual prompts would require extensive descriptions of the style, which may be difficult or impossible to articulate fully. We build the first NST model to make significant high-level edits to content images. We compare our work to several baselines and show state-of-the-art user preference in user studies.
§ RELATED WORK
The seminal work of Gatys' Neural Style Transfer (NST) <cit.> enabled neural techniques for transferring the artistic style appearance of a reference artwork to an unstylized depiction of some content - typically a photograph. Follow-up works created feed-forward, optimization free approaches to achieve this <cit.>. Other techniques for NST emerged, such as optimal transport <cit.>, hyper-networks <cit.>, and Neural Neighbours <cit.>. Attention based techniques later emerged <cit.>, with further follow-up improvements to contrastive losses <cit.>, and scaling to high resolution with improvements to robustness and detail propagation <cit.>. Deformation in style transfer has been explored in previous work <cit.>, based on detecting shared keypoints between the style and content, thereby limited by a shared depicted subject. Regarding fine-grained representation space for artistic style, ALADIN <cit.> introduced the first solution to this training over their fine-grained BAM-FG dataset. This was later evolved into ALADIN-ViT <cit.> using a Vision Transformer <cit.> for stronger expressivity, and later as ALADIN-NST <cit.>, with stronger disentanglement between content and style by changing BAM-FG <cit.> for a fully disentangled, synthetic dataset.
Within the generative image domain, sizeable text-to-image diffusion models such as Dall-e 2 <cit.>, Parti <cit.>, Imagen <cit.>, and e-Diffi <cit.> have recently made significant advances in image generation fidelity and control, enabling free-form text prompts as an input control vector for guiding image synthesis, with unprecedented quality. These models are trained on large datasets and require prohibitive amounts of computation. Latent Diffusion Models <cit.> introduced the concept of applying the diffusion process to a smaller, latent representation of images rather than operating in pixel space like the previous works. This dramatically reduces the compute requirements for training and, more importantly, inference. Stability AI <cit.> democratized comprehensive open access to such models by open sourcing weights for an LDM trained on a subset of the LAION <cit.> dataset.
Much follow-up research has been enabled and built on these pre-trained weights, known as the Stable Diffusion model. Due to the still prohibitive training costs, several works have studied the personalization of existing pre-trained model weights for new concepts, such as Dreambooth <cit.>, Textual Inversion <cit.>, and Custom Diffusion <cit.>. Other works have studied enabling new ways to control these models for tasks such as subject-oriented editing <cit.>. Or focusing on more general image editing based on text-based prompt changes <cit.>. However, most of these techniques aim at semantic changes or require text-based prompt changes. Text-less exemplar-based stylistic edits have not commonly been explicitly explored with diffusion models. Recently, PARASOL <cit.> has used an ALADIN-ViT style embedding to perform style-based image generation, with some capabilities of maintaining content structure.
§ METHOD
To push beyond the traditional boundaries of texture-only style transfer, we wish to leverage the significant learned model priors such as Stable Diffusion <cit.>, having been trained on large amounts of data, with typically inaccessible amounts of compute. In our approach, as shown in Figure <ref> we freeze the pre-trained weights and train several modules of fully connected layers in each UNet self-attention block. We interleave pre-extracted content noise used for shapes and composition and the style attention values from the style image. These are used across reverse diffusion timesteps, generating a final stylized image using content and style information extracted from the interleaved data.
§.§ Preliminary analysis of style information in attention space
Prior work <cit.> has shown that early diffusion timesteps affect an image's global structural and compositional information, whereas later timesteps affect local fine details. Inspired by this, we set out to determine which timesteps of the diffusion process control style and which control content.
Given a lack of research around exemplar-based Neural Style Transfer with diffusion models, we use a prompt-based model, prompt-to-prompt <cit.>, to carry out this visualization. We use ChatGPT <cit.> to generate 20 content prompts, and we further define 10 style modifier prompts. With the prompt-to-prompt pipeline (operating over the Stable Diffusion LDM weights), we use the content prompts to generate reference content images, and we combine each content prompt with each style modifier prompt to re-generate the content images with the different explicitly defined styles still using prompt-to-prompt <cit.>. At the end of the process, we have 20 reference content example images and 200 "stylized" images. During the generation process, we extract attention values for analysis. We average the differences between the content example images' attention values and each of their 10 stylized variants', at each timestep. Fig 1 in the supplementary materials visualizes the average differences between these attention values at the diffusion timesteps. The red indicates a larger difference between the original content image and its stylized versions. Given that the structural and compositional information of the example content and their "stylized" counterparts is similar, we can infer that the stylistic differences relate to the higher attention discrepancies found at the later timesteps. This preliminary exploratory experiment clarifies the different effects of diffusion timesteps across the LDM generation process.
An additional preliminary experiment using these prompt-to-prompt images is an analysis of where the style information is captured in the LDM activations. We explicitly focus on the attention mechanism, where 𝒬, 𝒦, and 𝒱 values are used in the attention process <cit.>. We generate a base non-stylized image with the content prompt and then stylized variants with style modifier prompts. We extract attention values from the content-only prompt generation and replace the attention values of the stylized generation with those from the content-only generation. Doing so re-generates the original, non-stylized image. However, in our analysis, we observe that interpolating between the 𝒱 self-attention values of the content/style-modified generations (while using only the original content values for the rest) can provide control over the stylization strength. From this experiment, we can infer that most, if not all, style information is captured from just the 𝒱 self-attention values in the LDM. We visualize examples of this style interpolation in the supplementary materials.
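The interpolation itself is simple; a sketch, assuming the V values have been recorded per (timestep, layer) for both the content-only and style-modified generations, is:

```python
# Sketch of the V-value interpolation used in this preliminary experiment;
# content_v and style_v map (timestep, layer_name) -> recorded V tensor.
import torch

def blend_v_values(content_v: dict, style_v: dict, alpha: float) -> dict:
    """alpha = 0 reproduces the content-only image; alpha = 1 the fully style-modified one."""
    return {k: (1.0 - alpha) * content_v[k] + alpha * style_v[k] for k in content_v}
```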
§.§ DIFF-NST real image inversion
Our work aims to perform style transfer of existing real user-provided images. As such, the re-styled synthesized image must stay faithful to the provided content image in terms of overall composition and structure. This means we must edit the image rather than re-generate a semantically similar approximation. We invert the content image through the LDM, similar to previous works such as prompt-to-prompt <cit.> and diffusion disentanglement <cit.>. This inversion process extracts the predicted noise at each timestep, as predicted by the UNet modules. To reconstruct the same image using an LDM, this content noise can be injected into the reverse diffusion process, replacing the LDM noise predictions at multiple timesteps. The more timesteps the noises are applied to, the better the reconstruction fidelity, with less freedom of input from the LDM. As shown in the diffusion disentanglement work <cit.>, applying changes to the diffusion values from an earlier timestep allows more significant change in image structure.
Similar to these previous works, we use 50 time steps for the forward (inversion) and reverse (re-generation) diffusion processes. However, unlike these previous works, we interleave this noise starting from an earlier time, step 5, rather than 16, to improve reconstruction quality. We apply noise until step 45 instead of 50 to allow the model to self-correct some artifacts. Also, unlike prior work, we do not set the LDM predicted noises to zero for timesteps where pre-extracted content noises are not injected into the diffusion process. We aim to allow the model to generate new details to leverage its learned priors.
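A sketch of this interleaving, written against the Hugging Face diffusers scheduler API, is shown below; the unconditional UNet call is simplified (a Stable Diffusion UNet would also receive the empty-prompt text embedding), and the step indices follow the description above.

```python
# Sketch: injecting pre-extracted content noises into the reverse diffusion pass.
import torch

@torch.no_grad()
def reverse_with_content_noise(unet, scheduler, latents, content_noises,
                               start=5, stop=45, num_steps=50):
    scheduler.set_timesteps(num_steps)
    for i, t in enumerate(scheduler.timesteps):
        noise_pred = unet(latents, t).sample           # unconditional prediction only
        if start <= i < stop:
            noise_pred = content_noises[i]             # noise pre-extracted during inversion
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```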
A notable trait of image-to-image and image-inversion with diffusion models is that color information is not disentangled from overall image structure across timesteps, as it is with feature activation across layers of a VGG model, for example. Thus, color information must be explicitly handled before inversion. Similar to previous works <cit.>, we pre-adjust the color of the content image through mean and covariance matching. We do this dynamically during training before inversion.
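The colour pre-adjustment can be sketched as standard mean-and-covariance matching in RGB space; the snippet below is a generic implementation of that idea, not the authors' exact code.

```python
# Sketch: re-colour the content image so its RGB statistics match the style image.
import numpy as np

def match_colour(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """content, style: float arrays in [0, 1] with shape (H, W, 3)."""
    c, s = content.reshape(-1, 3), style.reshape(-1, 3)
    mu_c, mu_s = c.mean(0), s.mean(0)
    cov_c = np.cov(c, rowvar=False) + 1e-5 * np.eye(3)
    cov_s = np.cov(s, rowvar=False) + 1e-5 * np.eye(3)

    def sqrtm(m):  # symmetric PSD square root via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

    A = sqrtm(cov_s) @ np.linalg.inv(sqrtm(cov_c))
    out = (c - mu_c) @ A.T + mu_s
    return np.clip(out, 0.0, 1.0).reshape(content.shape)
```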
A final consideration is that we aim to perform prompt-less execution of LDMs, given our use of exemplar images for both content and style. As such, we only need to use the model's unconditional capabilities. Latent Diffusion Models execute two iterations of their model: one with no prompt conditioning and one with prompt conditioning. The output of both branches is joined at every time step via the classifier free guidance (CFG). This exposes prompt control via this adjustable strength. Given that we aim not to use any text prompts anywhere in the process, we, therefore, altogether disable the prompt-conditioned branch of the model execution and use only the un-conditional branch for both inversion and reverse diffusion. The process would function the same if the text prompt were fixed to a generic prompt throughout or if CFG was zero, but this approach saves on compute.
§.§ Attention manipulation
We train a set of MLPs across each self-attention module in the LDM UNet blocks. We do not wish to re-train or fine-tune the LDM weights due to large compute/financial requirements. Instead, we train several smaller modules to hijack part of the LDM process, similar to how content noises are injected into the diffusion process. We directly target the attention process's 𝒱 values, generating brand new values for the remaining process to use. We chose the 𝒱 values following our initial exploratory experiments with existing text-prompt-based diffusion image editing techniques such as prompt-to-prompt, where we observed that interpolation between 𝒱 values only is enough to induce stylistic changes between content prompts and style-modified prompts.
Before our reverse diffusion process, similar to the real content image inversion to collect the noise predictions for reconstruction, we additionally invert and fully reconstruct the real style image through the LDM. This time, instead of collecting the predicted noises, we collect the predicted attention 𝒱 values at every location and timestep and interleave them into the reverse diffusion process. Here, the MLPs generate the new 𝒱 values based on an input consisting of the current 𝒱 values, the corresponding 𝒱 values at the same location and timestep of the style image, and the ALADIN style code of the style image, which we also pre-extract. We use both the style attention values and ALADIN, as this provides both global and local style information. Using only the attention values induces a similar style transfer. Anecdotally, however, using both sources of style information leads to a higher overall perceived quality of style transfer. We use the more recent ALADIN-NST <cit.> variant of ALADIN, as it is more disentangled, capturing less content information. This helps to avoid semantic content creeping into the stylized image from the style image, as shown in Fig <ref>.
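A sketch of one such per-block module is given below; the hidden width and the simple concatenation-based conditioning are assumptions, since the text only specifies the inputs (the current V values, the style image's V values at the same location and timestep, and the ALADIN style code).

```python
# Sketch of one per-block MLP that generates new V values for the attention process.
import torch
import torch.nn as nn

class VStyler(nn.Module):
    def __init__(self, v_dim: int, aladin_dim: int, hidden: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * v_dim + aladin_dim, hidden), nn.GELU(),
            nn.Linear(hidden, v_dim),
        )

    def forward(self, v_content, v_style, aladin_code):
        # v_content, v_style: (batch, tokens, v_dim); aladin_code: (batch, aladin_dim)
        code = aladin_code.unsqueeze(1).expand(-1, v_content.shape[1], -1)
        return self.mlp(torch.cat([v_content, v_style, code], dim=-1))
```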
A final consideration is that we only apply this attention manipulation process to the UNet decoder/upscaling layers, as per ControlNet <cit.>. Similar to their findings, we notice no perceivable differences in the output quality, but the VRAM consumption and compute costs are lower.
§.§ Training process
Diffusion models are typically trained one random timestep at a time, given the nature of focusing the training on noise predictions at individual timesteps. In our case, however, such timestep-localized deltas are not as easy to isolate. We can only guide our model during training based on the final de-noised output image. Moreover, well known existing style losses have been designed to operate in pixel space. They are, therefore, not directly applicable to latent space - though this may be an area of potential future study.
Therefore, we build our training process around unrolling the entire diffusion process, from starting to ending timesteps. We then decode the latent values into pixel space, where we can finally apply standard NST losses amongst the stylized and real style images from our style dataset. We opt to keep these style learning losses similar to previous works to reduce variables and uncertainty from our work. We follow a similar training objective to recent works such as NeAT <cit.>, ContraAST <cit.>, and CAST <cit.> - described in detail in Sec <ref>. We can report some negative results in using the LDM UNet as a noised feature extractor for computing a VGG-like style loss to avoid the unrolling process - the features extracted by the UNet did not accurately model the image style features.
§.§ Training objective
We train our model using well explored training objectives from traditional NST methods to focus solely on the model technique - we most similarly follow training objectives resembling those of NeAT <cit.>, ContraAST <cit.>, and CAST <cit.>. Between style and stylized images, we use a VGG <cit.> style loss (Eq. <ref>), identity loss (Eq. <ref>), contrastive loss (Eq. <ref>), sobel-guided patch discriminator (Eq. <ref>), domain-level discriminator (Eq. <ref>), and ALADIN loss (Eq. <ref>). Between the stylized and content images, we use a perceptual loss (Eq. <ref>), contrastive loss (Eq. <ref>), and identity loss (Eq. <ref>). We use Sobel guidance for the patch discriminator, as per NeAT.
Equation <ref> shows the VGG style loss, with μ and σ representing the mean and standard deviation of extracted feature maps, I_s represents style image from the style dataset S, I_c represents a content image from the content dataset C after the color adjustments, and I_sc represents the stylized image.
\mathcal{L}_s := \lambda_{vgg}\Big(\sum_{i=1}^{L}\big\|\mu(\phi_i(I_{sc}))-\mu(\phi_i(I_s))\big\|_2+\big\|\sigma(\phi_i(I_{sc}))-\sigma(\phi_i(I_s))\big\|_2\Big)
Eq <ref> represents the domain-level adversarial loss, as per ContraAST <cit.>, learning to discriminate between generated stylized images and real artworks. Here, a discriminator 𝒟 operates over the stylized image, following our model M modules. Eq <ref> details standard perceptual loss, where ϕ_i represents the pre-trained VGG-19 layer index.
\mathcal{L}_{adv} := \lambda_{adv}\Big(\mathbb{E}_{I_s\sim S}\big[\log(\mathcal{D}(I_s))\big]+\mathbb{E}_{I_c\sim C,\,I_s\sim S}\big[\log\big(1-\mathcal{D}(M(I_s,I_c))\big)\big]\Big)
\mathcal{L}_{percep} := \lambda_{percep}\,\big\|\phi_{conv4\_2}(I_{sc})-\phi_{conv4\_2}(I_c)\big\|_2
Eqs <ref> and <ref> show MSE identity losses between the reconstructed images and the style or content images, respectively. Eq <ref> shows the ALADIN loss, with 𝒜 representing the ALADIN model.
\mathcal{L}_{id_s} := \lambda_{identity}\,\|I_{ss}-I_s\|_2
\mathcal{L}_{id_c} := \lambda_{identity}\,\|I_{cc}-I_c\|_2
\mathcal{L}_{aladin} := \lambda_{aladin}\,\|\mathcal{A}(I_{sc})-\mathcal{A}(I_s)\|_2
Eqs <ref> and <ref> show contrastive losses as detailed in Sec 4.1, similar to <cit.> and <cit.>, where l_s and l_c are extracted style/content embeddings respectively, using a projection head, and τ is the temperature hyper-parameter. The contrastive losses are applied over the averaged attention values per timestep.
\mathcal{L}_{s\_contra} := -\lambda_c\,\log\frac{\exp\big(l_s(s_ic_j)^{\top} l_s(s_ic_x)/\tau\big)}{\exp\big(l_s(s_ic_j)^{\top} l_s(s_ic_x)/\tau\big)+\sum\exp\big(l_s(s_ic_j)^{\top} l_s(s_mc_n)/\tau\big)}
\mathcal{L}_{c\_contra} := -\lambda_c\,\log\frac{\exp\big(l_c(s_ic_j)^{\top} l_c(s_yc_j)/\tau\big)}{\exp\big(l_c(s_ic_j)^{\top} l_c(s_yc_j)/\tau\big)+\sum\exp\big(l_c(s_ic_j)^{\top} l_c(s_mc_n)/\tau\big)}
The ℒ_p term defined in Eq <ref> is our patch discriminator D_patch loss, guided by Sobel Maps (SM).
\mathcal{L}_p = \lambda_{patch}\,\mathbb{E}_{I_s\sim S}\Big[-\log\Big(D_{patch}\big(\mathrm{crop}(I_{sc}, SM_{sc}),\ \mathrm{crops}(I_s, SM_s)\big)\Big)\Big]
Our final combined loss objective is shown in <ref> where each term is weighted by their respective λ term. The loss weights are as follows: λ_vgg = 0.5, λ_adv = 5, λ_percep = 6, λ_identity = 100, λ_aladin = 10, λ_c = 1, λ_patch = 10, λ_1 = 0.25, λ_2 = 0.75.
ℒ_final := ℒ_s + ℒ_adv + ℒ_percep + ℒ_id_s + ℒ_id_c + ℒ_aladin + ℒ_s_contra + ℒ_c_contra + λ_1 ℒ_p_simple + λ_2 ℒ_p_complex
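Assembled in code, the combination above amounts to a weighted sum over already-computed scalar loss terms; the dictionary keys below simply mirror the equation names in the text.

```python
# Sketch: the final weighted combination, assuming each entry of `L` is an
# already-computed scalar loss tensor (the individual lambdas are applied upstream).
def final_loss(L, lambda_1=0.25, lambda_2=0.75):
    return (L["s"] + L["adv"] + L["percep"] + L["id_s"] + L["id_c"] + L["aladin"]
            + L["s_contra"] + L["c_contra"]
            + lambda_1 * L["p_simple"] + lambda_2 * L["p_complex"])
```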
§ EXPERIMENTS AND EVALUATION
Neural style transfer using diffusion models is a nascent sub-field of research. As such, very few works study this new direction, much less via prompt-less techniques. Despite not being a strictly NST model, PARASOL <cit.> is currently the only suitable method we can baseline against. We additionally compare against three recent "traditional" NST techniques, NNST <cit.>, NeAT <cit.>, and CAST <cit.>. These techniques have focused on texture-based style transfer, and as such, their stylized outputs contain a much better match between the style and stylized images' textures. This is reflected in metrics such as SIFID <cit.>, used in NST literature so far that precisely measure such correlations.
The unrolled approach of training diffusion models does incur a high computation cost. Our technique can train over an LDM at 512px resolution on a GPU with 48GB VRAM at batch size 1. We use gradient accumulation 8 to raise the effective batch size to 8. Inference at 512px fits on 24GB VRAM. We train our model for 3 weeks on a single A100. Like NeAT <cit.>, we use the BBST-4M dataset they introduce, due to its great variety of style data, covering not just fine-art imagery as more commonly found in other datasets. Due to our method and NeAT having been trained using BBST-4M, we aim to use a test set with no overlap with training data. We use the test set from ALADIN-NST <cit.>, which was collected as a test set not overlapping with previous datasets such as BBST-4M. The test set contains 100 content and 400 style images, resulting in 40,000 stylized images. We collect quantitative metrics in Table 1, measuring SIFID <cit.> and Chamfer for style and color consistency with the style image respectively, and LPIPS <cit.> for structure consistency with the content. Due to long-running generation times for our method and those of multiple baselines, we randomly sub-sample and use 5,000 images.
Table 1: Quantitative metrics. Lower is better. ↓

Model | LPIPS ↓ | SIFID ↓ | Chamfer ↓
NeAT <cit.> | 0.624 | 0.880 | 24.970
CAST <cit.> | 0.632 | 1.520 | 43.864
NNST <cit.> | 0.633 | 2.007 | 53.328
PARASOL <cit.> | 0.716 | 3.297 | 105.371
DIFF-NST (Ours) | 0.656 | 2.026 | 45.777
Table 2: User studies for our model, with individual ratings (out of 5) and 5-way preferences (%). Higher is better. ↑

Model | Content Rating ↑ | Style Rating ↑ | Content Preference (%) ↑ | Style Preference (%) ↑
NeAT <cit.> | 3.271 | 2.952 | 32.222 | 26.000
CAST <cit.> | 3.031 | 2.863 | 16.756 | 16.133
NNST <cit.> | 2.937 | 2.712 | 21.200 | 17.778
PARASOL <cit.> | 2.301 | 2.257 | 12.400 | 9.556
DIFF-NST (Ours) | 2.751 | 2.973 | 17.422 | 30.533
We present a qualitative random sample of stylizations in Fig <ref> and the supplementary materials. We visualize stylizations using our method, the closest technically related work PARASOL <cit.>, and some traditional NST techniques.
The most impactful ablation to report on is experimenting with the style embedding used alongside the style attention values. We show some comparative examples in Fig <ref>, having tested the regular ALADIN-ViT style embedding and the more disentangled ALADIN-NST variant. The ViT variant introduces some content features from the style image into the stylized image when these features have strong activations - most commonly occurring with faces. Though rare, we mitigate this issue using a fully disentangled style embedding, ALADIN-NST.
§.§ User studies
We undertake a pair of user studies to gauge real-life human preference between our method and the baselines. First, we carry out an individual rating exercise, measuring the content fidelity between the content image and the stylized image, and separately measuring the style consistency compared to the style image. Second, we carry out a 5-way comparison, where we ask workers to select their preferred result from randomly shuffled samples. We bin the ratings in the individual exercise into five levels, and we explicitly instruct what each rating level should represent. We include the definitions in the supplementary material. We randomly sub-sample 750 stylized samples from the test set and compare our method against each baseline on Amazon Mechanical Turk (AMT). We collect and average our responses over 5 different workers for each comparison, and show our results in Table 2.
The results indicate that workers are scoring our DIFF-NST method low on the content information, in both the ratings and preference studies. This is a positive result, as it highlights our technique's more substantial content deformation. The only model which scored lower is PARASOL. However, as seen in our visual comparison figures, PARASOL tends to make significant conceptual changes to the depicted content. It is not so much a technique for style transfer as it is for style-inspired re-generation of similar semantic content. The results for our style-focused experiments indicate that workers prefer our method to baselines in both individual ratings and 5-way preference studies, which signifies a successful transfer of style while still deforming the content.
§.§ Inference controls
One key strength of our diffusion-based NST method is control over how strongly the style deforms the structure of the depicted content. The reference content information is injected into the diffusion process by applying, at each timestep, noises pre-extracted from the content image inversion. With diffusion models, the early timesteps strongly affect the major structural components of the image, whereas the later timesteps affect lower-level textural information. Therefore, by varying the starting timestep at which these pre-extracted content noises are applied, we can adjust, at inference time, how much the style should deform the content structure. This effect is difficult to evaluate quantitatively, but we show two examples in Fig <ref>.
An alternative vector of inference-time control is varying the diffusion timesteps in which our method's attention replacement happens. By stopping at earlier timesteps, less style information is injected into the diffusion process, reducing the stylization strength. Unlike reducing content noise injection, this approach maintains the content structure better and more directly targets the style properties instead of structure. We show examples of this second approach in Fig <ref>, using the same example images as in Fig <ref> for clarity.
§ LIMITATIONS AND CONCLUSIONS
One limiting factor of our approach is that textures are not matched to the style image with as much detail and fidelity as traditional NST approaches. This can, however, be alleviated by introducing a conventional NST approach into the pipeline as a post-processing step.
Though rare, due to the one-to-one mapping between the content and style attention values, some structure from some style images sometimes creeps into the stylized image. We can report negative results experimenting with Neural Neighbours <cit.> in attention space, which resolved this issue, but only at the cost of worse overall stylization quality. This is an area of potential future improvement.
One of the principal challenges with our method has been computation due to the unrolled nature of the reverse diffusion process during training. Future work can explore the adaptation of the style training objective to the latent space instead of pixel space, enabling non-unrolled training.
§ BROADER IMPACT
Neural techniques for artistic image editing and generation offer new tools and capabilities for skilled artists to take their work further than before. However, this does make the field easier to enter as a novice. As such, existing novice-level artists may find more competition in this space, reducing work opportunities. As digital art emerged, it offered new capabilities to artists with new tools at the detriment of some artists using physical mediums. Neural techniques can similarly open up new genres of art while reducing some opportunities for some existing digital artists.
§ PROMPT-TO-PROMPT ANALYSIS
The base content captions partially generated using ChatGPT for the prompt-to-prompt analysis experiments are:
* A squirrel eating a burger
* A hamster on a skateboard
* A toy next to a flower
* A car driving down the road
* A giraffe in a chair
* A bear wearing sunglasses
* An octopus in a space suit
* A hedgehog getting a haircut
* A sloth running a marathon
* A cat posing like napoleon
* A dog with a beard, smoking a cigar
* A bee flying underwater next to fish
* A fish with a hat, playing a guitar
* A bird with a bowtie, playing a saxophone
* A turtle with a top hat, playing a piano
* A frog with a cowboy hat, playing a banjo
* A mouse with a sombrero, playing a trumpet
* A snake with a beret, playing a violin
* A rabbit with a fedora, playing a cello
* A squirrel with a baseball cap, playing a drum
The style modifiers are:
* A van gogh painting of
* A graphite sketch of
* A neon colourful pastel of
* A minimal flat vector art illustration of
* A watercolour painting of
* A psychedelic inverted painting of
* A pop-art comic book panel of
* A neoclassical painting of
* A cubist abstract painting of
* A surreal dark horror painting of
We visualize results from the preliminary prompt-to-prompt analysis experiments, in Fig <ref>. The figure shows the first content prompt for the base content, with the subsequent rows interpolating towards style-modified prompts using style prompt modifiers 1, 4, 2, and 8. Although not directly relevant to our study, it was also interesting to note that the stylization strength could be pushed beyond the default strength by pushing the interpolation into over-drive, similar to the technique presented in NeAT <cit.>.
§ ADDITIONAL DETAILS ON USER STUDIES
We carried out two user studies: an individual rating exercise with defined rating levels, and a 5-way preference comparative exercise. For each, we executed the experiments once for the content, and once for the style.
Our content-focused rating exercise asks the following question: "A photo has been re-generated with a different style. Please rate the structure details of the new image, 1 to 5 as follows:", where we next define the expected judgement criteria for each rating level as follows:
* The structure is different
* The structure slightly resembles the photo
* The structure mostly resembles the photo
* The structure is the same
* The structure is the same, including small details
Our style focused rating exercise asks the following question: "A photo has been transformed into the style of the artwork. Please rate the quality of the style, 1 to 5 as follows:", where the rating definitions are:
* The style is not recognisable
* The style is recognisable
* The colours match
* The textures match
* The shapes match
The 5-way comparative study presents the following question for the content-focused experiment: "A photo has been re-generated with a different style in 5 ways. Please select the highest quality reconstruction of the photo's structure details", and the following for the style-focused experiment: "A photo has been re-generated with a different style in 5 ways. Please select the most similar artistic style to the artwork".
The workers were fairly compensated. We used 5 different workers for each stylized image, for each question.
|
http://arxiv.org/abs/2307.06219v1 | 20230712150734 | Double magnetic transitions and exotic field induced phase in the triangular lattice antiferromagnets Sr$_3$Co(Nb,Ta)$_2$O$_9$ | [
"Surender Lal",
"Sebin J Sebastian",
"S. S. Islam",
"M. P. Saravanan",
"M. Uhlarz",
"Y. Skourski",
"R. Nath"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
update
|
http://arxiv.org/abs/2307.07412v1 | 20230714154143 | HuCurl: Human-induced Curriculum Discovery | [
"Mohamed Elgaar",
"Hadi Amiri"
] | cs.LG | [
"cs.LG",
"cs.CL"
] |
An Embedded Auto-Calibrated Offset Current Compensation Technique for PPG/fNIRS System
Sadan Saquib Khan, Sumit Kumar, Benish Jan, Laxmeesha Somappa, and Shahid Malik
Sadan Saquib Khan, Sumit Kumar, Benish Jan, and Shahid Malik are with the Centre for Sensors, Instrumentation and Cyber Physical System Engineering (SeNSE), Indian Institute of Technology Delhi (IIT Delhi).
Laxmeesha Somappa is with the Department of Electrical Engineering, Indian Institute of Technology Bombay (IIT Bombay).
Manuscript received Month Day, Year; revised Month Day, Year.
August 12, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We introduce the problem of curriculum discovery and describe a curriculum learning framework capable of discovering effective curricula in a curriculum space based on prior knowledge about sample difficulty.
Using annotation entropy and loss as measures of difficulty, we show that
(i): the top-performing discovered curricula for a given model and dataset are often non-monotonic as opposed to monotonic curricula in existing literature,
(ii): the prevailing easy-to-hard or hard-to-easy transition curricula are often at the risk of underperforming, and
(iii): the curricula discovered for smaller datasets and models perform well on larger datasets and models respectively.
The proposed framework encompasses some of the existing curriculum learning approaches and can discover curricula that outperform them across several NLP tasks.
§ INTRODUCTION
Annotation information has been extensively used by previous research in NLP to devise strategies for
further data collection <cit.>,
model improvement and annotation analysis <cit.>,
pruning and weighting samples for better learning <cit.>, or
efficient use of monetary funds <cit.>.
Recent studies show consistent positive correlation between difficulty of samples to the model and their level of human agreement <cit.>. Building on these findings, we aim to utilize such prior knowledge about sample difficulty to develop a curriculum learning (CL) framework that is capable of discovering effective curricula for NLP tasks.
A curriculum is a planned sequence of learning materials and an effective one can improve training of NLP systems <cit.>.
CL seeks to improve model generalizability by ordering samples for training based on their latent difficulty <cit.>.
Recent work reported efficiency and effectiveness gains through CL <cit.>, especially in cases of harder tasks and limited or noisy data <cit.>.
Existing CL approaches are designed to learn a single curriculum that works best for a given model and dataset. However, effective training could be achieved in multiple ways. In addition, existing approaches quantify sample difficulty through model behavior during training. Although efficient and effective, model behavior can be affected by initialization and training dynamics <cit.>, which limits the curriculum space that can be examined for finding effective curricula.
This paper advocates a re-imagining of CL paradigms by introducing and formalizing the task of curriculum discovery, which aims to find effective curricula for a given model and dataset over a curriculum space. The present work specifically focuses on determining when and in which difficulty order text data samples should be learned for effective training of NLP systems. We propose a framework that employs prior knowledge about sample difficulty, such as entropy in human annotations, to
inform an effective and flexible sample weighting scheme for curriculum discovery.
The framework is capable of discovering optimal curricula (within the space of its weight functions) for any given model and dataset by optimizing
the weight functions
and adjusting the difficulty group of data samples as training progresses. The discovered curricula provide useful insights about datasets and models, such as the relative importance of different groups of samples for models or knowledge dependency among samples.
We illustrate that the proposed framework has the potential to encompass some of the existing CL approaches.
Experimental results show that
(a): the top-performing discovered curricula for the same model and dataset can be fundamentally dissimilar in their training strategies,
indicating that effective training can be achieved in multiple ways;
(b): the discovered curricula are often non-monotonic and greatly differ from the known strategies reported in existing literature, indicating that existing curricula, including easy-to-hard transition curricula, are at the risk of underperforming; and
(c): the curricula discovered on small datasets and models perform exceptionally well on larger datasets and models respectively, illustrating the transferability of the discovered curricula.
The paper presents a new curriculum learning approach that unlike existing approaches can discover multiple high-performing (and often diverse) curricula for each given NLP model and dataset, provide interpretable curricula in terms of sample difficulty, and encompass some of the existing curriculum learning approaches.[Code and data are available at <https://clu.cs.uml.edu/tools/curriculum_discovery.html>.]
§ RELATED WORK
Existing CL approaches are designed to learn a single curriculum that works best for a given model and dataset. They estimate sample difficulty through model behavior during training, quantified by the
instantaneous loss <cit.>,
consistency in instantaneous loss <cit.>,
moving average of loss <cit.>,
transformations of loss <cit.>,
loss regularization <cit.>, or
learnable per-sample confidence <cit.>.
In terms of data ordering,
sub-sampling approaches sample the easiest or hardest instances at every training iteration <cit.>,
sample weighting techniques weight instances according to their estimated difficulty <cit.>, and
sample pruning techniques filter hard or noisy instances from data prior to training <cit.>.
Sub-sampling methods can be cumulative, exclusive or a combination of both. Cumulative approaches add new samples to the ones that have been previously used for training <cit.>, while exclusive approaches create a new subset of the data at every training stage <cit.>.
In addition, previous research has developed model-driven <cit.> and task-driven <cit.> techniques.
§ CURRICULUM DISCOVERY FRAMEWORK
We consider the training dataset 𝒟 = {(x_1, y_1), …, (x_n, y_n)} of size n, where x_i denotes the ith training sample with the ground-truth label y_i, and ψ ∈ [0,1]^n indicates the initial difficulty estimates of the training samples, see <ref>.
The data is initially clustered into k groups of increasing difficulty, e.g. {easy, medium, hard} groups for k=3, which can be achieved using difficulty score percentiles or 1-dimensional K-means applied to ψ.
As Figure <ref> shows, the framework develops a separate parameterized weight function for each difficulty group (<ref>), and dynamically weights training samples and adjust their difficulty groups according to the training progress of the downstream model (<ref>). Specifically, at training iteration t, the weighted loss l̂_i for sample i of the difficulty group c ∈{1,…, k} will be computed as follows:
l̂_i = w(t; r_c, s_c) × l_i,
where l_i is the instantaneous loss of sample i, and w(t; r_c, s_c) is the weight of sample i in its difficulty group c at training iteration t, with class-specific weight function parameters r_c and s_c (see below).
§.§ Monotonic Curricula
We define a curriculum using the generalized logistic function <cit.> of the form:
w(t; r, s) = 1/(1 + exp(-r × (t - s))),
where r ∈ ℝ is the rate-of-change parameter, which specifies how fast the weight can increase (r>0) or decrease (r<0); t ∈ [0,1] is the training progress (typically the iteration number divided by the maximum number of iterations); and s ∈ ℝ shifts the pivot weight of the logistic function (w(.)=.5) to the left or right such that at t=s the weight is 0.5.
The above function can approximate existing predefined curricula. For example, Figure <ref> shows a specific configuration for the logistic functions for standard CL <cit.>, where training starts with easier samples and gradually proceeds with harder ones.
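For concreteness, a minimal Python sketch of this weight function and of a gradually increasing (inc) configuration is given below; the specific (rate, shift) values per difficulty group are illustrative assumptions rather than values reported in this paper.

import math

def weight(t, rate, shift):
    # Weight of a sample at training progress t in [0, 1], Eq. (<ref>).
    return 1.0 / (1.0 + math.exp(-rate * (t - shift)))

# Hypothetical "inc" curriculum: easy samples are emphasized from the start,
# medium samples reach weight 0.5 mid-training, hard samples only late on.
inc_curriculum = {
    "easy":   {"rate": 8.0, "shift": -0.25},
    "medium": {"rate": 8.0, "shift": 0.50},
    "hard":   {"rate": 8.0, "shift": 1.00},
}

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    row = {g: round(weight(t, **p), 3) for g, p in inc_curriculum.items()}
    print(f"t={t:.2f}  {row}")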
§.§ Non-monotonic Curricula
Although the generalized logistic function in (<ref>) can lead to effective curricula, monotonic functions are limited in their coverage capacity. For example, they do not allow easy samples with low weights to become important again (receive high weights) at later stages of training to mitigate forgetting, which is a major challenge for effective curriculum learning <cit.>.
We address this challenge by extending the framework to non-monotonic curricula, where samples can move between difficulty classes based on their learning progress during training. We quantify learning progress for training samples based on the deviation of their losses from the average losses of their corresponding difficulty groups.
At every iteration, samples with loss values greater
than the average are promoted to their immediate higher difficulty groups and
the rest are demoted to their immediate lower difficulty groups.
These movements allow monotonic weight functions result in non-monotonic and multimodal weight trajectories for training samples, which improves the search capability of our framework and addresses the forgetting challenge.
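A sketch of this per-iteration reassignment step is shown below; the array-based bookkeeping and variable names are ours and are only meant to illustrate the promotion/demotion rule described above.

import numpy as np

def reassign_groups(groups, losses, k=3):
    # groups: int array in {0, ..., k-1}; losses: per-sample instantaneous loss.
    new_groups = groups.copy()
    for c in range(k):
        idx = np.where(groups == c)[0]
        if idx.size == 0:
            continue
        mean_loss = losses[idx].mean()
        harder = losses[idx] > mean_loss
        new_groups[idx[harder]] = min(c + 1, k - 1)   # promote to harder group
        new_groups[idx[~harder]] = max(c - 1, 0)      # demote to easier group
    return new_groups

rng = np.random.default_rng(0)
groups = rng.integers(0, 3, size=10)
losses = rng.random(10)
print(groups, "->", reassign_groups(groups, losses))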
§.§ Parameter Optimization
We find the optimal
curriculum parameters (r,s) for each difficulty group using the
Tree-structured Parzen Estimator (TPE) algorithm <cit.>, which, unlike the grid or random search, traverses the parameter space by estimating the parameters that are most probable to perform better on a trial.
Using this method, we can learn data-driven curricula beyond what could be manually designed through empirical settings or choices among the limited ordering strategies.
The discovered curricula are optimal within our search space, as defined by the weight functions and searchable parameters. However, in practice, we observed that the change in performance across the missing regions in the search space
is minor. Given that our weight functions can approximate other curricula learned by existing CL models, see <ref>, we expect the optimum curriculum within our search space closely approximates the optimal curriculum for each dataset and model pair.
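As an illustration, the search could be implemented with the TPE sampler of the Optuna library, as sketched below; the use of Optuna and the surrogate train_and_evaluate function are our assumptions to keep the sketch self-contained, and in practice the objective would train the downstream model under the sampled curriculum and return development-set accuracy.

import optuna

K = 3  # number of difficulty groups

def train_and_evaluate(params):
    # Placeholder for training the downstream model with the sampled curriculum
    # and returning development accuracy; a synthetic surrogate keeps this runnable.
    return -sum((params[f"rate_{c}"] - 4.0) ** 2 + (params[f"shift_{c}"] - 0.5) ** 2
                for c in range(K))

def objective(trial):
    params = {}
    for c in range(K):
        params[f"rate_{c}"] = trial.suggest_float(f"rate_{c}", -10.0, 10.0, step=2.0)
        params[f"shift_{c}"] = trial.suggest_float(f"shift_{c}", -0.5, 1.5, step=0.25)
    return train_and_evaluate(params)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)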
§.§ Prior Knowledge of Difficulty
Annotation entropy is a natural measure of difficulty (for humans) and may serve as a reliable difficulty metric for models. Entropy of each sample x_i is calculated as -∑_c p_c log p_c <cit.>, where c is a class category and p_c is the fraction of annotators who chose label c for the sample. The use of entropy is supported in <cit.>, reporting a consistent positive correlation between model accuracy and level of human agreement.
Furthermore, moving average of a sample's instantaneous loss is a good metric for difficulty <cit.>. Using a baseline model trained with no curriculum and with default hyper-parameters, we collect the loss values of all training instances at intervals of 0.5 epochs and use the average loss as prior knowledge about sample difficulty. We obtain twenty observations of the loss and compute the average for each sample.
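The sketch below illustrates the two difficulty scores described above and the quantile-based partition into k groups; the annotation counts and loss snapshots are toy data.

import numpy as np

def annotation_entropy(label_counts):
    # label_counts: (n_samples, n_classes) counts of annotator votes per class.
    p = label_counts / label_counts.sum(axis=1, keepdims=True)
    return -(p * np.log(np.where(p > 0, p, 1.0))).sum(axis=1)

def average_loss(loss_snapshots):
    # loss_snapshots: (n_checkpoints, n_samples) losses of a no-curriculum model.
    return loss_snapshots.mean(axis=0)

def partition(scores, k=3):
    # Assign each sample to one of k difficulty groups by score quantiles.
    edges = np.quantile(scores, [i / k for i in range(1, k)])
    return np.digitize(scores, edges)  # 0 = easiest, k-1 = hardest

counts = np.array([[5, 0, 0], [3, 1, 1], [2, 2, 1], [1, 1, 3]])
print(partition(annotation_entropy(counts), k=3))
snapshots = np.array([[0.9, 0.2, 0.5, 1.4], [0.7, 0.1, 0.6, 1.2]])
print(partition(average_loss(snapshots), k=2))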
Figure <ref> shows the distributions of entropy and loss, and examples of data partitions across four datasets. Most datasets are highly imbalanced across difficulty groups, often containing more easier samples than harder ones. Such data disparities would perhaps explain why
computational models can achieve human-level performance on complex NLP tasks or recent results reporting neural models being largely invariant to random word order permutation of data <cit.>.
We acknowledge that while multiple annotations per sample may not be readily available for many NLP datasets, such annotations were collected for most NLP datasets at their dataset development time. Our work shows that such information can be used to find effective curricula for NLP models and encourages dataset creators to publish their full annotation information. In addition, our curriculum discovery framework is independent of annotation information. In fact, we evaluated our approach with both annotation entropy and loss as two choices for sample-level difficulty estimation.
§ EXPERIMENTS
§.§ Datasets
For the purpose of our experiments, we chose datasets for which several annotations per sample are available. Such annotator-level information is often available at the creation time of most NLP datasets and provides rich information for effective learning. Before training, we partition each dataset into k difficulty groups using the {i/k}_{i=0}^{k} quantiles.
SNLI <cit.>. The Stanford Natural Language Inference (SNLI) benchmark <cit.> contains
36.7k and 2.6k samples annotated by 5 and 4 workers respectively, which we refer to as SNLI full in our experiments.
ChaosNLI <cit.> contains 100 annotations per sample for about 1.5K development samples of SNLI and
MNLI <cit.>. We use these samples as training data, the remaining 8.5K development samples of SNLI as development set, and the test set of SNLI as test set.
Twitter <cit.>. This dataset has been developed to obtain population-level statistics of alcohol use reports through social media.
It contains more than 9k
tweets, each annotated by at least three workers for report of first-person alcohol use, intensity of the drinking (light vs. heavy), context of drinking (social vs. individual), and time of drinking (past, present, or future). We define a multi-class classification task for this dataset based on the above categories, see the data distribution in Appendix <ref>. We randomly split the data into 5.4k, 1.8k and 1.8k training, development and test sets.
Reddit. We developed this dataset to obtain population-level statistics of cancer patients. It contains 3.8k Reddit posts annotated by at least three annotators for relevance to specific cancer types.
We define a multi-class classification task based on post relevance and cancer type, see
Appendix <ref>. We randomly split the data into 2.2k, 765, and 765 training, development and test sets respectively.
ChaosNLI is balanced in its difficulty groups. We create difficulty-balanced versions of SNLI, Twitter and Reddit by collecting an equal number of samples from each difficulty group. The resulting datasets contain 1.7K to 2.3K samples.
§.§ Baselines
No-CL The conventional training approach, which involves utilizing all samples for training in each iteration.
Self-paced Learning (SPL) <cit.> weights instances based on their difficulty to the model by optimizing the following objective:
ℒ(𝒟; θ) = min_v ∑_{i=1}^{n} v_i l_i + f(v; λ),
where l_i is the loss of instance i parameterized by θ, v_i is a trainable weight parameter assigned to each instance, and f is a regularization function for the weights.
The model finds v that minimizes its loss under the constraint of f.
The binary scheme SPL is defined by the regularization function f(v; λ) = -λ‖v‖_1;
if l_i < λ, v_i = 1, otherwise v_i = 0, i.e., only easy samples are selected at each step.
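A minimal sketch of this binary weighting scheme follows; the array names are ours.

import numpy as np

def spl_binary_weights(losses, lam):
    # v_i = 1 if l_i < lam else 0, i.e. only easy samples are kept this step.
    return (losses < lam).astype(float)

losses = np.array([0.2, 1.5, 0.7, 3.0])
v = spl_binary_weights(losses, lam=1.0)
print(v, (v * losses).sum() / max(v.sum(), 1.0))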
Mentornet <cit.> uses an auxiliary network to weight samples at every iteration. The network takes as input recent loss history,
running mean of the loss, current epoch number (to account for training progress), and target labels. The network consists of an LSTM layer to encode the k steps of loss, embedding matrices for the target label and epoch number; a fully connected layer; and a final sigmoid layer. The sigmoid layer outputs weights of samples for training.
Difficulty Prediction (DP) <cit.> defines sample difficulty
as follows:
d_i = ∑_{j=1}^{l_i} f(y_i^{(j)}, ŷ_i) / l_i,
where ŷ_i is the ground-truth label and f measures the Spearman's rank correlation coefficient between labels produced by experts and non-experts. The model re-weights samples for performance improvement using a pre-defined threshold τ, with the weight
1 - α(d_i - τ)/(1 - τ).
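A small sketch of this re-weighting rule is given below; applying the down-weighting only to samples with d_i > τ and clipping the weights to [0, 1] are our assumptions.

import numpy as np

def dp_weights(difficulty, alpha=0.9, tau=0.5):
    # Down-weight samples whose difficulty score exceeds the threshold tau.
    w = np.ones_like(difficulty, dtype=float)
    hard = difficulty > tau
    w[hard] = 1.0 - alpha * (difficulty[hard] - tau) / (1.0 - tau)
    return np.clip(w, 0.0, 1.0)

d = np.array([0.1, 0.4, 0.6, 0.95])
print(dp_weights(d))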
SuperLoss (SL) <cit.> uses the following function
to estimate sample weights:
ℒ_λ = (l_i - τ) σ_i + λ (logσ_i)^2,
where τ is the moving average of loss (as the measure of difficulty) and σ is sample confidence.
The model emphasizes easy samples (those with small losses)
throughout the training.
Our approach employs two difficulty scoring functions and two curriculum types for each dataset.
The difficulty scoring functions are Loss and Ent (entropy) described in <ref>.
The first curriculum type (inc) is the off-the-shelf gradually increasing approach in Figure <ref>, which is rapidly computed and applied to all models, resulting in Ent(inc) and Loss(inc) approaches. The non-monotonic version of the inc curriculum (<ref>) are labeled Ent+(inc) and Loss+(inc). The second curriculum type (sp, for specialized) is obtained through the proposed optimization approach (<ref>) that finds optimal curricula for each model and dataset, resulting in Ent(sp) and Loss(sp).
§.§ Settings
We use Bayesian optimization to tune the parameters λ of SL and α and τ of DP on development data. The optimal values found are λ = 1.2 and α = 0.9, and τ is set dynamically upon loading the dataset to the 50th-percentile difficulty value of the training data.
We use twitter-roberta-base for Twitter and roberta-base for other datasets, both from <cit.>.
We set learning rate to 1 × 10^-5, batch size to 16, epochs to 10 (we confirm that this number of iterations is sufficient for all models to converge), and use Adam optimizer <cit.>. The checkpoint with the best performance is used for testing. For each experiment, we train the model
using five random seeds
and report standard error.
In addition,
we set the search space for the rate (r) and shift (s) parameters to [-10, 10] with a step of 2 and [-0.5, 1.5] with a step of 0.25 respectively.
The search is run for at least 100 trials using the method described in (<ref>). Each trial is run with three seeds and the result is averaged. The search objective is to maximize accuracy over development data. The trial number in which the best parameters are found is reported in Appendix <ref>.
We only search for curricula with three difficulty groups to ease interpretability and improve readability, and to minimize the number of search parameters. However, in case of inc curriculum, the optimal number of difficulty groups for ChaosNLI, SNLI, Twitter, Reddit are 12, 3, 28, and 12 respectively; in all cases, we tune the number of groups on the development set and evaluate on the best performing one. Appendix <ref> includes the results of tuning the number of groups.
§.§ Curriculum Discovery Improves Models
Table <ref> shows that the gradually increasing curriculum using entropy, Ent (inc), achieves better accuracy than No-CL and other baselines, and the difference is significant. The gain is often greater with more than 3 difficulty groups, see detailed results in Figure <ref>, Appendix <ref>. Both (inc) and the specialized (sp) curricula often perform better than the baselines. On average, entropy as a scoring function performs better than loss, indicating that prior knowledge based on difficulty to humans is useful to the model. The results also show that non-monotonic curricula (Ent+, Loss+) can further improve the performance; we attribute this result to the ability of the non-monotonic curricula to dynamically adjust the difficulty of samples according to model behavior as training progresses, allowing samples that are easier or harder for the model to accumulate in the easier and harder difficulty groups, respectively. The performance improvement is more pronounced on the difficulty-balanced datasets compared to the full datasets, which can be attributed to the balanced nature or smaller size of these datasets.
§.§ Discovered Curricula Are Non-monotonic
Figure <ref> shows the mean and 95% CI of the top 25 performing curricula.
The resulting curricula are non-monotonic and greatly differ from the known strategies reported in literature, such as gradually increasing difficulty or anti-curriculum. In addition, the weights of hard samples tend to decrease, supporting the hypothesis that these instances may be too difficult or noisy for models to learn.
In addition, in SNLI and Twitter easy samples often carry the most significant weight, unlike Reddit, where easy samples are often down-weighted early during the training. These weighting patterns reveal the relative importance of samples in each dataset.
Finally, the full SNLI dataset with entropy partitions provides useful information. In Figure <ref>, hard samples are assigned weights around 0.5, unlike the three other cases of SNLI. We attribute this result to the reduced presence of hard samples (skewed entropy in Figure <ref>).
§.§ Discovered Curricula Are Generalizable
Figure <ref> shows the accuracy obtained when the top-performing discovered curriculum for one dataset (from Figure <ref>) is applied to other datasets. Each cell is the average result of 5 seeds. We observe common characteristics among datasets that cause the curriculum to be transferable between them.
First, the top generalizable configuration is obtained from ChaosNLI, the dataset with the richest inter-annotator entropy signal. Therefore, the quality of the difficulty score is important to the discovery of an effective curriculum.
Second, the inc configuration is among the most generalizable configurations, with no added cost in its creation.
Third, the curricula obtained using the small, down-sampled difficulty-balanced datasets generalize well and achieve high performance on the large datasets. This is useful as
curriculum discovery is much faster on smaller datasets, and the framework can be applied to large datasets by searching for a curriculum on a small subset of the data, mitigating the computational expenses of
using full datasets.
Fourth, as noted previously, instances of the Reddit dataset consist of long paragraphs, causing high variance in models trained using the dataset. Consequently, the curricula obtained using the Reddit and loss as measure of difficulty are of lower quality and perform poorly. Appendix <ref> reports the results of all configurations.
Table <ref> shows the transferability of discovered curricula across model sizes. We consider three models with increasing sizes applied to ChaosNLI: distilroberta-base with 82M parameters, roberta-base with 125M parameters, and bart-large with 406M parameters. The results show that the curricula discovered for small models
are transferable to larger models, with significant improvement over No-CL and other CL baselines. In particular, we observe greater transferability for smaller model sizes, which indicates curriculum discovery is more beneficial to smaller models than larger (more robust) models. In some cases, the curricula discovered for smaller models perform better than those discovered for larger models, see Ent(sp) 82M and 125M. This is because curriculum discovery is less expensive on smaller models, allowing better exploration of curriculum space to find better curricula.
Figure <ref> shows the curricula obtained using models of different sizes. The three curricula are similar in their relative treatment of difficulty groups: samples from the easy class are assigned higher weights than those from the medium class, and medium samples receive higher weights than hard samples. In addition, hard samples are considerably down-weighted, which indicates deemphasizing hard samples during training can lead to better results on the test data of ChaosNLi.
§.§ Potential to Encompass Existing Models
The framework presented in this paper is capable of representing curriculum learning approaches that prune noisy data, e.g. <cit.>,
use different sub-samples of data during training, e.g. <cit.>, and
re-weight loss according to sample difficulty, choosing to emphasize either easy or hard samples, e.g. <cit.>.
First, data pruning can be achieved by assigning negative values to the rate and shift parameters in our framework, r and s in (<ref>), which
cause the weights to approach zero before training begins.
Second, data sub-sampling can be represented by “inc” in Figure <ref>.
Third, approaches that estimate sample confidence based on loss <cit.> tend to generate monotonic curves over the course of training because training loss tends to be non-increasing at every step. Figure <ref> shows the confidence scores assigned to our data by three loss re-weighting approaches. The results are generated by our implementations of the three approaches, where each model runs with five random seeds. The partitioning of easy, medium, and hard is according to the entropy, as described in <ref>. We record the average weight assigned to each group. The result is averaged over all the runs, and the shaded area indicates the 95% confidence interval (CI). The results show that the confidence scores assigned by these approaches follow a monotonic curve that can be approximated by our curriculum discovery framework.
We note that although the weight scale of SuperLoss <cit.> in Figure <ref> is larger than one, this model can still be represented by our framework because the increased scale corresponds to scaling of the learning rate, as shown:
θ_t = θ_t-1 - η∇(1/n)∑_i σ_i l_i
= θ_t-1 - (η·σ_max)∇(1/n)∑_i (σ_i/σ_max) l_i,
where l_i and σ_i are the instantaneous loss and confidence of sample i respectively. Therefore, the proposed framework can also represent CL approaches with a confidence scale larger than one.
§ CONCLUSION AND FUTURE WORK
We introduce an effective curriculum learning framework that employs prior knowledge about sample difficulty in its training paradigm for curriculum discovery. The proposed framework initially partitions its input data into several groups of increasing difficulty, defines parameterized functions to weight sample losses in each difficulty group, moves samples across difficulty groups based on their learning progress, and enables tuning the parameters of the weight function to discover novel curricula. We demonstrate that this framework is capable of representing several categories of curriculum learning approaches.
The task of curriculum discovery alleviates the limitations imposed by selecting a single curriculum strategy, and instead, focuses on finding and analyzing different curricula that work equally-well for a given model and dataset. In addition, the discovered curricula provide insight into how different portions of the dataset contribute toward learning at different stages of training a model, which, in turn, provide knowledge about the learning dynamics of different models. The task of curriculum discovery could be costly on large datasets, in particular, when the goal is to find optimal curricula for different models and datasets. To mitigate the computational cost, we show that it is possible to rapidly discover a curriculum on a small subset of the dataset (or a smaller version of the model with significantly less number of parameters) and apply the resulting curriculum to the full dataset.
There are several promising areas for future work. These include approaches for learning new difficulty indicators from data (e.g., linguistic difficulty including lexical, syntactic and semantic difficulty), prioritizing medium level instances and those with greatest progress during training, and developing challenge datasets that contain diverse data samples with different levels of difficulty. Finally, investigating diverse curricula that are suitable for general use and across datasets through curriculum discovery and generalization is a promising area for research.
§ LIMITATIONS
The present work investigates the use of two sample difficulty scoring functions, human-induced annotation entropy and model-induced loss, for NLP models and datasets. The former requires the availability of multiple annotations per sample and the latter requires training an auxiliary model to compute sample instantaneous loss during the course of training. Our work does not provide a general solution to the choice or availability of good difficulty scoring functions.
However, once such a function is available, our work presents solutions to the problem of finding high-performing curricula in curriculum space. Our approach, although effective at finding such curricula, requires a Bayesian search of its hyperparameters. We reduce these costs by finding curricula on smaller datasets and smaller models that can then be applied to corresponding larger datasets and models. Finally, the proposed method
lacks theoretical analysis of the dynamic interactions between data, downstream models, and discovered curricula.
§ DATA CATEGORIES DISTRIBUTION
Table <ref> shows the target class distributions of the Reddit and Twitter datasets.
§ FINER-GRAINED DIFFICULTY CLASSES
Figure <ref> shows the effect of different number of difficulty classes on he accuracy of models trained with our inc curriculum (see <ref>). The results show that the number of difficulty classes used is an important factor in our framework, and further tuning of this parameter can further improve the performance of our model.
§ CURRICULUM SEARCH COMPUTATIONAL COST
With our experimental settings, it takes around 15 minutes on average to train a base model on our datasets of up to 3k samples using a single GPU. Therefore, a curriculum search takes around 9 hours (36 trials) to around 35 hours (139 trials) using a single GPU.
§ EXTENDED CONFIGURATION GENERALIZABLITY EXPERIMENTS
Figure <ref> shows the result of every model trained using every specialized curricula (and inc). We see that the generalizable curricula that are effective on small (down-sampled) datasets, also tend to perform well on large (full) datasets.
|
http://arxiv.org/abs/2307.04459v1 | 20230710101228 | Thermal fluctuation, deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG | [
"Qian Li",
"Yu Zhang",
"Qi-Quan Li",
"Qi Sun"
] | gr-qc | [
"gr-qc"
] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] (Corresponding author) Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
In this work, we study the thermal fluctuation, deflection angle and greybody factor of the high-dimensional Schwarzschild black hole in scalar-tensor-vector gravity (STVG). Based on the correction of black hole entropy due to thermal fluctuation, we calculate some thermodynamic quantities associated with the correction of black hole entropy. The influence of the first-order and second-order corrections, spacetime dimensionality and STVG parameters on these thermodynamics quantities are discussed in detail. Additionally, by utilizing the Gauss-Bonnet theorem, the deflection angle is obtained in the weak field limit and the effect of two parameters on the results is visualized. Finally, we calculate the bounds on greybody factors of a massless scalar field.
Thermal fluctuation, deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG
Qi Sun
August 12, 2023
================================================================================================================
§ INTRODUCTION
Although Einstein's general relativity is one of the most successful and well-established gravitational theories in modern physics, it fails to explain many observational results, such as the present stage of cosmic acceleration <cit.>, the rotation curves of galaxies <cit.> and some cosmological data <cit.>. Moreover, general relativity has inherent theoretical deficiencies, such as the presence of spacetime singularities. These problems motivate us to investigate alternative gravity theories. One such modified gravity theory is the scalar-tensor-vector gravity (STVG) proposed by Moffat <cit.>, which is based on an action principle and is formulated in terms of the metric tensor, three scalar fields and a massive vector field. Moffat gave the black hole solution in STVG in another paper <cit.>. Moreover, this modified gravity (MOG), i.e., STVG, may be considered an alternative to the dark matter problem, which it addresses through changes in the gravity sector. STVG is able to fit the rotation curves of galaxies <cit.> without invoking dark matter while showing no deviation from solar-system observational tests. However, Jamali and his colleagues <cit.> found that a modified version of STVG, known as mMOG, cannot be deemed an alternative to the dark matter problem when new constants are introduced as coefficients in the kinetic term of the scalar field.
Interest in the physical properties of high-dimensional black holes has increased significantly, even though, in contrast to four-dimensional black holes, they have not been directly observed or experimentally supported. This has much to do with the development of string theory. In addition, the theoretical importance of higher-dimensional black hole solutions was discussed by Emparan and Reall <cit.>. Tangherlini <cit.> first proposed the solutions of the Schwarzschild and Reissner-Nordström black holes in D-dimensional spacetime. Later, Myers et al. obtained the Kerr black hole solution in higher-dimensional spacetime in Ref. <cit.>. Recently, Cai et al. <cit.> derived a high-dimensional static spherically symmetric Schwarzschild black hole in STVG, which is a higher-dimensional extension of the STVG theory, and studied its quasinormal modes of a massless scalar field and its black hole shadow. This black hole solution links Einstein's theory and the STVG theory: it reduces to the Schwarzschild-Tangherlini black hole of Einstein's theory when the coupling constant α is zero.
The black hole entropy is proportional to the area of the event horizon of the black hole, a result known as the Bekenstein-Hawking formula <cit.>. The black hole entropy is the maximum among objects of the same volume, so that the second law of black hole thermodynamics is not violated. However, due to thermal fluctuation, which leads to the concept of the holographic principle <cit.>, the maximum entropy of black holes may be corrected. The correction term for the maximum entropy is generated by quantum fluctuations in the spacetime geometry rather than by the matter fields in the spacetime. For large black holes, quantum fluctuations are negligible. When the size of a black hole shrinks due to Hawking radiation, however, the quantum fluctuations in the spacetime geometry increase. Thus, there is a logarithmic correction at leading order in the black hole entropy <cit.>. Upadhyay investigated the effect of thermal fluctuations on a quasitopological black hole and found that the negative correction term leads to a local instability of black holes <cit.>. The influence of logarithmic corrections on the thermodynamics due to thermal fluctuations for dilaton black holes in gravity's rainbow has been studied in Ref. <cit.>. Several works are devoted to studying thermal fluctuation effects on black hole thermodynamics <cit.>.
Hawking showed that black holes are not completely black objects and can emit radiation, known as Hawking radiation <cit.>. This lays an important foundation for understanding the thermodynamics of black holes. The Hawking radiation detected at infinity differs from the radiation emitted at the black hole horizon by a factor called the greybody factor. The greybody factor, which derives from the transmission amplitude, can provide information related to the quantum nature of the black hole <cit.>. There are several methods to calculate the greybody factor, such as the bounds on greybody factors <cit.>, the WKB method <cit.> and the exact numerical approach <cit.>. In this paper, we choose the bounds on greybody factors because they provide analytical results for intermediate frequencies and all angular momenta.
When a light ray encounters a dense compact object on its trajectory toward a distant observer, the observer finds that the light ray has acquired a deflection angle. That is to say, the compact object bends the light ray, which gives rise to gravitational lensing. Gravitational lensing, which can be classified into strong gravitational lensing, weak gravitational lensing and gravitational microlensing, is therefore used as a special astronomical tool to check whether general relativity is correct. Concretely, strong gravitational lensing is used to calculate the magnification and position of the images of the black hole. Weak gravitational lensing can help us measure the masses of different objects or constrain cosmological parameters. In addition, weak gravitational lensing also has an important effect on the cosmic microwave background <cit.>. At present, strong or weak gravitational lensing of compact objects, such as wormholes, black holes and cosmic strings, has been widely considered <cit.>. Part of the work in the above literature is based on the Gauss-Bonnet theorem to calculate the deflection angle in the weak-lensing regime. The Gauss-Bonnet theorem approach, proposed by Gibbons and Werner <cit.> in 2008, was first used to derive the deflection angle in the context of optical geometry. Since then, this method has been applied to the weak deflection angles of different black holes <cit.>. We will also study the weak gravitational lensing of a high-dimensional Schwarzschild spacetime in STVG by using the Gauss-Bonnet theorem.
Motivated by the above, the purpose of this paper is to study the thermal fluctuation, weak deflection angle and greybody factor of the high-dimensional Schwarzschild black hole in STVG. The present paper is structured as follows. In section <ref>, we briefly introduce the high-dimensional Schwarzschild black hole solution in STVG and review its physical features. In section <ref>, we study the thermodynamic quantities corrected by thermal fluctuation. Section <ref> is devoted to calculating the weak deflection angle using the Gauss-Bonnet theorem. We discuss the bounds on greybody factors in section <ref>. In the last section, our conclusions are summarized.
Throughout this paper, the natural system of units (G_N=ħ=c=1) is adopted.
§ FUNDAMENTAL SPACETIME
In this section, we introduce the high-dimensional Schwarzschild spacetime in scalar-tensor-vector gravity (STVG) and briefly review some of its thermodynamic properties. The general action of the STVG theory in D-dimensional spacetime takes the form <cit.>
S_L=S_GR+S_ϕ+S_S+S_M,
where
S_ GR=1/16π∫ d^Dx√(-g)1/GR,
S_ϕ=-1/4π∫ d^Dx√(-g)(K-1/2μ̃^2ϕ ^μϕ _μ),
S_S =∫ d^D x √(-g)[1/G^3(1/2 g^μν∇_μ G ∇_ν G-V_G(G)) +1/μ̃^2 G(1/2 g^μν∇_μμ̃∇_νμ̃-V_μ̃(μ̃))],
here S_GR is the Einstein-Hilbert action, S_ϕ stands for the action of a massive vector field ϕ^μ, S_S denotes the action of the scalar field and S_M represents the matter action. The black hole metric in the D-dimensional spacetime has the following form
ds^2=-f(r)dt^2+dr^2/f(r)+r^2dΩ ^2_D-2,
with the line element f(r) being <cit.>
f(r) = 1 - m/r^{D-3} + Gq^2/r^{2(D-3)},
where G is the Newton's gravitational constant, G=G_N(1+a). And m and q are defined by
m ≡ 16πGM/((D-2)Ω_{D-2}), q ≡ 8π√(a G_N)M/(√(2(D-2)(D-3))Ω_{D-2}),
where the dimensionless parameter a in the form is regarded as a deviation of the STVG theory from standard general relativity theory and M is the black hole mass. Moreover, Ω_D-2 denoting the volume of unit (D-2)-dimensional sphere has the form
Ω_{D-2} = 2π^{(D-1)/2}/Γ((D-1)/2).
When the dimensionless parameter a vanishes, we recover the Schwarzschild-Tangherlini black hole in Einstein's gravity. Moffat gave a Schwarzschild black hole in STVG for the case D=4 <cit.>. Moreover, one can find that there is a similarity between a high-dimensional Schwarzschild black hole in STVG and a high-dimensional Reissner-Nordström black hole in Einstein gravity from the metric <cit.>. The high-dimensional Schwarzschild STVG black hole possesses up to two horizons
r_± = (m/2 ± √(m^2-4Gq^2)/2)^{1/(D-3)},
where r_- and r_+ represent the Cauchy horizon and the event horizon, respectively. But Mureika et al. <cit.> pointed out that the Schwarzschild black hole in STVG, i.e., MOG black hole, relies only on the mass M and dimensionless parameter a. So q is called the gravitational charge rather than charge.
The black hole mass in terms of r_+ has the form
M = r_+^{D-3}(A - √(A^2-4GB^2))/(2GB^2),
where the coefficients A and B are expressed as
A ≡ 16πG/((D-2)Ω_{D-2}), B ≡ 8π√(a G_N)/(√(2(D-2)(D-3))Ω_{D-2}).
The Hawking temperature is given by
T_H = (1/4π) df(r)/dr|_{r=r_+} = (D-3)(A√(A^2-4GB^2) - A^2 + 4GB^2)/(8πGB^2 r_+).
Also, the Bekenstein-Hawking entropy of this high-dimensional black hole, S_0, is given by
S_0= Ω_D-2 r_+^D-2/4.
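For reference, the sketch below (ours, not the authors' code) evaluates these quantities numerically for given D, a and M, taking G_N = 1 and obtaining the event horizon from f(r_+) = 0.

import math

def omega(D):
    # Volume of the unit (D-2)-dimensional sphere.
    return 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)

def coefficients(D, a, G_N=1.0):
    G = G_N * (1.0 + a)
    A = 16.0 * math.pi * G / ((D - 2) * omega(D))
    B = 8.0 * math.pi * math.sqrt(a * G_N) / (math.sqrt(2.0 * (D - 2) * (D - 3)) * omega(D))
    return G, A, B

def event_horizon(D, a, M, G_N=1.0):
    # Outer root of f(r) = 1 - m/r^(D-3) + G q^2 / r^(2(D-3)) = 0.
    G, A, B = coefficients(D, a, G_N)
    m, q = A * M, B * M
    return ((m + math.sqrt(m ** 2 - 4.0 * G * q ** 2)) / 2.0) ** (1.0 / (D - 3))

def hawking_temperature(D, a, r_p, G_N=1.0):
    G, A, B = coefficients(D, a, G_N)
    X = A ** 2 - 4.0 * G * B ** 2
    return (D - 3) * (A * math.sqrt(X) - X) / (8.0 * math.pi * G * B ** 2 * r_p)

def bh_entropy(D, r_p):
    return omega(D) * r_p ** (D - 2) / 4.0

D, a, M = 5, 0.2, 1.0
r_p = event_horizon(D, a, M)
print(r_p, hawking_temperature(D, a, r_p), bh_entropy(D, r_p))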
§ THERMAL FLUCTUATIONS
In the section, we investigate the influence of thermal fluctuations on thermodynamic potential of a high-dimensional Schwarzschild black hole in STVG. First of all, we simply introduce the thermal fluctuation and then calculate some important modified thermodynamics quantities.
We can not neglect the influence of the thermal fluctuation on the black hole thermodynamics when the radius of the black hole decreases and the temperature of the black hole is large. The thermal fluctuation will be regarded as a perturbation around the state of equilibrium if it is small enough. Using the partition function approach, a general expression for the corrected entropy area relation is written as <cit.>
S=S_0 -αln(S_0T^2)+ λ/S_0,
where α is the leading order correction parameter and λ is the second order correction parameter. The leading order correction is a logarithmic term caused by the thermal fluctuations, and the second order correction proportional to the inverse to uncorrected entropy is produced by extending the entropy function around the equilibrium.
Using Eqs. (<ref>) and (<ref>), the corrected entropy of this high-dimensional black hole is given as
S = r_+^{D-2}Ω_{D-2}/4 + 4λ/(r_+^{D-2}Ω_{D-2}) - αln[(D-3)^2(A^2-4GB^2)(A-√(A^2-4GB^2))^2 r_+^{D-4}Ω_{D-2}/(256G^2B^4π^2)].
We draw the corrected entropy versus the event horizon radius for different parameters in Figs.<ref> and <ref>. As shown in Fig.<ref>, the presence of the leading-order correction leads to an increase in entropy for small values of the event horizon radius. However, the corrected entropy gradually decreases and recovers to the original entropy as the event horizon radius increases. This means that the equilibrium of the small black hole is unstable due to Δ S >0 when the black hole is regarded as an isolated system. The right figure in Fig. <ref> shows that the inverse correction term has a significant influence on the entropy for a small black hole. In fact, compared to the large black hole, the thermal fluctuation has a greater impact on the small black hole. We also show the effect of spacetime dimensionality on the corrected entropy in the left figure of Fig.<ref>. We find that the change of corrected entropy is not only fast but also large in high-dimensional spacetime. So one can easily see that for a small or large black hole, the higher the dimension, the larger the corrected entropy, whereas this is not the case for intermediate-size black holes. We also obtain from the left figure in Fig. <ref> that the STVG parameter a leads to a slight increase in corrected entropy.
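A short numerical sketch of the corrected entropy of Eq. (<ref>) is given below; the values of S_0, T, α and λ are illustrative toy inputs, with S_0(r_+) and T(r_+) in practice taken from Eqs. (<ref>) and (<ref>).

import math

def corrected_entropy(S0, T, alpha, lam):
    # S = S0 - alpha * ln(S0 * T^2) + lam / S0.
    return S0 - alpha * math.log(S0 * T ** 2) + lam / S0

# Toy (S0, T) pairs standing in for S_0(r_+) and T(r_+).
for S0, T in [(0.5, 0.30), (2.0, 0.15), (8.0, 0.07), (32.0, 0.04)]:
    print(S0, corrected_entropy(S0, T, alpha=0.5, lam=0.5))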
We can calculate the Helmholtz free energy using the corrected entropy and temperature as
F =-∫ S d T = (D-3)√(A^2-4 G B^2)(A-√(A^2-4 G B^2))/8 G B^2π
×(-4 r_+^D-1λ/(D-1)Ω_D-2 + r_+^D-3Ω_D-2/4(D-3)+ α/r( D-4 + ln[ (D-3)^2√(A^2-4 G B^2)(A-√(A^2-4 G B^2))^2 r_+^D-4Ω_D-2/256 G^2 B^4π^2])).
In order to have a better understanding of the corrected Helmholtz free energy, we plot the Helmholtz free energy in terms of the event horizon for the different parameters α,λ, D, a in Figs.<ref> and <ref>. In Fig.<ref>, we can find that the Helmholtz free energy without any corrections is a function that increases monotonically and keeps positive. It is worth noting that the Helmholtz free energy becomes negative for a small black hole under the thermal fluctuation but returns to positive with the increase of event horizon radius. In contrast to the case of the small black hole, the presence of logarithmic correction term increases the Helmholtz free energy for the larger black hole. We can conclude that thermal fluctuation causes small black holes to be more stable. In addition, we also obtain from the left in Fig.<ref> that the impact of spacetime dimension on the modified Helmholtz free energy is similar to that of logarithmic correction. We can see the effect of parameter a on the corrected Helmholtz free energy in the right figure of Fig.<ref>. It is clear that the parameter a decreases the corrected Helmholtz free energy.
The internal energy as one of the thermodynamic quantities has the thermodynamics relationship E=F+TS, i.e.,
E =-1/32 π (D-1) G B^2Ω_D-2(4GB^2+A(√(A^2-4GB^2)-A)r_+^-D-3) r_+^-D-3
×(16(D-3)(D-2)r_+^4λ+(D-1)r_+^DΩ_D-2×(4(D-4)(D-3)r_+^2α+(D-2)r_+^DΩ_D-2)).
Figs.<ref> and <ref> present the behavior of the corrected internal energy with increasing event horizon radius for the different parameters α, λ, D, a. As shown in Fig.<ref>, the internal energy has a positive asymptotic value under thermal fluctuation for a small black hole, whereas the effect of thermal fluctuation can be neglected when we increase the event horizon radius. We can see clearly that the higher the dimensionality of the black hole, the larger the corrected internal energy. However, the corrected internal energy decreases as the STVG parameter increases.
Next, we investigate the heat capacity of the black hole, which can be written as C=(dU/dT)_V=(dU/dr)/(dT/dr) using Eqs. (<ref>) and (<ref>), concretely
C = (D-4)α + 4(D-2)λ/(r_+^{D-2}Ω_{D-2}) - (D-2)r_+^{D-2}Ω_{D-2}/4.
We show the behavior of the heat capacity in Figs.<ref> and <ref>. In Fig.<ref>, we observe that without any thermal fluctuation the heat capacity is negative and thus the black hole is thermodynamically unstable. The existence of thermal fluctuations causes small black holes to have positive heat capacity, and thus there is a phase transition that marks the passage of the system from stable to unstable. Moreover, the critical point gradually moves to the right when we increase the correction coefficients α, λ. From Fig.<ref>, we can see that the phase transition occurs at a larger event horizon radius if the spacetime dimensionality D increases. It is worth mentioning that the heat capacity of a high-dimensional Schwarzschild black hole in STVG reduces to that of the Schwarzschild-Tangherlini black hole. That is to say, the STVG parameter does not affect the stability conditions of black holes.
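The sketch below evaluates the corrected heat capacity and locates the sign change numerically; the correction parameters α and λ are illustrative, and the grouping of the λ term follows the λ/S_0 correction of Eq. (<ref>).

import math

def omega(D):
    return 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)

def heat_capacity(D, r_p, alpha=0.5, lam=0.5):
    Om = omega(D)
    return ((D - 4) * alpha
            + 4.0 * (D - 2) * lam / (r_p ** (D - 2) * Om)
            - (D - 2) * r_p ** (D - 2) * Om / 4.0)

# Crude scan for the sign change of C(r_+), i.e. the phase transition radius.
D = 5
prev = None
for i in range(1, 400):
    r = 0.01 * i
    C = heat_capacity(D, r)
    if prev is not None and prev[1] > 0.0 >= C:
        print(f"phase transition between r_+ = {prev[0]:.2f} and r_+ = {r:.2f}")
        break
    prev = (r, C)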
§ WEAK DEFLECTION ANGLE
In this section, we would like to obtain the deflection angle in weak field limit using Gauss-Bonnet theorem. For equatorial plane θ =π/2 and null geodesic ds^2=0, the corresponding optical metric of a high-dimensional Schwarzschild black hole in STVG has the following form
dt^2=1/f^2(r)dr^2+r^2/f(r)dφ^2.
Afterwards, we can rewrite the optical metric using the coordinate transformation dr_*=1/f(r)dr as
dt^2= dr_*^2+ f̃^2(r_*)dφ^2,
where f̃(r_*)≡√(r^2/f(r)).
We obtain the Gaussian optical curvature as following <cit.>
K = RicciScalar/2 = (1/4)(D-3)r^{1-4D}(4(D-2)G^2q^4r^9 - 2(D-2)Mr^{3D}
- 6(D-2)Gq^2r^{6+D} + ((D-1)M^2 + 4(2D-5)Gq^2)r^{3+2D}).
Now, we can calculate the deflection angle utilizing Gauss-Bonnet theorem <cit.>. The domain D is deemed to be a subset of a compact, oriented surface, with Gaussian optical curvature K and Euler characteristic number χ(D) and ∂D is the piecewise smooth boundary of domain D with geodesic curvature κ. We consider α_i to be the i^th exterior angle. The Gauss-Bonnet theorem is that
∫∫_DKdS+∫_∂Dκd t+∑_iα_i=2πχ(D),
where dS stands for the surface element. In addition, the geodesic curvature κ along a smooth curve γ is written as κ=g(Δ_γ̇γ̇,γ̈) where γ̈ denotes unit acceleration vector. We consider that D is bounded by the geodesics γ_c and geodesic γ_R where γ_R is considered to be perpendicular to γ_c at the source S and the observer O, so κ (γ_c)=0 by definition. Then ∑_iα_i=α_S+α_O as well as χ(D)=1. Eq.(<ref>) reduces to
∫∫_DKdS+∫_γ_Rκ (γ_R)d t =π.
Utilizing the definition of geodesic curvature, the radial part of κ (γ_p) can be expressed as
κ (γ_p)= (Δ_γ̇_̇ṗγ̇_̇ṗ)^r=γ̇_R^ϕ(∂_ϕγ̇_R^r)+Γ_ϕϕ^r(γ̇_R^ϕ)^2,
where γ̇_R represents the tangent vector of geodesics γ_R and Γ_ϕϕ^r is the Christoffel symbol. When we consider γ_R:=R=const, the first term on the right side of the above equation equals zero and the second term is 1/R. So κ (γ_R) reduces to 1/R.
We can make a change of variables dt using the relevant optical metric (<ref>), which can be rewritten as dt=R dφ.
Eq.(<ref>) becomes
∫∫_DKdS+∫_0^π+αdφ =π.
Finally, we obtain the deflection angle <cit.>
α̂=-∫_0^π∫_b/ sinϕ^∞K dS.
Now, we can calculate the deflection angle of a high-dimensional Schwarzschild black hole in STVG for the different spacetime dimensionality. As an example, we calculate the deflection angle when D=4,5,6,7
α̂_{D=4} = 2m/b - 3πm^2/(16b^2) - 3πGq^2/(4b^2) + 4Gmq^2/(3b^3) + O(q^4/b^4),
α̂_{D=5} = 3πm/(4b^2) - 3πm^2/(16b^4) - 15πGq^2/(16b^4) + 15πGmq^2/(32b^6) + O(q^4/b^8),
α̂_{D=6} = 8m/(3b^3) - 25πm^2/(128b^6) - 35πGq^2/(32b^6) + 512Gmq^2/(315b^9) + O(q^4/b^12),
α̂_{D=7} = 15πm/(16b^4) - 105πm^2/(512b^8) - 315πGq^2/(256b^8) + 1155πGmq^2/(2048b^12) + O(q^4/b^16).
We draw the behavior of the deflection angle with respect to the impact parameter for different values of D and a in Fig.<ref>. It is clear that the higher the black hole dimension, the smaller the deflection angle. However, the STVG parameter has an increasing effect on the deflection angle, i.e., a high-dimensional Schwarzschild black hole in STVG leads to a larger deflection angle than a Schwarzschild-Tangherlini black hole.
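As a simple numerical check, the D = 4 series above can be evaluated as follows; the values of m, q and G are purely illustrative.

import math

def deflection_D4(b, m, q, G):
    # Leading terms of the D = 4 weak deflection angle in the impact parameter b.
    return (2.0 * m / b
            - 3.0 * math.pi * m ** 2 / (16.0 * b ** 2)
            - 3.0 * math.pi * G * q ** 2 / (4.0 * b ** 2)
            + 4.0 * G * m * q ** 2 / (3.0 * b ** 3))

m, q, G = 1.0, 0.3, 1.2   # e.g. G = G_N(1 + a) with G_N = 1 and a = 0.2
for b in (10.0, 20.0, 50.0, 100.0):
    print(b, deflection_D4(b, m, q, G))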
§ GREYBODY FACTOR
In this section, we study the bounds on greybody factors for the massless scalar field. The massless scalar field Φ is represented by the Klein-Gordon equation <cit.>
1/√(-g)∂_μ(√(-g)g^μν∂_ν)Φ=0,
where g is the determinant of the metric tensor. In order to separate radial and angular variables, we have an ansatz Φ=e^-iω t Y_lm(Ω)Ψ(r) and make a change dr_*=dr/f(r). Substituting the above definitions and metric function Eq. (<ref>) into Eq. (<ref>), we obtain a Schrödinger-like wave expression
d^2Ψ(r)/d^2r_*+[ω^2-V_eff(r)]Ψ(r)=0,
in which ω denotes the frequency, and l and m are the azimuthal quantum number and the spherical harmonic index, respectively.
The effective potential V_eff(r) can be written as
V_eff(r) = f(r)[l(D+l-3)/r^2 + (D-2)(D-4)f(r)/(4r^2) + (D-2)f'(r)/(2r)].
To better understand the effect of the dimensionality of the spacetime and STVG parameter on the effective potential, we visualize the effective potential with respect to the black hole radius for different values of D and a in Fig.<ref>. Obviously, the dimensionality of the spacetime causes an increase in the effective potential whereas the STVG parameter has the opposite effect. We can expect the behavior of greybody factors from the effective potential.
The bounds on greybody factors can be expressed as <cit.>
T≥sech^2[∫_-∞^∞√((h')^2+(ω^2-V_eff-h^2)^2)/ 2hdr_*],
where h≡ h(r_*) and h(r_*)>0. h is an arbitrary function and satisfies h(-∞)=h(∞)=ω and there are two particular functional forms of h considered in Ref.<cit.>. Here we only consider the case h=ω. Thus Eq.(<ref>) is rewritten as
T≥sech^2[1/2ω∫_r_+^∞V_eff/f(r)dr].
After expanding the integral, we obtain the lower bound on the greybody factors
T ≥ sech^2[-1/(2ω)((-8+2D+D^2-12l+4lD+4l^2)/(4r_+)
- (-2+D)B^2(-16+3D)Gm^2 r_+^{5-2D}/(4(2D-5)) + (D-10)Am r_+^{2-D}/4)].
Fig.<ref> demonstrates the behavior of the greybody factor for the high-dimensional Schwarzschild black hole in STVG. We observe from the left panel that the greybody factor reduces as the dimension increases. That is to say, the greybody factor is suppressed in high-dimensional spacetime. It indicates that fewer massless scalar particles pass through the potential barrier and reach spatial infinity in a higher-dimensional black hole. Additionally, we observe that as the STVG parameter a increases, the greybody factor increases. That is, the STVG parameter makes the gravitational potential more transparent.
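The sketch below evaluates the lower bound of Eq. (<ref>), as reconstructed above, over a range of frequencies; the values of r_+, A, B, G, m and the quantum numbers are illustrative inputs rather than values quoted in the paper.

import math

def greybody_lower_bound(omega_f, D, l, r_p, A, B, G, m):
    # Integrated barrier term appearing in the bound for the choice h = omega.
    integral = ((-8.0 + 2.0 * D + D ** 2 - 12.0 * l + 4.0 * l * D + 4.0 * l ** 2) / (4.0 * r_p)
                - (D - 2) * B ** 2 * (3.0 * D - 16.0) * G * m ** 2 * r_p ** (5 - 2 * D)
                  / (4.0 * (2.0 * D - 5.0))
                + (D - 10) * A * m * r_p ** (2 - D) / 4.0)
    x = integral / (2.0 * omega_f)   # cosh is even, so the overall sign is irrelevant
    return 1.0 / math.cosh(x) ** 2   # sech^2

for w in (0.5, 1.0, 2.0, 4.0):
    print(w, greybody_lower_bound(w, D=5, l=0, r_p=1.0, A=1.0, B=0.3, G=1.2, m=0.8))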
§ CONCLUSION
In this paper, we analyzed thermal fluctuation, weak deflection angle and greybody factor for a high-dimensional Schwarzschild black hole in STVG.
First, we evaluated the influence of the logarithmic and higher-order corrections of the entropy on the Helmholtz free energy, internal energy and heat capacity, and made a comparison between corrected and uncorrected thermodynamic properties. Overall, the corrected entropy, as a consequence of thermal fluctuation, shows a trend of first decreasing and then increasing, and the impact of thermal fluctuation is significant for a small black hole. Due to the effect of the dimensionality of spacetime, the curves of the modified entropy have different intersections. As a result, for small and large black holes, the corrected entropy increases as the spacetime dimensionality increases, whereas this is not the case for intermediate-size black holes. The existence of the STVG parameter leads to a slight increase in corrected entropy. A black hole with small values of the event horizon radius possesses a negative Helmholtz free energy because of the thermal fluctuation. The Helmholtz free energy increases monotonically with increasing values of the parameters D and a for a small black hole. For a larger black hole, the parameters D and a have the opposite effects on the Helmholtz free energy. The internal energy remains positive and its behavior is similar to that of the corrected entropy. The internal energy increases with the increase of dimensions, while it decreases as the STVG parameter increases. In addition, from the analysis of the Helmholtz free energy and heat capacity, we found that thermal fluctuation makes small black holes more stable in all dimensional cases, and that the heat capacity is independent of the STVG parameter.
Second, we calculated the weak deflection angle with the Gauss-Bonnet theorem and presented its expression for D=4,5,6,7. We pointed out that the weak deflection angle becomes smaller in higher-dimensional spacetime, while the presence of the STVG parameter increases it.
Finally, we computed the greybody factors of the massless scalar field and analyzed the effect of the spacetime dimensionality and the STVG parameter on them. We found that the 4-dimensional black hole has the largest greybody factors whereas the 7-dimensional black hole has the smallest, and that the greybody factor increases with the STVG parameter. Hence, more radiation reaches spatial infinity for a 4-dimensional black hole with a larger value of the STVG parameter.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ REFERENCES
Astier:2012ba
P. Astier and R. Pain,
Observational Evidence of the Accelerated Expansion of the Universe.
Comptes Rendus Physique 13 (2012), 521-538.
doi:10.1016/j.crhy.2012.04.009
Moffat:2013sja
J. W. Moffat and S. Rahvar,
The MOG weak field approximation and observational test of galaxy rotation curves.
Mon. Not. Roy. Astron. Soc. 436 (2013), 1439-1451.
doi:10.1093/mnras/stt1670
Planck:2015fie
P. A. R. Ade et al. [Planck],
Planck 2015 results. XIII. Cosmological parameters.
Astron. Astrophys. 594 (2016), A13.
doi:10.1051/0004-6361/201525830
Moffat:2005si
J. W. Moffat,
Scalar-tensor-vector gravity theory.
JCAP 03 (2006), 004.
doi:10.1088/1475-7516/2006/03/004
Moffat:2014aja
J. W. Moffat,
Black Holes in Modified Gravity (MOG).
Eur. Phys. J. C 75 (2015), 175.
doi:10.1140/epjc/s10052-015-3405-x
Brownstein:2005zz
J. R. Brownstein and J. W. Moffat,
Galaxy rotation curves without non-baryonic dark matter.
Astrophys. J. 636 (2006), 721-741.
doi:10.1086/498208
Jamali:2017zrh
S. Jamali, M. Roshan and L. Amendola,
On the cosmology of scalar-tensor-vector gravity theory,
JCAP 01 (2018), 048.
doi:10.1088/1475-7516/2018/01/048
Emparan:2008eg
R. Emparan and H. S. Reall,
Black Holes in Higher Dimensions.
Living Rev. Rel. 11 (2008), 6.
doi:10.12942/lrr-2008-6
Tangherlini:1963bw
F. R. Tangherlini,
Schwarzschild field in n dimensions and the dimensionality of space problem.
Nuovo Cim. 27 (1963), 636-651.
doi:10.1007/BF02784569
Myers:1986un
R. C. Myers and M. J. Perry,
Black Holes in Higher Dimensional Space-Times.
Annals Phys. 172 (1986), 304.
doi:10.1016/0003-4916(86)90186-7
Cai:2020igv
X. C. Cai and Y. G. Miao,
High-dimensional Schwarzschild black holes in scalar–tensor–vector gravity theory.
Eur. Phys. J. C 81 (2021), 559.
doi:10.1140/epjc/s10052-021-09351-x
Bekenstein:1973ur
J. D. Bekenstein,
Black holes and entropy.
Phys. Rev. D 7 (1973), 2333-2346.
doi:10.1103/PhysRevD.7.2333
Easther:1999gk
R. Easther and D. A. Lowe,
Holography, cosmology and the second law of thermodynamics.
Phys. Rev. Lett. 82 (1999), 4967-4970.
doi:10.1103/PhysRevLett.82.4967
Das:2001ic
S. Das, P. Majumdar and R. K. Bhaduri,
General logarithmic corrections to black hole entropy.
Class. Quant. Grav. 19 (2002), 2355-2368.
doi:10.1088/0264-9381/19/9/302
Upadhyay:2017qmv
S. Upadhyay,
Quantum corrections to thermodynamics of quasitopological black holes.
Phys. Lett. B 775 (2017), 130-139.
doi:10.1016/j.physletb.2017.10.059
Dehghani:2018qvn
M. Dehghani,
Thermodynamics of charged dilatonic BTZ black holes in rainbow gravity.
Phys. Lett. B 777 (2018), 351-360.
doi:10.1016/j.physletb.2017.12.048
Jawad:2017mwt
A. Jawad and M. U. Shahzad,
Effects of Thermal Fluctuations on Non-minimal Regular Magnetic Black Hole.
Eur. Phys. J. C 77 (2017), 349.
doi:10.1140/epjc/s10052-017-4914-6
Shahzad:2018znu
M. U. Shahzad and A. Jawad,
Thermodynamics of Black holes With Higher Order Corrected Entropy.
Can. J. Phys. 97 (2019), 742-751.
doi:10.1139/cjp-2018-0091
Sharif:2021vex
M. Sharif and Z. Akhtar,
Study of thermal fluctuations in five-dimensional rotating regular black hole.
Chin. J. Phys. 71 (2021), 669-682.
doi:10.1016/j.cjph.2021.04.005
Khan:2022zcf
Y. H. Khan and P. A. Ganai,
Remnants and thermal corrections in Horndeski black holes with non-minimal kinetic coupling.
Eur. Phys. J. Plus 137 (2022), 827.
doi:10.1140/epjp/s13360-022-03036-4
Ama-Tul-Mughani:2022wtg
Q. Ama-Tul-Mughani, A. Waseem, W. u. Salam and A. Jawad,
Greybody factor and thermal fluctuations of rotating regular black hole bounded by PFDM.
Chin. J. Phys. 77 (2022), 2213-2227.
doi:10.1016/j.cjph.2021.11.024
Chen:2021czh
X. Chen, X. Huang, J. Chen and Y. Wang,
Effect of thermal fluctuation on the thermodynamics of GMGHS black hole.
Gen. Rel. Grav. 53 (2021), 9.
doi:10.1007/s10714-020-02780-1
Upadhyay:2019hyw
S. Upadhyay, Nadeem-ul-islam and P. A. Ganai,
A modified thermodynamics of rotating and charged BTZ black hole.
JHAP 2 (2022), 25-48.
doi:10.22128/jhap.2021.454.1004
Khan:2021tzv
Y. H. Khan, S. Upadhyay and P. A. Ganai,
Stability of remnants of Bardeen regular black holes in presence of thermal fluctuations.
Mod. Phys. Lett. A 36 (2021), 2130023.
doi:10.1142/S0217732321300238
Hawking:1974rv
S. W. Hawking,
Black hole explosions.
Nature 248 (1974), 30-31.
doi:10.1038/248030a0
Hawking:1975vcx
S. W. Hawking,
Particle Creation by Black Holes.
Commun. Math. Phys. 43 (1975), 199-220
[erratum: Commun. Math. Phys. 46 (1976), 206].
doi:10.1007/BF02345020
Barman:2019vst
S. Barman,
The Hawking effect and the bounds on greybody factor for higher dimensional Schwarzschild black holes.
Eur. Phys. J. C 80 (2020), 50.
doi:10.1140/epjc/s10052-020-7613-7
Boonserm:2008zg
P. Boonserm and M. Visser,
Bounding the greybody factors for Schwarzschild black holes.
Phys. Rev. D 78 (2008), 101502.
doi:10.1103/PhysRevD.78.101502
Boonserm:2014fja
P. Boonserm, A. Chatrabhuti, T. Ngampitipan and M. Visser,
Greybody factors for Myers-Perry black holes.
J. Math. Phys. 55 (2014), 112502.
doi:10.1063/1.4901127
Boonserm:2017qcq
P. Boonserm, T. Ngampitipan and P. Wongjun,
Greybody factor for black holes in dRGT massive gravity.
Eur. Phys. J. C 78 (2018), 492.
doi:10.1140/epjc/s10052-018-5975-x
Okyay:2021nnh
M. Okyay and A. Övgün,
Nonlinear electrodynamics effects on the black hole shadow, deflection angle, quasinormal modes and greybody factors.
JCAP 01 (2022), 009.
doi:10.1088/1475-7516/2022/01/009
Kokkotas:2010zd
K. D. Kokkotas, R. A. Konoplya and A. Zhidenko,
Quasinormal modes, scattering and Hawking radiation of Kerr-Newman black holes in a magnetic field.
Phys. Rev. D 83 (2011), 024031.
doi:10.1103/PhysRevD.83.024031
Konoplya:2020jgt
R. A. Konoplya, A. F. Zinhailo and Z. Stuchlik,
Quasinormal modes and Hawking radiation of black holes in cubic gravity.
Phys. Rev. D 102 (2020), 044023.
doi:10.1103/PhysRevD.102.044023
Li:2022jda
Q. Li, C. Ma, Y. Zhang, Z. W. Lin and P. F. Duan,
Gray-body factor and absorption of the Dirac field in ESTGB gravity.
Chin. J. Phys. 77 (2022), 1269-1277.
doi:10.1016/j.cjph.2022.03.027
Harris:2003eg
C. M. Harris and P. Kanti,
Hawking radiation from a (4+n)-dimensional black hole: Exact results for the Schwarzschild phase.
JHEP 10 (2003), 014.
doi:10.1088/1126-6708/2003/10/014
Catalan:2014ama
M. Catalán, E. Cisternas, P. A. González and Y. Vásquez,
Quasinormal modes and greybody factors of a four-dimensional Lifshitz black hole with z=0.
Astrophys. Space Sci. 361 (2016), 189.
doi:10.1007/s10509-016-2764-6
Abedi:2013xua
J. Abedi and H. Arfaei,
Fermionic greybody factors in dilaton black holes.
Class. Quant. Grav. 31 (2014), 195005.
doi:10.1088/0264-9381/31/19/195005
Lewis:2006fu
A. Lewis and A. Challinor,
Weak gravitational lensing of the CMB.
Phys. Rept. 429 (2006), 1-65.
doi:10.1016/j.physrep.2006.03.002
Peloton:2016kbw
J. Peloton, M. Schmittfull, A. Lewis, J. Carron and O. Zahn,
Full covariance of CMB and lensing reconstruction power spectra.
Phys. Rev. D 95 (2017), 043508.
doi:10.1103/PhysRevD.95.043508
Pratten:2016dsm
G. Pratten and A. Lewis,
Impact of post-Born lensing on the CMB.
JCAP 08 (2016), 047.
doi:10.1088/1475-7516/2016/08/047
Chen:2015cpa
S. Chen and J. Jing,
Strong gravitational lensing for the photons coupled to Weyl tensor in a Schwarzschild black hole spacetime.
JCAP 10, 002 (2015).
doi:10.1088/1475-7516/2015/10/002
Chen:2016hil
S. Chen, S. Wang, Y. Huang, J. Jing and S. Wang,
Strong gravitational lensing for the photons coupled to a Weyl tensor in a Kerr black hole spacetime.
Phys. Rev. D 95, 104017 (2017).
doi:10.1103/PhysRevD.95.104017
Wang:2016paq
S. Wang, S. Chen and J. Jing,
Strong gravitational lensing by a Konoplya-Zhidenko rotating non-Kerr compact object.
JCAP 11, 020 (2016).
doi:10.1088/1475-7516/2016/11/020
Lu:2016gsf
X. Lu, F. W. Yang and Y. Xie,
Strong gravitational field time delay for photons coupled to Weyl tensor in a Schwarzschild black hole.
Eur. Phys. J. C 76, 357 (2016).
doi:10.1140/epjc/s10052-016-4218-2
Zhao:2016kft
S. S. Zhao and Y. Xie,
Strong field gravitational lensing by a charged Galileon black hole.
JCAP 07, 007 (2016).
doi:10.1088/1475-7516/2016/07/007
Zhao:2017cwk
S. S. Zhao and Y. Xie,
Strong deflection gravitational lensing by a modified Hayward black hole.
Eur. Phys. J. C 77, 272 (2017).
doi:10.1140/epjc/s10052-017-4850-5
Zhang:2017vap
R. Zhang, J. Jing and S. Chen,
Strong gravitational lensing for black holes with scalar charge in massive gravity.
Phys. Rev. D 95, no.6, 064054 (2017).
doi:10.1103/PhysRevD.95.064054
Abbas:2019olp
G. Abbas, A. Mahmood and M. Zubair,
Strong Gravitational Lensing for Photon Coupled to Weyl Tensor in Kiselev Black Hole.
Chin. Phys. C 44, 095105 (2020).
doi:10.1088/1674-1137/44/9/095105
Bergliaffa:2020ivp
S. E. P. Bergliaffa, E. E. d. Filho and R. Maier,
Strong Lensing and Nonminimally Coupled Electromagnetism.
Phys. Rev. D 101, 124038 (2020).
doi:10.1103/PhysRevD.101.124038
Wang:2019cuf
C. Y. Wang, Y. F. Shen and Y. Xie,
Weak and strong deflection gravitational lensings by a charged Horndeski black hole.
JCAP 04, 022 (2019).
doi:10.1088/1475-7516/2019/04/022
Kumaran:2019qqp
Y. Kumaran and A. Övgün,
Weak Deflection Angle of Extended Uncertainty Principle Black Holes.
Chin. Phys. C 44, 025101 (2020).
doi:10.1088/1674-1137/44/2/025101
Javed:2020frq
W. Javed, M. B. Khadim and A. Övgün,
Weak gravitational lensing by Bocharova–Bronnikov–Melnikov–Bekenstein black holes using Gauss–Bonnet theorem.
Eur. Phys. J. Plus 135, 595 (2020).
doi:10.1140/epjp/s13360-020-00619-x
Kumar:2020sag
R. Kumar, S. U. Islam and S. G. Ghosh,
Gravitational lensing by charged black hole in regularized 4D Einstein–Gauss–Bonnet gravity.
Eur. Phys. J. C 80, 1128 (2020).
doi:10.1140/epjc/s10052-020-08606-3
ElMoumni:2020wrf
H. El Moumni, K. Masmar and A. Övgün,
Weak deflection angle of light in two classes of black holes in nonlinear electrodynamics via Gauss–Bonnet theorem.
Int. J. Geom. Meth. Mod. Phys. 19, 2250094 (2022).
doi:10.1142/S0219887822500943
Javed:2020pyz
W. Javed, J. Abbas, Y. Kumaran and A. Övgün,
Weak deflection angle by asymptotically flat black holes in Horndeski theory using Gauss-Bonnet theorem.
Int. J. Geom. Meth. Mod. Phys. 18, 2150003 (2021).
doi:10.1142/S0219887821500031
Xu:2021rld
X. Xu, T. Jiang and J. Jia,
Deflection angle with electromagnetic interaction and gravitational-electromagnetic dual lensing.
JCAP 08, 022 (2021).
doi:10.1088/1475-7516/2021/08/022
Javed:2021arr
W. Javed, A. Hamza and A. Övgün,
Weak Deflection Angle and Shadow by Tidal Charged Black Hole.
Universe 7, 385 (2021).
doi:10.3390/universe7100385
Gao:2021luq
Y. X. Gao and Y. Xie,
Gravitational lensing by hairy black holes in Einstein-scalar-Gauss-Bonnet theories.
Phys. Rev. D 103, no.4, 043008 (2021).
doi:10.1103/PhysRevD.103.043008
Javed:2020lsg
W. Javed, A. Hamza and A. Övgün,
Effect of nonlinear electrodynamics on the weak field deflection angle by a black hole.
Phys. Rev. D 101 (2020), 103521.
doi:10.20944/preprints201911.0142.v1
Gibbons:2008rj
G. W. Gibbons and M. C. Werner,
Applications of the Gauss-Bonnet theorem to gravitational lensing.
Class. Quant. Grav. 25 (2008), 235009.
doi:10.1088/0264-9381/25/23/235009
Ishihara:2016vdc
A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura and H. Asada,
Gravitational bending angle of light for finite distance and the Gauss-Bonnet theorem.
Phys. Rev. D 94 (2016), 084015.
doi:10.1103/PhysRevD.94.084015
Islam:2020xmy
S. U. Islam, R. Kumar and S. G. Ghosh,
Gravitational lensing by black holes in the 4D Einstein-Gauss-Bonnet gravity.
JCAP 09 (2020), 030.
doi:10.1088/1475-7516/2020/09/030
Zhu:2019ura
T. Zhu, Q. Wu, M. Jamil and K. Jusufi,
Shadows and deflection angle of charged and slowly rotating black holes in Einstein-Æther theory.
Phys. Rev. D 100 (2019), 044055.
doi:10.1103/PhysRevD.100.044055
Sakalli:2017ewb
I. Sakalli and A. Ovgun,
Hawking Radiation and Deflection of Light from Rindler Modified Schwarzschild Black Hole.
EPL 118 (2017), 60006.
doi:10.1209/0295-5075/118/60006
Jusufi:2018jof
K. Jusufi, A. Övgün, J. Saavedra, Y. Vásquez and P. A. González,
Deflection of light by rotating regular black holes using the Gauss-Bonnet theorem.
Phys. Rev. D 97 (2018), 124024.
doi:10.1103/PhysRevD.97.124024
Ovgun:2018fte
A. Övgün, İ. Sakallı and J. Saavedra,
Weak gravitational lensing by Kerr-MOG black hole and Gauss–Bonnet theorem.
Annals Phys. 411 (2019), 167978.
doi:10.1016/j.aop.2019.167978
Li:2020wvn
Z. Li, G. Zhang and A. Övgün,
Circular Orbit of a Particle and Weak Gravitational Lensing.
Phys. Rev. D 101 (2020), 124058.
doi:10.1103/PhysRevD.101.124058
Javed:2020fli
W. Javed, M. B. Khadim, A. Övgün and J. Abbas,
Weak gravitational lensing by stringy black holes.
Eur. Phys. J. Plus 135 (2020), 314.
doi:10.1140/epjp/s13360-020-00322-x
Belhaj:2020rdb
A. Belhaj, M. Benali, A. El Balali, H. El Moumni and S. E. Ennadifi,
Deflection angle and shadow behaviors of quintessential black holes in arbitrary dimensions.
Class. Quant. Grav. 37 (2020), 215004.
doi:10.1088/1361-6382/abbaa9
Pourhassan:2017kmm
B. Pourhassan, K. Kokabi and S. Rangyan,
Thermodynamics of higher dimensional black holes with higher order thermal fluctuations.
Gen. Rel. Grav. 49 (2017), 144.
doi:10.1007/s10714-017-2315-7
Mureika:2015sda
J. R. Mureika, J. W. Moffat and M. Faizal,
Black hole thermodynamics in MOdified Gravity (MOG).
Phys. Lett. B 757 (2016), 528-536.
doi:10.1016/j.physletb.2016.04.041
Pourhassan:2016zzc
B. Pourhassan and M. Faizal,
Thermodynamics of a sufficient small singly spinning Kerr-AdS black hole.
Nucl. Phys. B 913 (2016), 834-851.
doi:10.1016/j.nuclphysb.2016.10.013
Pourhassan:2017rie
B. Pourhassan, H. Farahani and S. Upadhyay,
Thermodynamics of higher-order entropy corrected Schwarzschild–Beltrami–de Sitter black hole.
Int. J. Mod. Phys. A 34 (2019), 1950158.
doi:10.1142/S0217751X19501586
Pourhassan:2018wjg
B. Pourhassan, M. Faizal and S. A. Ketabi,
Logarithmic correction of the BTZ black hole and adaptive model of Graphene.
Int. J. Mod. Phys. D 27 (2018), 1850118.
doi:10.1142/S0218271818501183
Bubuianu:2018qsq
L. Bubuianu and S. I. Vacaru,
Black holes with MDRs and Bekenstein–Hawking and Perelman entropies for Finsler–Lagrange–Hamilton Spaces.
Annals Phys. 404 (2019), 10-38.
doi:10.1016/j.aop.2019.02.013
Sharif:2022ccc
M. Sharif and A. Khan,
Thermal fluctuations, quasi-normal modes and phase transitions of regular black hole.
Chin. J. Phys. 77 (2022), 1885-1902.
doi:10.1016/j.cjph.2022.01.002
Sharif:2020hid
M. Sharif and Q. Ama-Tul-Mughani,
Phase transition and thermal fluctuations of quintessential Kerr–Newman-AdS black hole.
Phys. Dark Univ. 30 (2020), 100723.
doi:10.1016/j.dark.2020.100723
Berti:2009kk
E. Berti, V. Cardoso and A. O. Starinets,
Quasinormal modes of black holes and black branes.
Class. Quant. Grav. 26 (2009), 163001.
doi:10.1088/0264-9381/26/16/163001
http://arxiv.org/abs/2307.05675v1 | 20230711180003 | Analysis of Chaos and Regularity in the Open Dicke Model | ["David Villaseñor", "Pablo Barberis-Blostein"] | quant-ph | ["quant-ph", "cond-mat.stat-mech", "nlin.CD"] |
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, C.P. 04510 CDMX, Mexico
We introduce a criterion to numerically find the complex spectrum of the open Dicke model and present a detailed analysis when dissipation is due to cavity losses. We select two case studies where the classical isolated system shows regularity and where chaos appears. To characterize the open system as regular or chaotic, we study regions of the spectrum taking windows over the absolute value of its eigenvalues. Our results agree with the Grobe-Haake-Sommers (GHS) conjecture for Markovian dissipative open quantum systems, finding the expected 2D Poisson distribution for regular regimes and the distribution of the Ginibre unitary ensemble (GinUE) for the chaotic ones, respectively.
Analysis of Chaos and Regularity in the Open Dicke Model
David Villaseñor and Pablo Barberis-Blostein
========================================================
§ INTRODUCTION
The way to characterize chaotic behavior in isolated quantum systems comes from the classical realm. Classically, chaos is understood as a strong sensitivity to initial conditions. This sensitivity is typically measured with the Lyapunov exponent, the rate of divergence of two initially close trajectories, which separate as time evolves <cit.>. This idea cannot be extended directly to the quantum realm due to the nature of quantum mechanics. Instead, the spectral fluctuations of quantum systems have traditionally been studied through statistical tests of their eigenvalue spacings <cit.>.
For integrable (regular) quantum systems, the eigenvalue spacings generically follow the Poisson distribution associated with uncorrelated levels, as stated by the Berry-Tabor conjecture <cit.>. On the other hand, for non-integrable (chaotic) quantum systems with time-reversal symmetry, the spacings follow the Wigner distribution (Wigner surmise) associated with level repulsion. The latter behavior was conjectured by Bohigas, Giannoni, and Schmit for quantum systems whose classical limit is chaotic <cit.>, and whose spectral fluctuations are described by the Gaussian orthogonal ensemble (GOE) of random matrix theory <cit.>. This characterization is also applicable to systems without a well-defined classical limit <cit.>.
Open quantum systems in the Markovian approximation are studied through a Lindblad master equation, where the dissipation channels take specific forms <cit.>. In this formalism, the dynamics of the system is dictated by an operator called the Liouvillian, which is in general non-Hermitian and has complex eigenvalues <cit.>. Because the spectrum of the Liouvillian is complex, the criteria that characterize chaotic behavior in isolated systems cannot be generalized straightforwardly to open quantum systems.
Pioneering studies aimed at understanding the chaotic nature of open quantum systems were performed on periodically kicked dissipative tops with a classical limit <cit.>, where it was found that the distribution of the complex-eigenvalue spacings, understood as Euclidean distances in the complex plane, follows a 2D (two-dimensional) Poisson distribution when the classical model is regular. This is understandable to some extent, since the intuitive extrapolation from isolated regular systems suggests that the spacings must be uncorrelated in the plane. In contrast, when the classical model is chaotic, the spacing distribution was found to agree with that of the Ginibre unitary ensemble (GinUE) <cit.>, showing cubic level repulsion <cit.>.
The extrapolation of these results to any dissipative quantum system is nowadays called the Grobe-Haake-Sommers (GHS) conjecture for open quantum systems. It has been shown to be satisfied in other open quantum systems and seems to be universal <cit.>.
In this work, we use the Dicke model to characterize the onset of chaos when dissipation is taken into account. The isolated Dicke model represents the simplest interacting radiation-matter system <cit.>. It was introduced originally to explain superradiance <cit.>, and in recent years it has been used in a broad variety of theoretical studies, including quantum phase transitions <cit.>, classical and quantum chaos <cit.>, quantum scarring <cit.>, quantum localization in phase space <cit.>, non-equilibrium quantum dynamics <cit.>, evolution of out-of-time-ordered correlators (OTOCs) <cit.>, connections between chaos, entanglement <cit.>, and thermalization <cit.>, among others.
The Dicke model can be experimentally realized with setups as diverse as superconducting circuits <cit.>, cavity-assisted Raman transitions <cit.>, trapped ions <cit.>, and others. On the other hand, this model has a well-defined classical limit with two degrees of freedom <cit.>, which, depending on the parameters and energy regions, can show regular or chaotic motion.
Some versions of the open Dicke model with cavity dissipation or collective atomic dissipation have already been studied. The theoretical aspects investigated in these works range from superradiance and quantum phase transitions <cit.> to classical and quantum chaos <cit.>. Some studies have focused on a particular version of the model, such as the two-photon open Dicke model <cit.>. Moreover, experimental realizations of the open Dicke model with optical cavities are reported in Refs. <cit.>.
The main goal of this work is to propose a method to study the complex spectrum of the open Dicke model and to characterize the onset of chaos in the system. Since the Liouville space of the open Dicke model is infinite dimensional, a truncation must be made when the system is solved numerically, which introduces errors in the solutions. In this regard, we propose a convergence criterion for the eigenstates and eigenvalues of the system, which consistently discards those that are not true solutions of the untruncated system. Once this condition is met, we apply the standard methodology of open quantum systems to reveal the appearance of chaotic behavior in the open Dicke model. Although there are some studies focused on the classical limit of the open Dicke model <cit.>, in this work we restrict ourselves to the quantum treatment of chaos.
The article is organized as follows. In Sec. <ref>, we introduce the isolated Dicke model, represented by the Dicke Hamiltonian, and its most important features. Next, we introduce the open Dicke model, represented by the Dicke Liouvillian, which defines the evolution of the system density matrix through a Lindblad master equation in the Markovian approximation. We also mention the most important features of the open system, such as the dissipative phase transition. In Sec. <ref>, we propose a convergence criterion for the eigenstates and eigenvalues of the Dicke Liouvillian, which is implemented via exact diagonalization. In Sec. <ref>, we show the standard procedures to perform the spectral analysis in open quantum systems, as well as a brief review of the spectral analysis in isolated quantum systems. The main results of the article concerning chaos and regularity in the open Dicke model are shown in Sec. <ref>. Finally, our conclusions are summarized and presented in Sec. <ref>.
§ OPEN DICKE MODEL
The Dicke model, which describes the interaction between a set of 𝒩 two-level atoms and a single-mode electromagnetic field without dissipation channels (isolated system), is represented by the Hamiltonian (setting ħ=1) <cit.>
Ĥ_D = ωâ^†â+ω_0Ĵ_z+γ/√(𝒩)(â^†+â)(Ĵ_++Ĵ_-),
where â^† (â) is the bosonic creation (annihilation) operator of the field mode. The set of operators {â^†,â,1̂} satisfy the (Heisenberg-Weyl) H(1) algebra. Moreover, Ĵ_+ (Ĵ_-) is the raising (lowering) collective pseudo-spin operator, defined as Ĵ_±=Ĵ_x± iĴ_y, where Ĵ_x,y,z=(1/2)∑_k=1^𝒩σ̂_x,y,z^k are the collective pseudo-spin operators and σ̂_x,y,z are the Pauli matrices. The set of operators {Ĵ_+,Ĵ_-,Ĵ_z} satisfy the SU(2) algebra in the same way as the Pauli matrices.
The Dicke Hamiltonian can be studied in invariant subspaces specified by the eigenvalues j(j+1) of the squared total pseudo-spin operator Ĵ^2=Ĵ_x^2+Ĵ_y^2+Ĵ_z^2. We use the totally symmetric subspace, which is defined by the maximum pseudo-spin value j=𝒩/2 and includes the ground state. Furthermore, the Dicke Hamiltonian possesses a parity symmetry, [Ĥ_D,Π̂]=0, where the parity operator Π̂ = exp[iπ(â^†â+Ĵ_z+j1̂)] identifies states in two sectors of well-defined parity.
The main parameters of the Dicke Hamiltonian are the radiation frequency of the single-mode electromagnetic field, ω, the atomic transition frequency from the ground state to the first excited state, ω_0, and the coupling strength, γ, which modules the atom-field interaction within the system and reaches the critical value γ_c=√(ωω_0)/2. At this value, the system develops a quantum phase transition going from a normal (γ<γ_c) to a superradiant (γ>γ_c) phase <cit.>. Classically, the Dicke model displays regular or chaotic behavior depending on the latter Hamiltonian parameters (ω,ω_0,γ) and the excitation energies <cit.>.
When dissipation due to cavity losses is included in the system, the open Dicke model can be studied with the standard treatment in the Markovian approximation for open quantum systems, through a Lindblad master equation of the form (setting ħ=1) <cit.>
dρ̂/dt = ℒ̂_Dρ̂ = -i[Ĥ_D,ρ̂] + κ(2âρ̂â^†-{â^†â,ρ̂}),
where ρ̂ defines the state of the system in the Liouville space (density matrix of the system in the Hilbert space of the isolated system), κ is the cavity decay coupling, and ℒ̂_D is the Liouville superoperator or Dicke Liouvillian, which acts over states in the Liouville space (operators in the Hilbert space of the isolated system). The study presented here could be extended to more general dissipation channels, as those including collective atomic decay or considering temperature effects <cit.>. Moreover, the Dicke Liouvillian inherits a weak-parity symmetry of the Hamiltonian <cit.>, since [ℒ̂_D,𝒫̂]=0, where the parity superoperator 𝒫̂ρ̂=Π̂ρ̂Π̂^† identifies states with well-defined parity in the Liouville space.
On the other hand, when cavity dissipation is considered in the open Dicke model, a quantum dissipative phase transition takes place at the critical coupling strength <cit.>
γ_c^os=(√(ωω_0)/2)√(1+κ^2/ω^2),
defining, analogously to the isolated system, two phases in the open system, a normal (γ<γ_c^os) and a superradiant (γ>γ_c^os) dissipative phase, respectively.
In this work, we use dimensionless Hamiltonian parameters scaled by the cavity decay coupling κ, (ω̃,ω̃_0,γ̃)=(ω/κ,ω_0/κ,γ/κ). For convenience, we drop the tilde from these scaled parameters in what follows. We choose the resonant case ω=ω_0=1, such that the critical coupling strengths of the isolated and open systems are γ_c=0.5 and γ_c^os=1/√(2)≈0.707, respectively. With the selected parameters, we consider two case studies, one with a coupling strength in the normal phase (γ=0.2) and another in the superradiant phase (γ=1) of the open system. For these values, the classical isolated system shows regular and chaotic motion, respectively <cit.>. Moreover, in order to perform the spectral analysis we select the sector of eigenstates (eigenvalues) with positive parity in the Liouville space (see App. <ref> for an explanation of the Dicke Liouvillian with well-defined parity). To begin with a systematic analysis, we choose the smallest system size j=1 (𝒩=2 atoms).
§ CONVERGENCE OF EIGENVALUES AND EIGENSTATES OF THE OPEN DICKE MODEL
The isolated Dicke model has an infinite-dimensional Hilbert space composed of a finite atomic subspace with dimension 2j+1 and an infinite bosonic subspace. To solve this model numerically, the Hilbert space must be truncated at a finite photon number n_max, generating a finite Hilbert space with dimension 𝒟_H=(2j+1)(n_max+1). In the Liouville representation, the space (Liouville space) is also infinite, and a truncation is likewise needed to solve it numerically. A finite Liouville space can be obtained from the truncated Hilbert space, with the Liouville basis composed of all the projectors built from the basis states of the truncated Hilbert space. Thus, the Liouville space dimension is the square of the Hilbert space dimension, 𝒟_L=𝒟_H^2. Nevertheless, the eigenstates and eigenvalues of the truncated matrix are not necessarily eigenstates and eigenvalues of the full infinite matrix. In this section we introduce a convergence criterion to find eigenstates (eigenvalues) that are numerically close to the eigenstates (eigenvalues) of the full matrix.
§.§ Convergence of Eigenvalues
A usual way to define convergence of eigenvalues in infinite-dimensional spaces is comparing the change of the eigenvalues ε_k for two truncation values, n_max and n_max+1,
Δε_k=|ε_k^n_max+1-ε_k^n_max| ≤ϵ,
where ϵ is a tolerance value. Thus, the eigenvalue ε_k is rejected when the change Δε_k exceeds the threshold ϵ.
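As a minimal sketch of this criterion (not code from the original works), the comparison between two truncations can be implemented as follows, assuming both spectra are real and sorted in increasing order.

import numpy as np

def converged_by_eigenvalue(evals_nmax, evals_nmax_plus1, eps):
    """Keep eigenvalues whose change under n_max -> n_max + 1 is at most eps."""
    n = min(len(evals_nmax), len(evals_nmax_plus1))
    delta = np.abs(np.asarray(evals_nmax_plus1)[:n] - np.asarray(evals_nmax)[:n])
    mask = delta <= eps
    return np.asarray(evals_nmax)[:n][mask], mask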
This method has been successfully tested on the eigenvalues of the Dicke Hamiltonian <cit.>, but its implementation becomes computationally demanding when the truncation size of the Hamiltonian matrix increases, since two diagonalizations are needed. For this reason, an alternative convergence criterion based on the eigenstates of the truncated Hamiltonian matrix was proposed, showing an equivalence with the eigenvalue convergence criterion and using only one diagonalization of the system <cit.>.
When extending the eigenvalue convergence criterion to the Dicke Liouvillian, we found that it is not applicable. We suspect that the complex spectrum of the Dicke Liouvillian is a continuous spectrum, since each diagonalization of the system with a fixed truncation value n_max generates a new set of eigenvalues, which were not present in the previous diagonalization with a smaller truncation value. It is well known that, for complex non-Hermitian matrices, a set of eigenvalues in the complex plane is always missed <cit.>.
§.§ Convergence of Eigenstates
Since the eigenvalue convergence criterion is not applicable to the Dicke Liouvillian, we propose in this work an extension of the eigenstate convergence criterion of the Dicke Hamiltonian to the Dicke Liouvillian. A detailed description of this criterion for the Dicke Hamiltonian and its validity are presented in Ref. <cit.>. The eigenstates of the Dicke Hamiltonian can be expanded in an arbitrary basis. Typically, the Fock basis |f⟩=|n;j,m_z⟩ (with n=0,1,…,n_max and m_z=-j,-j+1,…,j-1,j) is used to diagonalize the Hamiltonian, and the eigenstates of the system, Ĥ_D|E_k⟩=E_k|E_k⟩, have the following representation
|E_k⟩ = ∑_f=1^𝒟_Hc_f^k|f⟩ = ∑_n=0^n_max∑_m_z=-j^jc_n,m_z^k|n;j,m_z⟩,
where c_f^k=⟨ f|E_k⟩ or c_n,m_z^k=⟨ n;j,m_z|E_k⟩, and the eigenstates are arranged in increasing order of their real eigenvalues E_k≤ E_k+1, E_k∈ℝ.
The probability to have n photons in the eigenstate |E_k⟩ is given by
p_n^k = ∑_m_z=-j^j|⟨ n;j,m_z|E_k⟩|^2 = ∑_m_z = -j^j|c_n,m_z^k|^2,
and when this probability is evaluated for all the values n=0,1,…,n_max, it can be interpreted as a projection of the eigenstate wave function over the Fock basis. In this way, the eigenstate convergence criterion requires a negligible probability at the maximum photon number n_max or, equivalently, that the eigenstate wave function be contained in the truncated Hilbert space, that is, that all coefficients contributing to the wave function lie inside the truncated Hilbert space
p_n_max^k = ∑_m_z=-j^j|c_n_max,m_z^k|^2≤δ,
where δ is a tolerance value.
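A minimal numerical sketch of this criterion is shown below (it is not code from the original work). It assumes that the eigenvector coefficients are stored in the Fock-basis ordering f=(2j+1)n+m_z+j+1 introduced in the appendix, so that a simple reshape separates the photon index n from the atomic index m_z.

import numpy as np

def photon_marginal(c, j, n_max):
    """Probabilities p_n^k of having n photons in a Dicke-Hamiltonian eigenvector c."""
    dim_atom = int(2 * j + 1)
    c = np.asarray(c).reshape(n_max + 1, dim_atom)   # rows: n, columns: m_z
    return np.sum(np.abs(c)**2, axis=1)

def hamiltonian_state_converged(c, j, n_max, delta=1e-3):
    """Eigenstate convergence criterion: negligible weight at n = n_max."""
    return photon_marginal(c, j, n_max)[-1] <= delta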
The last convergence criterion can be extended for the eigenstates of the Dicke Liouvillian. By considering the Liouville basis as the projectors of the Fock basis |f',f⟩⟩=|f'⟩⟨ f|=|n';j,m'_z⟩⟨ n;j,m_z| (with n',n=0,1,…,n_max and m'_z,m_z=-j,-j+1,…,j-1,j), we can diagonalize the Liouvillian and the eigenstates of the open system, ℒ̂_D|λ_k⟩⟩=λ_k|λ_k⟩⟩, take the form
|λ_k⟩⟩ = ∑_f',f=1^𝒟_Hc_f',f^k|f',f⟩⟩ = ∑_f'=1^𝒟_H∑_f=1^𝒟_Hc_f',f^k|f'⟩⟨ f|
= ∑_n',n=0^n_max∑_m'_z,m_z=-j^jc_n',m'_z,n,m_z^k|n';j,m'_z⟩⟨ n;j,m_z|,
where c_f',f^k=⟨⟨ f',f|λ_k⟩⟩, and the eigenstates are arranged in increasing order of their complex-eigenvalue absolute values |λ_k|≤|λ_k+1|, λ_k∈ℂ.
Analogously to the isolated system, we can define an extension of Eq. (<ref>) for the eigenstate |λ_k⟩⟩ of the open system, such that we have two weight distributions
P_1,n'^k = ∑_n=0^n_max∑_m'_z,m_z = -j^j|c_n',m'_z,n,m_z^k|^2,
P_2,n^k = ∑_n'=0^n_max∑_m'_z,m_z = -j^j|c_n',m'_z,n,m_z^k|^2,
which can be interpreted as projections of the eigenstate wave function over the Liouville basis for all the values n',n=0,1,…,n_max.
Thus, the extension of the eigenstate convergence criterion to the Dicke Liouvillian requires a negligible contribution of the weight distributions at the maximum photon number n_max or, equivalently, that the eigenstate wave function be contained in the truncated Liouville space for both projections
P_1,n_max^k = ∑_n=0^n_max∑_m'_z,m_z = -j^j|c_n_max,m'_z,n,m_z^k|^2≤Δ,
P_2,n_max^k = ∑_n'=0^n_max∑_m'_z,m_z = -j^j|c_n',m'_z,n_max,m_z^k|^2≤Δ,
where Δ is a tolerance value.
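The two tail weights can be evaluated in the same spirit. The sketch below assumes an eigenvector of the truncated Liouvillian written in the full (not parity-projected) Liouville basis, with the ket index f' varying slower than the bra index f; both conventions are illustrative and must match the storage convention actually used in the diagonalization.

import numpy as np

def liouvillian_tail_weights(c, j, n_max):
    """Tail weights P_{1,n_max} and P_{2,n_max} of a Dicke-Liouvillian eigenvector c."""
    d_atom = int(2 * j + 1)
    d_h = (n_max + 1) * d_atom
    w = np.abs(np.asarray(c).reshape(d_h, d_h))**2        # |c_{f',f}|^2
    w = w.reshape(n_max + 1, d_atom, n_max + 1, d_atom)   # indices (n', m'_z, n, m_z)
    P1 = w.sum(axis=(1, 2, 3))   # weight distribution over n'
    P2 = w.sum(axis=(0, 1, 3))   # weight distribution over n
    return P1[-1], P2[-1]

def liouvillian_state_converged(c, j, n_max, Delta=1e-3):
    p1_tail, p2_tail = liouvillian_tail_weights(c, j, n_max)
    return (p1_tail <= Delta) and (p2_tail <= Delta)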
§.§ Convergence of Eigenstates vs. Eigenvalues
In order to show that the convergence criterion proposed in the last section for eigenstates of the Dicke Liouvillian can be used to select their corresponding well-converged eigenvalues, we present in this section numerical results implementing it.
We select a set of truncation values of the Liouville space n_max=10,20,30,40,50,60 and diagonalize the Dicke Liouvillian for the parameters presented previously ω=ω_0=j=κ=1. We perform this procedure for two cases γ=0.2 and γ=1 to ensure we are diagonalizing the system in the normal (γ<γ_c^os) and superradiant (γ>γ_c^os) dissipative phase, respectively.
We use the Liouville basis with positive parity (P=+1) to perform the convergence analysis and at the same time, to perform the spectral analysis in Sec <ref>. See App. <ref> for a complete description on how to diagonalize the Dicke Liouvillian using the Liouville basis and how to select the basis with well-defined parity. For the well-defined parity Liouville basis, the dimension of the Liouville space 𝒟_L,P (which defines the number of eigenstates N_ES) is given by
N_ES = 𝒟_L,P=±1 =
𝒟_H^2/2 if (-1)^n_max=-1,
(𝒟_H^2±1)/2 if (-1)^n_max=+1,
where 𝒟_H=(2j+1)(n_max+1) is the dimension of the Hilbert space of the isolated system.
In Fig. <ref> (a) we show the number of well-converged eigenstates N_CES selected under the eigenstate convergence criterion (see Eqs. (<ref>) and (<ref>)) with a tolerance value Δ=10^-3, for all the truncation values n_max=10,20,30,40,50,60. By fitting the numerical results, we find a quadratic behavior for the number of converged eigenstates
N_CES = A_1n_max+A_2n_max^2,
where A_1=-11.49,-15.60 and A_2=3.28,2.77 identify the fitting values for each coupling strength γ=0.2,1.
Using the last analytical expression, we can find the asymptotic value in the limit n_max→∞ of the ratio of converged eigenstates to the total number of eigenstates obtained in each diagonalization
lim_n_max→∞N_CES/N_ES = 2A_2/(2j+1)^2.
For the parameters A_2=3.28,2.77 we find the asymptotic values 0.729 and 0.616, respectively. In Fig. <ref> (b) we show the ratio N_CES/N_ES as a function of the truncation value n_max, together with the corresponding asymptotic values for the cases γ=0.2,1. We see in this figure that the fraction of converged eigenstates is bounded in both cases, tending asymptotically to a constant value in the limit n_max→∞.
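The quadratic fit and the asymptotic ratio can be reproduced with a few lines of Python. The counts below are illustrative values generated from the reported γ=0.2 coefficients, not the actual data of the figure.

import numpy as np

n_max = np.array([10, 20, 30, 40, 50, 60], dtype=float)
# Illustrative counts built from the fitted law with A1 = -11.49, A2 = 3.28;
# in practice these numbers come from the convergence analysis itself.
N_CES = np.round(-11.49 * n_max + 3.28 * n_max**2)

# Least-squares fit of N_CES = A1*n_max + A2*n_max^2 (no constant term)
X = np.column_stack([n_max, n_max**2])
(A1, A2), *_ = np.linalg.lstsq(X, N_CES, rcond=None)

j = 1
print(f"A1 = {A1:.2f}, A2 = {A2:.2f}")
print(f"asymptotic N_CES/N_ES = {2 * A2 / (2 * j + 1)**2:.3f}")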
Now, we consider an eigenvalue λ_k well converged when its corresponding eigenstate |λ_k⟩⟩ fulfills the above criterion. Thus, we take the converged eigenvalues for the set n_max=10,20,30,40,50,60 and present the histogram of their absolute values |λ_k| in Fig. <ref> for both cases γ=0.2,1. These histograms can be interpreted as a density of states for the absolute value |λ_k|. In both cases γ=0.2,1, we find that increasing the truncation value n_max preserves the behavior of the density of states. Furthermore, the number of converged eigenvalues is higher for the low coupling strength γ=0.2 than for the high one γ=1. This finding is intuitive when extrapolating from the isolated system: in general, there are fewer converged eigenstates (eigenvalues) when the coupling strength is high, since the eigenstate wave functions are more spread over the diagonalization basis and the convergence criterion is harder to fulfill <cit.>.
It is important to highlight that, when the complete sets of eigenvalues (converged and not converged) are taken into account, the overall statistical behavior is the same for all the truncation values of the Liouville space. That is, our method only discards eigenvalues whose eigenstates do not fulfill the proposed eigenstate convergence criterion (see Eqs. (<ref>) and (<ref>)), but the overall statistical behavior of the eigenvalues remains the same. This feature allows us to work with a reasonable convergence criterion for eigenstates (eigenvalues) of the Dicke Liouvillian.
To explain our convergence criterion in more depth, we focus on a single truncation value, the largest one, n_max=60, for which we obtained N_CES=11030,9165 converged eigenstates (eigenvalues) for the cases γ=0.2,1 with a tolerance value Δ=10^-3. In Fig. <ref> we show the case γ=0.2: the complex spectrum ordered by the eigenvalue absolute value |λ_k| is presented in panel (a1), where the black dots represent the complete set of eigenvalues, while the blue dots represent the converged ones.
Panels (a2)-(a3) in Fig. <ref> show the convergence criterion computed for all eigenstates |λ_k⟩⟩ (see Eqs. (<ref>) and (<ref>)). As before, the black dots represent the criterion computed for the complete set of eigenstates, while the blue dots correspond to the converged ones. In these panels, a fraction of eigenstates for which the criterion is apparently fulfilled can be seen beyond k=15000, and even in the region k>10000 where the lack of convergence sets in. Nevertheless, to avoid ambiguities in selecting them, and recalling that they are ordered by increasing eigenvalue absolute value |λ_k|, we only keep eigenstates up to the first one that does not fulfill the criterion, discarding the remaining ones.
In Fig. <ref> (b1) we show the spectrum in the complex plane, where the region of the well-converged spectrum appears more clearly as a stain. In this panel and in panels (a1)-(a3), a 3D diamond marks a particular eigenvalue (eigenstate) with label k=6000. In panels (b2)-(b3) we show its wave-function projections over the Liouville basis (see Eqs. (<ref>) and (<ref>)), where we can see, as argued previously, that the wave function is contained in the truncated Liouville space for both projections, thus ensuring its convergence.
We repeat the last analysis for the case γ=1, showing the results in Fig. <ref>. For this case, we see the same overall behavior with slight differences. For the high coupling strength we find fewer converged eigenvalues than for the low one. This feature is well understood by analyzing panel (b2) in Fig. <ref>: for this projection, the eigenstate wave function looks more spread over the Liouville basis. This is a feature typically shown by chaotic eigenstates, which spread over all the accessible Hilbert (Liouville) space, resembling a random state <cit.>. Panel (b3) in Fig. <ref> shows a projection of the eigenstate wave function that is not spread at all over the Liouville basis, revealing that the eigenstate wave function has a very complex structure in Liouville space; these projections are thus useful tools to understand it.
§ SPECTRAL ANALYSIS AND QUANTUM CHAOS IN OPEN QUANTUM SYSTEMS
The way to perform the spectral analysis for open quantum systems with complex spectra was first outlined in Ref. <cit.>. The studies were later extended to other open quantum systems, which suggests that the behavior regarding regularity and chaos in dissipative systems is universal <cit.>. We follow the procedure presented in these references in the present work.
§.§ Eigenvalue Spacing Distributions for Regular and Chaotic Complex Spectra
In isolated quantum systems, where the Hamiltonians are Hermitian with real eigenvalues, ε_k∈ℝ, the notion of spacing comes from the fact that we can arrange a finite set of eigenvalues in increasing order, ε_k≤ε_k+1, where the spacing is defined as the separation between an eigenvalue ε_k and its nearest neighbor ε_k+1, s_k=ε_k+1-ε_k. Performing an unfolding procedure of the spectrum, we can study its spectral fluctuations using the nearest-neighbor spacing distribution, which typically follows the Poisson distribution, P_P(s)=exp(-s), for integrable (regular) systems and the Wigner-Dyson surmise, P_WD(s) = (π/2)s exp(-π s^2/4), for non-integrable (chaotic) ones <cit.>.
For open quantum systems, the Liouvillians are non-Hermitian and the eigenvalues are complex, φ_k∈ℂ, such that the standard treatment to analyze spectral fluctuations is no longer applicable. Since the eigenvalues are complex numbers, they can be plotted in the complex plane (see Fig. <ref> (b1) and Fig. <ref> (b1) for the complex eigenvalues of the Dicke Liouvillian). For these systems, the spacing is understood as the minimal Euclidean distance in the complex plane between an eigenvalue φ_k and its nearest neighbor φ_k^1N, s_k=|φ_k-φ_k^1N|. After performing an unfolding procedure for complex spectra (see App. <ref> for a complete explanation of this technique), we can study, analogously to isolated systems, the spectral fluctuations of open systems.
Typically, the nearest-neighbor spacing distribution for integrable (regular) open quantum systems follows a 2D Poisson distribution <cit.>, which is given by
P_2DP(s) = π/2s e^-π s^2/4.
Note that this distribution is functionally the same as the Wigner-Dyson surmise, which characterizes the chaotic cases in isolated quantum systems.
On the other hand, for non-integrable (chaotic) open quantum systems, the nearest-neighbor spacing distribution follows the distribution of the Ginibre unitary ensemble (GinUE) <cit.>, given by
P_GinUE(s) = ∏_k=1^∞Γ(1+k,s^2)/k!×∑_k'=1^∞2s^2k'+1e^-s^2/Γ(1+k',s^2),
where Γ(k,z) = ∫_z^∞dt t^k-1e^-t is the incomplete Gamma function, ∫_0^∞ ds P_GinUE(s)=1, and s̅ = ∫_0^∞ ds s P_GinUE(s)≈1.1429. In order to compare this distribution with numerical values, a scaling must be made to ensure its first moment is unity
P_GinUE(s) = s̅P_GinUE(s̅s),
with ∫_0^∞ ds P_GinUE(s) = 1 and ∫_0^∞ ds s P_GinUE(s) = 1.
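For reference, the nearest-neighbor spacings and the two benchmark distributions can be implemented as in the sketch below (illustrative only): the neighbor search is a brute-force O(N^2) computation and the GinUE product and sum are truncated at a finite number of terms, which is sufficient for spacings of order a few.

import numpy as np
from scipy.special import gammaincc, factorial

def nearest_neighbor_spacings(eigs):
    """Euclidean distance from each complex eigenvalue to its nearest neighbor."""
    eigs = np.asarray(eigs, dtype=complex)
    d = np.abs(eigs[:, None] - eigs[None, :])
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def p_2d_poisson(s):
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)

def p_ginue(s, kmax=100):
    """GinUE spacing distribution, product and sum truncated at kmax terms."""
    s = np.atleast_1d(s).astype(float)
    k = np.arange(1, kmax + 1)[:, None]
    q = gammaincc(k + 1, s[None, :]**2)              # Gamma(1+k, s^2)/k!
    prod = np.prod(q, axis=0)
    terms = 2.0 * s**(2 * k + 1) * np.exp(-s**2) / (q * factorial(k))
    return prod * terms.sum(axis=0)

# Rescaled version with unit first moment, using s_bar = 1.1429
s_bar = 1.1429
p_ginue_scaled = lambda s: s_bar * p_ginue(s_bar * np.atleast_1d(s))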
In the limit s→0, both distributions tend to the power law
P_β(s) ∝ s^β,
where the power β=1,3 identifies the degree of level repulsion, linear (regular) for integrable cases and cubic for non-integrable (chaotic) ones, which seems to be universal in open quantum systems <cit.>.
In order to corroborate that a data set comes from a given distribution, the well-known Anderson-Darling test can be implemented for the spacings s_k=|φ_k-φ_k^1N|, by computing the parameter <cit.>
A^2 = - N - ∑_k=1^N2k-1/N(ln[F_X(s_k)] + ln[1-F_X(s_N+1-k)]),
where the spacings are arranged in increasing order s_k≤ s_k+1, and F_X(s) = ∫_0^sds'P_X(s') is the cumulative distribution function of the probability distribution P_X(s) with X=2DP,GinUE. When the Anderson-Darling parameter is greater than a threshold, A^2>2.5, we can conclude with 95% confidence that the data set does not come from the given probability distribution.
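A direct transcription of the Anderson-Darling parameter is sketched below; the cumulative distribution F_X is obtained by numerical integration of the candidate density, which is assumed to be a scalar-valued function such as P_2DP above.

import numpy as np
from scipy.integrate import quad

def anderson_darling(spacings, pdf):
    """Anderson-Darling parameter A^2 for unfolded spacings tested against pdf."""
    s = np.sort(np.asarray(spacings, dtype=float))
    N = len(s)
    F = np.array([quad(pdf, 0.0, sk)[0] for sk in s])   # cumulative distribution
    F = np.clip(F, 1e-12, 1.0 - 1e-12)                  # avoid log(0)
    k = np.arange(1, N + 1)
    return -N - np.sum((2 * k - 1) / N * (np.log(F) + np.log(1.0 - F[::-1])))

# Example: A2 = anderson_darling(spacings, p_2d_poisson); A2 > 2.5 rejects the
# candidate distribution at roughly 95% confidence.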
§.§ Ratio of Consecutive Eigenvalue Spacings for Complex Spectra
The ratio of consecutive eigenvalue spacings was introduced to study spectral fluctuations in isolated systems with real eigenvalues <cit.>. The advantage of this test is that the spectra can be studied without implementing unfolding procedures, which can be ambiguous in some cases. The last test can be extended to open systems with complex eigenvalues. The procedure is detailed in Ref. <cit.>, where the complex ratio takes the form
Z_k=r_ke^iθ_k=φ_k^1N-φ_k/φ_k^2N-φ_k,
where φ_k^1N and φ_k^2N are the first and second nearest neighbor of an eigenvalue φ_k, respectively.
The generic results from isolated quantum systems, where the eigenvalues of integrable quantum systems are uncorrelated (Poisson distribution) and those of non-integrable ones show level repulsion (Wigner-Dyson surmise), can be extended to open quantum systems. In open quantum systems, the sets of eigenvalues of integrable systems (2D Poisson distribution) are uncorrelated in the complex plane, showing a flat (delocalized) distribution. On the other hand, the sets of eigenvalues of non-integrable systems (GinUE distribution) show level repulsion, which manifests itself as a suppression of the distribution at the origin and at small angles <cit.>.
The expectation values of r_k=|Z_k| and cos(θ_k)=Re(Z_k)/r_k can be computed using the marginal distributions of the 2DP and GinUE distributions. The following results are obtained: ⟨ r_k⟩_X = 2/3, 0.74 and -⟨cos(θ_k)⟩_X = 0, 0.24 for X=2DP,GinUE, which can be used as benchmarks to validate numerical results.
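The complex spacing ratio and its benchmark averages can be obtained with the short sketch below (brute-force nearest-neighbor search, illustrative only).

import numpy as np

def complex_spacing_ratios(eigs):
    """Z_k = (phi_k^1N - phi_k)/(phi_k^2N - phi_k) from first and second neighbors."""
    eigs = np.asarray(eigs, dtype=complex)
    d = np.abs(eigs[:, None] - eigs[None, :])
    np.fill_diagonal(d, np.inf)
    order = np.argsort(d, axis=1)
    nn1, nn2 = eigs[order[:, 0]], eigs[order[:, 1]]
    return (nn1 - eigs) / (nn2 - eigs)

def ratio_statistics(eigs):
    Z = complex_spacing_ratios(eigs)
    r = np.abs(Z)
    cos_theta = Z.real / r
    # compare with (2/3, 0) for 2D Poisson and (0.74, 0.24) for GinUE
    return r.mean(), -cos_theta.mean()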
§ CHAOS AND REGULARITY IN THE OPEN DICKE MODEL
In this section we show numerical results characterizing the complex spectrum of the Dicke Liouvillian as chaotic or regular. To achieve this goal we choose the spectrum computed with the highest truncation value n_max=60, which was already presented in Sec. <ref>. For this truncation value (n_max=60) we get N_CES=11030,9165 converged eigenstates (eigenvalues) for the coupling strengths γ=0.2,1, with a tolerance value Δ=10^-3 (see Fig. <ref> (a1) and Fig. <ref> (a1), respectively).
As a first case study, we present the case γ=1, for which the isolated classical system shows chaotic behavior <cit.>. The complex spectrum of the Dicke Liouvillian must be studied with care, since there is a well-studied characterization of the real spectrum of the Dicke Hamiltonian for coupling strengths in the superradiant phase (γ>γ_c), where at low energies the spectral fluctuations are regular (Poisson distribution), while at high energies they become chaotic (Wigner-Dyson surmise) <cit.>. That is, the real spectrum of the isolated system goes from regularity at low energies to chaos at high energies. This can also be understood from the classical Dicke Hamiltonian: at low energies near the ground-state energy, the system can be approximated as a harmonic oscillator, yielding regular motion, while at high energies the system develops chaotic motion <cit.>.
Based on this background, it is intuitive that the open system could inherit some properties of the isolated system; that is, the complex spectrum of the Dicke Liouvillian must be studied by regions, where chaotic or regular behavior can arise, as occurs with the real spectrum of the Dicke Hamiltonian. The way to detect these regions is not obvious, since we have not yet studied a classical limit of the open Dicke model that could serve as a benchmark.
Next, we propose to study the complex spectrum of the Dicke Liouvillian taking regions organized by the increasing absolute value of its eigenvalues, as shown in Fig. <ref> (b). As argued previously, these histograms can be interpreted as a density of states for the eigenvalue absolute value |λ_k|. We take moving windows of 500 consecutive eigenvalues and apply the Anderson-Darling test to the eigenvalues contained in these windows, by computing the Anderson-Darling parameter (see Eq. (<ref>)). The moving windows run from the lowest absolute value to the largest, covering the entire converged spectrum. This procedure is shown in Fig. <ref> (a), where the Anderson-Darling test was computed for the 2D Poisson (blue curve) and the GinUE (red curve) distribution, respectively (see Eqs. (<ref>) and (<ref>)). In this figure it is clear that the complex spectrum of the Dicke Liouvillian is always well described by the GinUE distribution along the eigenvalue absolute value, with slight deviations, confirming the chaotic behavior of the spectrum for high coupling strengths.
This statement is corroborated by plotting the spacing distribution of the eigenvalues contained in two selected windows with mean value |λ|=18.9,63.5 in Figs. <ref> (b1)-(b2), respectively. In both panels we can see that the eigenvalues contained in each window follow the GinUE distribution (see Eq. (<ref>)), since the Anderson-Darling parameter does not cross the threshold A^2=2.5. Furthermore, we compute the complex ratio of consecutive eigenvalue spacings for the eigenvalues contained in the same windows (see Eq. (<ref>)) and plot them in Figs. <ref> (c1)-(c2). We can see in both panels that the point distribution avoids the origin, as expected, and is depleted at small angles. The same panels show the numerical expectation values ⟨ r_k⟩ and -⟨cos(θ_k)⟩ for each set of eigenvalues, which seem to agree with the theoretical expectation values from the GinUE distribution. The deviations are attributed to the small number of eigenvalues contained in each window, and should be suppressed when the system size increases or when wider windows containing more eigenvalues are used.
Now, we present the second case γ=0.2, where the isolated classical system shows regular motion <cit.>. For the real spectrum of the Dicke Hamiltonian with coupling strengths in the normal phase (γ<γ_c), the spectral fluctuations are generally regular (Poisson distribution) <cit.>. Nevertheless, we follow the same method of studying the complex spectrum of the Dicke Liouvillian taking regions organized by the increasing eigenvalue absolute value (see Fig. <ref> (a)).
We take the same moving windows of 500 consecutive eigenvalues and apply the same procedure described above for the case γ=1. In Fig. <ref> (d) we show the Anderson-Darling test, where we see a more interesting behavior than in the previous case. For the low coupling case, the complex spectrum of the Dicke Liouvillian behaves regularly, following the 2D Poisson distribution at low eigenvalue absolute values and confirming the regular behavior of the spectrum. However, there is a transition region around |λ|∼30 where this integrability breaks down. Beyond this value, there are some fluctuations until the chaotic behavior of the spectrum seems to be reached for |λ|>60. This finding is important, since it suggests that a low coupling strength does not guarantee the regularity of the system over the full complex spectrum.
As in the previous case, we plot the spacing distribution of the eigenvalues contained in two selected windows with mean value |λ|=18.2,64.5 in Figs. <ref> (e1)-(e2), respectively. Here, we can see that the eigenvalues contained in the first window follow the 2D Poisson distribution, while those in the second one seem to follow the GinUE distribution (see Eqs. (<ref>) and (<ref>)). For the first window, the Anderson-Darling parameter does not cross the threshold A^2=2.5, while for the second window it is at the limit. Moreover, we plot in Figs. <ref> (f1)-(f2) the complex ratio of consecutive eigenvalue spacings for the eigenvalues contained in the corresponding windows. We can see in panel (f1) that the point distribution is delocalized over the complex plane, as expected, while in panel (f2) the point distribution seems to be suppressed at small angles, though not at the origin. We compute, for both cases, the numerical expectation values ⟨ r_k⟩ and -⟨cos(θ_k)⟩ for each set of eigenvalues. The first case seems to agree with the theoretical expectation values from the 2D Poisson distribution, while the second one agrees with those from the GinUE distribution. The deviations are again attributed to the small number of eigenvalues contained in each window.
Now, we repeat the previous analysis with a larger system size to corroborate our statements. We take the system size j=3 (𝒩=6 atoms) with the same truncation value n_max=60, obtaining N_CES=36989,59001 converged eigenstates (eigenvalues) for the coupling strengths γ=0.2,1, with a tolerance value Δ=10^-3.
In Fig. <ref> we show the results, first for the coupling strength γ=1 and then for γ=0.2. To perform the Anderson-Darling test, we take moving windows of 1500 consecutive eigenvalues. For the case γ=1, we can see in Fig. <ref> (a) that the chaotic behavior of the complex spectrum of the Dicke Liouvillian is again confirmed, and the deviations of the Anderson-Darling parameter computed for the GinUE distribution decrease, showing an almost constant curve. Furthermore, in Figs. <ref> (b1)-(b2) we plot the spacing distribution of the eigenvalues contained in two selected windows with mean value |λ|=20.2,54.2, respectively. We see that not only do the spacing distributions follow the GinUE distribution, but also the ratio of consecutive eigenvalue spacings, plotted in Figs. <ref> (c1)-(c2) for both windows, shows more clearly the avoided regions at the origin and at small angles, as expected. Likewise, the agreement of the numerical expectation values ⟨ r_k⟩ and -⟨cos(θ_k)⟩ with the theoretical ones improves, as we have argued.
Taking the same moving windows of 1500 consecutive eigenvalues for the case γ=0.2, we can see in Fig. <ref> (d) the Anderson-Darling test, which confirms the regular behavior of the complex spectrum of the Dicke Liouvillian at low eigenvalue absolute values. Furthermore, the transition to chaos is confirmed at high eigenvalue absolute values. This is an interesting feature of the open system, since the transition to chaos develops gradually until the system behaves chaotically.
For the eigenvalues contained in two selected windows with mean value |λ|=20.3,76.5, we plot in Figs. <ref> (e1)-(e2) the spacing distribution. The first window follows the 2D Poisson distribution, while the second one follows the GinUE distribution, as confirmed by the Anderson-Darling parameter. Moreover, the ratio of consecutive eigenvalue spacings for both windows is plotted in Figs. <ref> (f1)-(f2), showing the complete delocalization of the point distribution in the complex plane for the first window and the avoided regions at the origin and at small angles for the second one. Indeed, we also see a better agreement of the numerical expectation values ⟨ r_k⟩ and -⟨cos(θ_k)⟩ with the theoretical ones.
§ CONCLUSIONS
We have implemented a convergence criterion for the eigenstates and eigenvalues of the Dicke Liouvillian based on the spread of the eigenstate wave functions over the Liouville basis. This method seems to be a reasonable proposal, since the number of converged solutions increases with the truncation value of the Liouville space dimension and the statistical behavior of the well-converged eigenvalues remains the same for each truncation value of the Liouville space. Moreover, this criterion avoids the ambiguity that arises when a convergence criterion based on the change of the eigenvalues is implemented, since the complex spectrum of the Dicke Liouvillian seems to be continuous, and increasing the Liouville space dimension leads to the appearance of new eigenvalues not present at lower dimensions.
The onset of chaos in the open Dicke model was successfully characterized by applying the standard spectral analysis proposed for dissipative quantum systems. For the high coupling strength case (γ=1), we detected the GinUE distribution for the eigenvalue spacings, typical of chaotic open quantum systems, over the entire range of eigenvalue absolute values of the complex spectrum. On the other hand, for the low coupling strength case (γ=0.2), we identified a richer structure of the complex spectrum, since at low eigenvalue absolute values we detect the 2D Poisson distribution for the eigenvalue spacings, typical of regular open quantum systems. Nevertheless, there is a regime where this integrability is broken and the onset of chaos arises in the system, implying that low coupling strengths do not guarantee the regularity of the system for the whole spectrum.
In general, we verify that the GHS conjecture is valid in the open Dicke model, confirming its universality, when the spectral analysis of the Dicke Liouvillian is done by regions of its eigenvalues. We think that these studies are a first step toward completely characterizing the phenomenon of chaos in the open Dicke model. Further investigations adding other kinds of dissipative channels, such as collective atomic decay or temperature effects, are proposed as future work. We also think that the methods developed in this work can be extended to other open quantum systems with an infinite Liouville space.
§ ACKNOWLEDGMENTS
We thank Jorge G. Hirsch for the reading of this work and his valuable comments and suggestions. We acknowledge the support of the Computation Center - IIMAS, in particular to Adrián Chavesti. In the same way we acknowledge the support of the Computation Center - ICN, in particular to Enrique Palacios, Luciano Díaz, and Eduardo Murrieta. This work was supported by DGAPA-PAPIIT-UNAM under grant No. IG101421 from Mexico. D.V. acknowledges financial support from the postdoctoral fellowship program DGAPA-UNAM.
§ DIAGONALIZATION OF THE DICKE LIOUVILLIAN
A Liouvillian can be diagonalized using the tetradic notation <cit.>, in which a matrix representation of the system is obtained using an arbitrary basis of the form
|k,l⟩⟩ = |k⟩⟨ l|,
where |∙⟩⟩ denotes a vector in the Liouville space composed of all the projectors of the Hilbert-space states |∙⟩. For an N-dimensional basis of the Hilbert space, there will be an N^2-dimensional basis of the Liouville space.
Using the last procedure, the matrix representation of the Dicke Liouvillian takes the form
L_k'l',kl^D = ⟨⟨ k',l'|ℒ̂_D|k,l⟩⟩ = Tr{|l'⟩⟨ k'|ℒ̂_D|k⟩⟨ l|}
= ∑_i⟨ i|l'⟩⟨ k'|ℒ̂_D|k⟩⟨ l|i⟩ = L_k'l',kl^D,γ + L_k'l',kl^D,κ,
where
L_k'l',kl^D,γ = -i⟨⟨ k',l'|[Ĥ_D,ρ̂]|k,l⟩⟩
= -i(⟨ k'|Ĥ_D|k⟩δ_l,l'-⟨ l|Ĥ_D|l'⟩δ_k',k),
and
L_k'l',kl^D,κ = κ⟨⟨ k',l'|(2âρ̂â^†-{â^†â,ρ̂})|k,l⟩⟩
= 2κ⟨ k'|â|k⟩⟨ l|â^†|l'⟩ - κ(⟨ k'|â^†â|k⟩δ_l,l' + ⟨ l|â^†â|l'⟩δ_k',k).
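As an illustration of how these tetradic matrix elements can be assembled numerically, the following minimal NumPy sketch builds the Liouvillian matrix from a Hamiltonian matrix H and a photon annihilation operator a given in a truncated basis, using one common row-stacking convention for |k⟩⟨ l|; the function name and interface are ours and not part of the original implementation.

import numpy as np

def dicke_liouvillian(H, a, kappa):
    # Tetradic (row-stacked) representation: rho_{kl} -> component (k*N + l).
    # H and a are N x N matrices in a truncated Hilbert-space basis.
    N = H.shape[0]
    I = np.eye(N)
    n_op = a.conj().T @ a
    # Unitary part: -i [H, rho]
    L_H = -1j * (np.kron(H, I) - np.kron(I, H.T))
    # Photon-loss dissipator: kappa (2 a rho a^dag - {a^dag a, rho})
    L_kappa = kappa * (2.0 * np.kron(a, a.conj())
                       - np.kron(n_op, I) - np.kron(I, n_op.T))
    return L_H + L_kappa  # N^2 x N^2 complex matrix

# complex spectrum: lam = np.linalg.eigvals(dicke_liouvillian(H, a, kappa))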
§.§ Fock Basis and Liouville Basis
The standard way to diagonalize the Dicke Hamiltonian is using the Fock basis composed by Dicke states |j,m_z⟩ (with m_z=-j,-j+1,…,j-1,j) and Fock states |n⟩ (with n=0,1,…,∞) in tensor product
|f⟩ = |n;j,m_z⟩ = |n⟩⊗|j,m_z⟩,
where the index f(n,m_z)=(2j+1)n+m_z+j+1 reorders the elements of the basis with a single value. As was mentioned previously, the Fock basis is infinite; nevertheless, a finite truncation value n_max for the bosonic subspace is selected in order to solve the system numerically.
The Fock basis can be used to generate the diagonalization basis of the Dicke Liouvillian or Liouville basis
|f',f⟩⟩ = |f'⟩⟨ f| = |n';j,m'_z⟩⟨ n;j,m_z|,
where the matrix elements of the Dicke Liouvillian are given by
⟨ f'|Ĥ_D|f⟩ = (ω n+ω_0m_z)δ_n',nδ_m'_z,m_z + γ/√(𝒩)(√(n+1)δ_n',n+1+√(n)δ_n',n-1)×(C_m'_z,m_z^+δ_m'_z,m_z+1 + C_m'_z,m_z^-δ_m'_z,m_z-1)
with C_m'_z,m_z^±=√(j(j+1)-m_z(m_z±1)), and
⟨ f'|â|f⟩ = √(n)δ_n',n-1δ_m'_z,m_z,
⟨ f'|â^†|f⟩ = √(n+1)δ_n',n+1δ_m'_z,m_z,
⟨ f'|â^†â|f⟩ = nδ_n',nδ_m'_z,m_z,
⟨ f'|ââ^†|f⟩ = (n+1)δ_n',nδ_m'_z,m_z.
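For concreteness, the operators entering these matrix elements can be constructed in the truncated product basis as in the NumPy sketch below; it assumes the normalization 𝒩 equals the number of atoms 2j, and all names are illustrative. The returned H and a can be passed to the Liouvillian sketch given earlier.

import numpy as np

def dicke_hamiltonian_and_a(n_max, j, omega, omega0, gamma):
    nb, ns = n_max + 1, int(2 * j + 1)
    # bosonic annihilation operator: <n-1|a|n> = sqrt(n)
    a_b = np.diag(np.sqrt(np.arange(1, nb)), k=1)
    # collective spin operators in the |j,m_z> basis, m_z = -j, ..., j
    m = np.arange(-j, j + 1)
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), k=-1)  # raises m_z by one
    Ib, Is = np.eye(nb), np.eye(ns)
    a = np.kron(a_b, Is)  # ordering |n> (x) |j,m_z>, as in f(n, m_z)
    H = (omega * np.kron(a_b.T @ a_b, Is)
         + omega0 * np.kron(Ib, Jz)
         + gamma / np.sqrt(2 * j) * np.kron(a_b + a_b.T, Jp + Jp.T))
    return H, a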
§.§ Dicke Liouvillian with Well-Defined Parity
The Fock basis |f⟩ is an eigenbasis of the parity operator, Π̂ = exp[iπ(â^†â+Ĵ_z+j1̂)]
Π̂|f⟩ = p|f⟩,
with eigenvalues p = (-1)^(n+m_z+j) = ± 1, which allows one to select a basis with well-defined parity in the Hilbert space. In the same way, a basis with well-defined parity can be selected in the Liouville space, where the parity superoperator 𝒫̂ acts on the Liouville basis |f',f⟩⟩=|f'⟩⟨ f| as
𝒫̂|f',f⟩⟩ = Π̂|f'⟩⟨ f|Π̂^† = P|f',f⟩⟩,
with eigenvalues P = (-1)^(n'+m'_z-n-m_z) = ± 1.
§ UNFOLDING OF COMPLEX SPECTRA
The unfolding of complex spectra is needed in order to remove system-specific structures from them, just as for real spectra, and it can be implemented in different ways <cit.>. Following the method presented in Ref. <cit.>, the spectral density of states can be separated into an average (system-specific) and a fluctuating (universal) part
ν(φ_k) = ∑_l=1^Nδ^(2)(φ_k-φ_k,l) = ν_a(φ_k)+ν_f(φ_k),
where the averaged spectral density of states is approximated by a sum of Gaussian functions near each complex eigenvalue φ_k∈ℂ of a set with N elements
ν_a(φ_k) ≈1/2πσ^2N∑_l=1^Ne^-|φ_k-φ_k,l|^2/(2σ^2),
where σ = 4.5S, S = N^-1∑_k=1^Ns_k, and the nearest-neighbour spacings s_k = |φ_k-φ_k^1N| are scaled as
s̄_k = (√(ν_a(φ_k))/S̄)s_k,
with S̄ = N^-1∑_k=1^N√(ν_a(φ_k))s_k.
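A compact NumPy sketch of this unfolding, under the assumption that s_k is the nearest-neighbour spacing in the complex plane, could read as follows (function and variable names are ours):

import numpy as np

def unfolded_spacings(phi):
    # phi: 1D array of complex eigenvalues in the selected window
    N = len(phi)
    dist = np.abs(phi[:, None] - phi[None, :])                     # |phi_k - phi_l|
    s = np.where(np.eye(N, dtype=bool), np.inf, dist).min(axis=1)  # nearest-neighbour spacings s_k
    sigma = 4.5 * s.mean()
    # averaged spectral density as a sum of Gaussians around each eigenvalue
    nu_a = np.exp(-dist**2 / (2 * sigma**2)).sum(axis=1) / (2 * np.pi * sigma**2 * N)
    w = np.sqrt(nu_a)
    return w * s / np.mean(w * s)                                  # rescaled spacings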
|
http://arxiv.org/abs/2307.07550v1 | 20230714180004 | Primordial non-Gaussianity as a probe of seesaw and leptogenesis | [
"Chee Sheng Fong",
"Anish Ghoshal",
"Abhishek Naskar",
"Moinul Hossain Rahat",
"Shaikh Saad"
] | hep-ph | [
"hep-ph",
"astro-ph.CO",
"hep-th"
] |
|
http://arxiv.org/abs/2307.04780v2 | 20230710082045 | Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation | [
"Fernando Torales Acosta",
"Vinicius Mikuni",
"Benjamin Nachman",
"Miguel Arratia",
"Bishnu Karki",
"Ryan Milton",
"Piyush Karande",
"Aaron Angerami"
] | cs.LG | [
"cs.LG",
"hep-ex",
"hep-ph",
"nucl-ex",
"physics.ins-det"
] |
[email protected]
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
National Energy Research Scientific Computing Center, Berkeley Lab, Berkeley, CA 94720, USA
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA
Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA
Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Computational Engineering Division, Lawrence Livermore National Laboratory, Livermore CA 94550
Nuclear and Chemical Science Division, Lawrence Livermore National Laboratory, Livermore, CA 94550
Score-based generative models are a new class of generative models that have been shown to accurately generate high dimensional calorimeter datasets. Recent advances in generative models have used images with 3D voxels to represent and model complex calorimeter showers. Point clouds, however, are likely a more natural representation of calorimeter showers, particularly in calorimeters with high granularity. Point clouds preserve all of the information of the original simulation, more naturally deal with sparse datasets, and can be implemented with more compact models and data files. In this work, two state-of-the-art score-based models are trained on the same set of calorimeter simulations and directly compared.
Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation
Aaron Angerami
Received / Accepted
================================================================================
§ INTRODUCTION
Detector simulations are essential tools for data analysis by connecting particle and nuclear physics predictions to measurable quantities. The most precise detector simulations are computationally expensive. This is especially true for calorimeters, which are designed to stop most particles and thus require modeling interactions from the highest accessible energies down to the lowest ones. Well-established experiments typically have bespoke fast simulations that capture the salient aspects of the precise simulations (usually based on Geant <cit.>) at a fraction of the computational cost. Traditionally, fast simulations are constructed to reproduce a series of low-dimensional observables. Furthermore, assembling an effective fast simulation is time intensive. If there were a way to build a fast simulation automatically using the full detector dimensionality, then data analysis at existing and developing experiments could be greatly enhanced.
Deep learning (DL) has been used to build automated and high-dimensional fast simulations (`surrogate models') for calorimeters. Starting from Generative Adversarial Networks (GANs) <cit.> <cit.> and now including Variational Autoencoders <cit.> <cit.>, Normalizing Flows <cit.> <cit.>, and Diffusion Models <cit.> <cit.>, deep learning based calorimeter simulations have rapidly improved over the last years. They are even starting to be used in actual experimental workflows, such as the ATLAS Collaboration fast simulation <cit.>. The recent CaloChallenge <cit.> community comparison showcased the state-of-the-art methods deployed to increasingly granular current and future detectors. As segmented detectors, calorimeters are naturally represented as (possibly irregular) images. Nearly all proposed methods for DL-based calorimeter simulations are based on an image format (fixed grid of pixels). However, these data are unlike natural images in a number of ways, most notably in their sparsity. As such, image-based approaches pioneered in industry may not be the most effective for particle interactions.
Since most cells in a calorimeter image are empty, a more natural representation of these data may be a point cloud. Point clouds are a set of attributes assigned to locations in space. In the calorimeter case, the attribute is energy and the location is the cell coordinates. A calorimeter point cloud would require far fewer numbers to specify than an image representation, since only cells with non-zero energy would be recorded. The main challenge for point cloud models, in contrast to image-based approaches, is that they must cope with variable-length outputs that respect permutation invariance. With a lag compared to image-based approaches, point cloud generative models for particle/nuclear physics applications have seen rapid development in recent years <cit.>. However, until recently, these models had never been applied to calorimeter simulations.
The first (and until now, only) publication describing point cloud generative models applied to calorimeters is Ref. <cit.>, which proposed generating Geant `hits' (deposits of energy) prior to their discretization into cells. This innovative idea enables the separation of material interactions from readout geometry. However, the number of hits vastly exceeds the number of non-zero cells, which makes this task difficult. In this paper, we explore point cloud generative models applied directly to cell-level information. In other words, we take calorimeter images and compare state-of-the-art generative models that represent the same inputs as either images or (zero-suppressed) point clouds. As a case study, the two representations are compared using simulations of a high-granularity hadronic calorimeter, similar to the design planned for the ePIC detector at the future Electron-Ion Collider <cit.>.
This paper is organized as follows. Section <ref> describes the DL models used for the comparison. Both the image-based and point-cloud representations are generated with diffusion models in order to make the comparison as direct as possible. The simulation of the calorimeter dataset is found in Sec. <ref>. Discussion of the advantages and disadvantages of both representation, as well as numerical results are presented in Sec. <ref>. The paper ends with conclusions and outlook in Sec. <ref>.
§ DEEP LEARNING MODELS
Generative models for detector simulation aim to precisely emulate physics-based models, like those based on Geant, but using far less time than the full simulation. With 𝒪(100) detector components, neural network architectures solely based on fully connected layers can efficiently produce high fidelity samples, resulting in surrogate models thousands of times faster than the standard simulation routines <cit.>. For higher detector granularity (𝒪(1k) - 𝒪(10k)), the use of data symmetries becomes crucial to achieve precision. These can be directly included in the model design through dedicated neural network architectures or included in the data pre-processing <cit.>. For generative models such as normalizing flows, introducing flexible network architectures is often not trivial, as the model invertibility and tractable Jacobian of the transformation place a strong restriction on the model design. A second difficulty is to achieve a stable training routine of the surrogate model. At finer granularities, neural network models tend to become larger to accommodate the data complexity, often resulting in unstable training schedules. This issue becomes more prominent in generative models such as variational autoencoders, where the latent space can vary rapidly, leading to an unstable response of the decoder network, or GANs, where the adversarial training requires careful tuning of the model hyperparameters to achieve a stable training.
Diffusion models are a class of generative neural networks that allow for stable training paired with high flexibility in the model design. Data is slowly perturbed over time using a time parameter t ∈ℝ that determines the perturbation level. The task of the neural network is to approximate the gradients of the log probability of the data, or the score function ∇_xp(x) ∈ℝ^D, based on data observations x∈ℝ^D in the D-dimensional space. This can be approximated by a denoising score-matching strategy <cit.>. In the implementation used in this paper, data observations x∼ p_data(x) are perturbed using the kernel 𝐱_t∼ q(𝐱_t|𝐱)=𝒩(𝐱_t;α_t𝐱,σ_t^2𝐈), with time-dependent parameters α and σ determining the strength of the perturbation to be applied. In the variance-preserving setting of diffusion processes, σ_t^2 = 1 - α_t^2. For the time-dependence, a cosine schedule is used such that α_t = cos(0.5π t).
The loss function to be minimized is implemented using a velocity parameterization:
ℒ_θ = 𝔼_ϵ,t‖𝐯_t - 𝐯̂_t,θ‖^2,
where the time-dependent network output with trainable parameters θ, 𝐯̂_t,θ, is compared with the velocity of the perturbed data at time t, 𝐯_t ≡α_tϵ-σ_t𝐱, with ϵ∼𝒩(0,𝐈). The score function is then identified as
∇_xlogp̂_θ(𝐱_t) = -𝐱_t - (α_t/σ_t)𝐯̂_t,θ(𝐱_t).
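As a rough illustration of this training objective, a single optimization step could be sketched in PyTorch as below; the model interface, argument names, and batch handling are our own assumptions rather than the actual implementation.

import math
import torch

def vp_diffusion_loss(model, x, cond):
    # x: clean data batch; cond: conditioning information (e.g. particle energy)
    t = torch.rand(x.shape[0], device=x.device)                        # t ~ U(0, 1)
    alpha = torch.cos(0.5 * math.pi * t).view(-1, *([1] * (x.dim() - 1)))
    sigma = torch.sqrt(1.0 - alpha**2)                                 # variance preserving
    eps = torch.randn_like(x)
    x_t = alpha * x + sigma * eps                                      # perturbation kernel q(x_t | x)
    v_target = alpha * eps - sigma * x                                 # velocity target
    v_pred = model(x_t, t, cond)
    return ((v_pred - v_target) ** 2).mean()

# the score follows from the prediction: score = -x_t - (alpha / sigma) * v_pred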
The data generation from the trained diffusion models is implemented using the DDIM sampler proposed in Ref. <cit.> that can be interpreted as an integration rule <cit.> with update rule specified by:
𝐱_s = α_s𝐱̂_θ(𝐱_t) + σ_s(𝐱_t - α_t𝐱̂_θ(𝐱_t))/σ_t.
For a fair comparison, all diffusion models are trained using the same score-matching strategy and fixed number of 512 time steps during sampling.
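The sampling loop implied by this update rule can be sketched as follows, again with the cosine schedule and an assumed model interface; the number of steps matches the 512 used here, but everything else is illustrative.

import math
import torch

@torch.no_grad()
def ddim_sample(model, shape, cond, n_steps=512, device="cpu"):
    x = torch.randn(shape, device=device)                          # start from noise at t = 1
    times = torch.linspace(1.0, 0.0, n_steps + 1, device=device)
    for t, s in zip(times[:-1], times[1:]):
        a_t, a_s = torch.cos(0.5 * math.pi * t), torch.cos(0.5 * math.pi * s)
        s_t, s_s = torch.sqrt(1.0 - a_t**2), torch.sqrt(1.0 - a_s**2)
        t_batch = torch.full((shape[0],), float(t), device=device)
        v = model(x, t_batch, cond)
        x0_hat = a_t * x - s_t * v                                 # data estimate from velocity
        x = a_s * x0_hat + s_s * (x - a_t * x0_hat) / s_t          # DDIM update x_t -> x_s
    return x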
The fast point cloud diffusion model (FPCD) follows <cit.>, where a permutation equivariant estimation of the score function is obtained by the combination of a DeepSets <cit.> architecture with attention layers <cit.>. During the point cloud simulation, two models are also defined: one that learns the number of non-empty cells, conditioned on the initial energy of the incoming particle, and one model that learns the score function of the normalized point cloud, also conditioned on the energy of the particle to be simulated and the number of hits to be generated. This model is trained on Dataset 1, described in Sec. <ref>.
The model trained on the image dataset (CaloScore) is adapted from <cit.> with a few modifications. Compared to the original implementation, the calorimeter simulation task is now broken down into two diffusion models: one that learns only the energy deposits in each layer of the calorimeter, conditioned on the initial energy of the particle to be simulated, and one model that learns to generate normalized voxels per layer, conditioned on the energy deposition in each layer and the initial energy of the particle to be simulated. Additionally, the original U-Net <cit.> model is combined with attention layers. These changes increase the model expressiveness and the generation fidelity. This model is trained on Dataset 2, described in Sec. <ref>.
§ DETECTOR AND DATA DESCRIPTIONS
§.§ Calorimeter Simulation
The DD4HEP framework <cit.> is used to run Geant simulations of a high-granularity iron-scintillator calorimeter (based on the CALICE-style design <cit.>), which has dimensions similar to those of the forward hadronic calorimeter in the future ePIC detector (LFHCAL <cit.>) at the EIC. Specifically, the sampling structure comprises 0.3 cm scintillator tiles sandwiched between 2.0 cm thick steel plates. It consists of a total of 55 layers. The transverse area of the scintillator is set to 10 cm×10 cm, somewhat larger than in Ref. <cit.>. It adopts a non-projective geometry with tower elements arranged in parallel to the z axis and has its front face at z=3.8 m.
1.7 million events of single π^+ particles incident on the center of the calorimeter are simulated. The incident momentum, P_Gen., was generated uniformly in log_10 space in the range 1.0 < P_Gen. < 125 GeV/c. In order to hit the center of the calorimeter, the pions were generated with a polar angle of θ_Gen. = 17^∘. Because the detector is symmetric about ϕ, the particles are generated in the range 0^∘ < ϕ_Gen. < 360^∘.
An energy threshold of 0.3 MeV is used to select hits for further analysis.
§.§ Datasets
Dataset 1 is the point cloud representation of the Geant showers, while Dataset 2 represents the same showers using the image representation. Both Dataset 1 and Dataset 2 used in training share the same parent Geant simulation, such that the fast point cloud diffusion model and the image model are trained on different representations of the same set of calorimeter showers.
Dataset 1 is created by taking the Geant simulation and converting it to a format based on JetNet data <cit.>, which stores information on jets and their constituents in a zero-suppressed point cloud representation. The Geant data is stored in files containing two datasets, clusters and cells. The cluster dataset contains the P_Gen. of the incident pion, as well as the number of hits in the calorimeter. The cell dataset comprises a constant number of 200 cells per event. Empty cells, or cells with deposited energy below the threshold, are masked, with all values set to 0.0, and ignored during training.
The x, y, and z distributions of the Geant simulation are initially discrete, resulting from the digitization step of the simulation, with values equal to the centers of the cells in each dimension. The point cloud model struggles to learn extremely sharp features, as the score function is not well-defined for discrete inputs. To circumvent this, a uniform smearing within a cell-width is applied to the cells along each dimension to obtain continuous distributions for the final point cloud dataset. This maintains the same distributions at histogram level when binning according to the cell width, but yields a point cloud dataset with smooth x, y, and z distributions. Without this smearing, the distributions in x, y, and z resemble a series of delta functions that the point cloud model struggles to reproduce. The point cloud model is trained on this smeared point cloud representation of the Geant simulation.
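A minimal sketch of this sub-cell smearing is given below; it assumes each point stores (x, y, z, E) with masked cells at zero, which is our reading of the format rather than the exact implementation.

import numpy as np

def smear_cells(points, cell_width, rng=np.random.default_rng()):
    # points: array of shape (..., 4) holding (x, y, z, E) at the discrete cell centres
    smeared = points.copy()
    hit_mask = points[..., 3] > 0                       # leave masked (empty) cells untouched
    shift = rng.uniform(-0.5, 0.5, size=points[..., :3].shape) * cell_width
    smeared[..., :3] += shift * hit_mask[..., None]
    return smeared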
Dataset 2 is created by converting the point cloud dataset into an image format. Images at the original granularity would be too large for the generative model. The calorimeter cells were therefore clustered into groups of 5 along each axis of the detector to create voxels, where 5×5×5 cells = 1 voxel. The energies of the cells making up each voxel were summed and assigned to that voxel. The final image format consists of 11×11×11 voxels. A hit in the voxelized dataset, as referenced in Section <ref>, is defined as any voxel with energy deposition above threshold.
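The voxelization just described could be sketched as follows, assuming each hit is stored with integer cell indices along the three axes; the helper name and data layout are illustrative.

import numpy as np

def voxelize(hits, n_cells=55, group=5):
    # hits: iterable of (ix, iy, iz, E) with integer cell indices and hit energy
    n_vox = n_cells // group
    image = np.zeros((n_vox, n_vox, n_vox))
    for ix, iy, iz, e in hits:
        if e <= 0:
            continue
        image[int(ix) // group, int(iy) // group, int(iz) // group] += e  # sum energies per voxel
    return image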
For the final comparison, generated samples from the point cloud model are voxelized using the same method for Dataset 2. All comparisons are in this image format, at the same resolution of 11 × 11 × 11 voxels per image.
Images representing the full resolution of the calorimeter with 55×55×55 voxels were not used, as this would result in unmanageably large datasets (see Table <ref>), and would represent the largest calorimeter image training ever done. The point cloud model was trained on the full resolution because point clouds naturally represent the calorimeter at full granularity. Training the point cloud model on this more natural representation is in line with the goal of this work to investigate advantages/disadvantages of two representations of the calorimeter data. It is also for this reason that the generated point cloud distributions are shown separately, while the direct comparisons between models are done in the image representation. Investigating possible advantages of a point-cloud model trained directly on the voxelized dataset is left to future work.
§ RESULTS
All generated samples along with Geant are converted to the same image format at the same resolution of 11×11×11 voxels per event for fair comparison. A variety of distributions are used to evaluate the quality of the generated images. After comparing calorimeter images generated by both models, the point cloud representation of Geant is compared to the generated samples of the point-cloud model to provide additional insight into the previous image-based comparison. For all comparisons, the Earth mover's distance (EMD) <cit.>, also known as the 1-Wasserstein distance <cit.>, between the generated and Geant distributions is calculated.
The EMD score is a distance-like measure of the dissimilarity between two distributions. It roughly represents the minimum amount of work needed to transform one distribution into another. While this is not the only possible metric, it is a standard and widely-used statistic that was also the main distance deployed in <cit.>, where an image-based model was compared to a Wasserstein-GAN. All EMD scores in Figures <ref>, <ref> and <ref> are calculated on the final voxelized distributions.
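For a one-dimensional observable, the EMD between a generated and a Geant sample can be computed with SciPy as in the short sketch below; the input arrays are placeholders standing in for the actual per-shower quantities.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
deposited_energy_geant = rng.gamma(2.0, 5.0, size=10_000)      # placeholder Geant samples
deposited_energy_generated = rng.gamma(2.0, 5.2, size=10_000)  # placeholder generated samples
emd = wasserstein_distance(deposited_energy_geant, deposited_energy_generated)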
Figure <ref> shows a qualitative assessment of the generative models using the 2-dimensional distribution of the average energy deposition in three layers. All voxels with an expected energy deposition above 0 are populated in both the image- and point cloud-based models, with very few additional hits. The calorimeter showers have diverse shapes, as well as different overall distributions of voxels, due to the variation of ϕ_Gen.. The qualitative similarity of the images in Fig. <ref> indicates that the models reproduce the various showers from the training dataset well. Each image contains a ring because θ_Gen. is fixed while ϕ_Gen. varies.
Table <ref> shows the model size, size of each dataset, and time to generate 100k calorimeter showers. The disk size and sample time under the point cloud model are for showers in the point cloud representation. The AUC is obtained from a classifier trained to distinguish the samples of both models only in the voxelized image format. Both models have very good AUC, reasonably close to 0.5, with the image model having the lower AUC. The point cloud model is smaller by a factor of 4 compared to the image based model, and samples events 3 times faster. Lastly, the point cloud dataset requires over 100 times less disk space than the image format at full granularity.
Figure <ref> compares the total energy deposited in the calorimeter and total number of calorimeter hits, where a hit is defined as any voxel with energy above threshold. The EMD is also calculated between Geant and the different generative models.
Both the image-based diffusion model and the point-cloud based diffusion model are in good agreement with Geant at small deposited energies, deviating no more than 10%. At the highest deposited energies, however, both diffusion models begin to fall away from Geant, with the point-cloud model generating less energy, and the image based model generating slightly more energy than Geant. These trends begin at about 10 GeV, with the point-cloud model deviating slightly earlier. The point-cloud model also shows a slightly higher EMD score than the image based model.
The region where the deviations are largest, past 20 GeV of deposited energy, is sparsely populated, and statistical fluctuations begin to dominate the Geant distributions there.
The number of hits shows a similar trend, though with larger deviations. At a small number of hits, both models show good agreement with Geant, with deviations slightly above 10%. At 15 or more hits, both models begin to deviate well past 10%, with the point cloud model oversampling the number of hits and the image-based model generating fewer hits than Geant.
Figures <ref> and <ref> show the average deposited energy along the x, y, and z coordinates. Both models struggle in the first and last layers in the x and y coordinates, but show good agreement in the middle layers. While the image-based model shows larger deviations in the first and last layers of the calorimeter compared to the point-cloud model, it has an overall lower EMD in both distributions. The two-pronged feature of these distributions is a result of generating the pions at a fixed polar angle and varying ϕ. It should be noted that there are few to no hits in the first and last x and y layers of the calorimeter, so even a very small deviation from Geant results in a large percentage deviation (bottom panels of Fig. <ref> and <ref>). Similarly, as there are fewer hits towards the back of the detector, deviations increase slightly for the very last layers. However, the z-distributions show both models in very good agreement with the original Geant predictions, a possible effect of the z-distribution of hits being less dependent on the generated θ and ϕ ranges.
All three distributions show that the point cloud samples are systematically lower than the original Geant distributions. This indicates the point cloud model would benefit from learning the energy per layer directly, as is done in the image model described in Sec. <ref>. This difference likely explains why this small bias is observed in the point cloud model but not in the image model, and it is an avenue for improving the point cloud model.
Following <cit.>, a classifier was trained to distinguish between generated showers and Geant showers. The classifier is comprised of two fully connected layers of size 256 using the RELU activation function. The classifier is trained only on vectors of voxelized images of each dataset. The area under the receiver-operator curve (AUC) for the image model was 0.673. The AUC for the point-cloud model was 0.726. Generally, being closer to 0.5, where the classifier is maximally confused, is the target. However, the AUCs obtained by both models are very promising, as achieving an AUC even slightly below 1.0 is non-trivial.
A key advantage of the point cloud model is that the distributions at the sub-voxel level can be shown. The point cloud model already simulates the data at the original granularity of the calorimeter, and voxelization is only necessary for the image representation. The original output of the point cloud model is compared to the continuous (or smeared) Geant distributions.
Figure <ref> shows the number of hits in the point cloud representation of the calorimeter showers. In the point-cloud representation, a hit is defined as any cell with an energy deposition above threshold.
The point-cloud model reproduces the total number of cell hits well, much better than the voxel hit distribution shown in Fig. <ref>. This may indicate that while the point cloud model is overall similar to Geant in both representations, small deviations in the point cloud distributions can be summed into larger deviations during the voxelization process, where 125 individual cells are combined into a single voxel. On the other hand, there is a large symmetry group under which mismodelings in the bigger space may not affect the modeling in the coarser space, so further investigation is needed. Nevertheless, the very good agreement with Geant in the number of cell hits and the degrading agreement in the number of voxel hits indicate that the first diffusion model of the point cloud architecture is performing well, while the second model, responsible for sampling the cell distributions, would likely benefit from additional tuning.
Similar conclusions can be derived from Fig. <ref>, which shows the generated point samples at the full detector granularity in good agreement with Geant. Fig. <ref> shows the average x, y, and z coordinate distributions, as well as the cell log_10E distribution in the point representation. Again, there are larger relative deviations in the first and last layers in the x, y, and z coordinates, where there are very few hits, just as in the image representation. However, there is very good agreement with the Geant simulation in layers containing a reasonable number of hits.
§ CONCLUSION AND OUTLOOK
In this paper, we make the first direct comparison between two score based generative models using either images or point clouds as representations of the same training data. We use Geant calorimeter simulations of a high-granularity hadronic calorimeter. Both models perform well for most distributions, with very similar AUCs, but the image-based diffusion model invariably has a lower EMD in each comparison to Geant.
Overall, the performance of the point-cloud diffusion model is very close to the image model. This is despite the point cloud model being disadvantaged in this work in a few important ways.
First, the calorimeter showers from the FPCD model are closest to Geant in the point cloud representation at the full calorimeter granularity, as shown in Figs. <ref> and <ref>, but they are later voxelized for comparison. This may compound mismodeling during the voxelization; however, further investigation is needed.
Second, the point cloud model is adapted from a model architecture initially designed for jet data from the JetNet datasets. While the high-level structure of the datasets are very similar, the data itself are quite different. For example, the first diffusion model making up the point cloud model was initially much larger, as predicting the jet multiplicity is in general a more difficult problem than the number of non-empty cells in a calorimeter shower. Reducing the size of the first diffusion model of the point cloud model architecture had no impact on performance while speeding up training. The second diffusion model making up the point cloud model architecture that is responsible for sampling the cell x, y, z, and E was directly adapted from <cit.>. Further tuning of the point cloud model, particularly the cell-model can likely close the small remaining gap in performance. The image model, in contrast, is based on CaloScore, which was tuned specifically for calorimeter showers.
Lastly, the image-based model uses the energy deposition in each layer in addition to the generated particle momentum to condition the second diffusion model making up its architecture. The second diffusion model making up the point cloud model is solely conditioned on the generated particle momentum. This might explain why the point cloud model has systematically lower mean energy distributions (see Fig. <ref> and <ref>) compared to both Geant and the image based model.
These potential sources of improvement in the point cloud model should not detract from its already very reasonable performance, deviating from Geant by more than 10% only in the sparsest layers, where the image-based model also struggles. At the same time, the point cloud model offers several advantages over the image model.
First, there is the sheer size of the data. Using the same zlib compression, the point cloud data saved to HDF5 files is a factor of 100 smaller than the image-based dataset at full granularity, with no voxelization. As calorimeters continue to increase in granularity, this difference will only increase.
Second, information is lost during the voxelization process; cell hits with the same x, y, z coordinates but different energies are summed over in the image representation. This is true even if images are produced at the full granularity of the calorimeter, where hits within single cells are summed over. This means that voxelized datasets cannot naturally be reverted back to a point cloud representation.
Additionally, as was shown in this work, the generated point clouds can be voxelized afterwards, or converted into other representations that better fit specific use cases.
This work establishes a benchmark for future research on generative models, offering valuable insights into the challenges of modeling hadronic showers in highly granular calorimeters using image-based techniques, while also exploring the potential of point-cloud methods. The current advantages of point clouds, in combination with improvements to close the remaining performance gap described earlier, will likely make point cloud based models a clear choice for highly granular calorimeters. This work should serve as a reference for studies utilizing future calorimeters based on the CALICE design, including those intended for use in CMS at the LHC and ePIC at the EIC.
§ CODE AVAILABILITY
The code used to produce the point cloud results shown in this document are available at <https://github.com/ftoralesacosta/GSGM_for_EIC_Calo>. The code for the image based model and comparisons of images is available at <https://github.com/ViniciusMikuni/Calo4EIC>. Example Geant4 datasets and generated samples are available at <https://zenodo.org/record/8128598>.
§ ACKNOWLEDGMENTS
We acknowledge support from DOE grant award number DE-SC0022355.This research used resources from the LLNL institutional Computing Grand Challenge program and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award HEP- ERCAP0021099. M.A acknowledges support through DOE Contract No. DE-AC05-06OR23177 under which Jefferson Science Associates, LLC operates the Thomas Jefferson National Accelerator Facility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
unsrt
|
http://arxiv.org/abs/2307.04011v1 | 20230708164347 | Robust Learning-Based Incipient Slip Detection using the PapillArray Optical Tactile Sensor for Improved Robotic Gripping | [
"Qiang Wang",
"Pablo Martinez Ulloa",
"Robert Burke",
"David Cordova Bulens",
"Stephen J. Redmond"
] | cs.RO | [
"cs.RO",
"cs.LG"
] |
[
Kipton Barros
August 12, 2023
===================
empty
empty
The ability to detect slip, particularly incipient slip, enables robotic systems to take corrective measures to prevent a grasped object from being dropped. Therefore, slip detection can enhance the overall security of robotic gripping. However, accurately detecting incipient slip remains a significant challenge. In this paper, we propose a novel learning-based approach to detect incipient slip using the PapillArray (Contactile, Australia) tactile sensor. The resulting model is highly effective in identifying patterns associated with incipient slip, achieving a detection success rate of 95.6% when tested with an offline dataset. Furthermore, we introduce several data augmentation methods to enhance the robustness of our model. When transferring the trained model to a robotic gripping environment distinct from where the training data was collected, our model maintained robust performance, with a success rate of 96.8%, providing timely feedback for stabilizing several practical gripping tasks. Our project website: <https://sites.google.com/view/incipient-slip-detection>.
§ INTRODUCTION
§.§ Background
Autonomous robots have yet to achieve human-like dexterity when performing gripping tasks, mainly due to a lack of satisfactory tactile perception and processing abilities. Studies have shown that even humans struggle with simple gripping tasks in the absence of tactile sensation <cit.>. The palm of the human hand contains ∼17,000 mechanoreceptors, i.e., specialized nerve endings that respond to mechanical stimuli such as deformation, pressure, and displacement <cit.>. These receptors play a crucial role in sensing and relaying tactile information to the nervous system <cit.>, allowing humans to adjust their grip in real-time to account for slipperiness and other factors. Building on these insights, researchers have designed tactile sensors replicating part of human hand sensing capabilities and explored slip detection techniques using these sensors to enhance robotic manipulation performance <cit.>.
§.§.§ Types of slip
The two main types of slip are gross slip and incipient slip. Gross slip refers to the occurrence of slip across the entire contact surface, where the relative motion between the gripper or tactile sensor and the gripped object is typically observable at a macro level <cit.>. On the other hand, incipient slip refers to the initial stage of slip, when parts of the contact surface slip while others remain stuck <cit.>. For example, when an object is held by elastic fingertips, and an external force is applied to the object in a direction tangential to the contact surface, some parts of the fingertips will stretch while others will compress, causing incipient slip at the periphery of the contact surface while the central part remains stuck. As the applied force increases, the slip will finally spread across the entire contact surface, leading to gross slip. Throughout the incipient slip phase, there may not be any observable relative motion between the object and the finger.
§.§.§ Slip detection and challenges
Previous studies have proposed techniques to detect gross slip and apply corrective measures when the slip is detected to prevent objects from dropping out of the grasp <cit.>. Detecting gross slip may not always be a wise strategy, as it occurs when the entire contact has already started slipping. On the other hand, detecting incipient slip can provide an early warning of an impending and more dangerous gross slip, allowing corrective measures to be applied earlier, and increasing the likelihood of maintaining a safe grip. However, detecting incipient slip is not trivial because it requires the contact interface of the sensor to possess adequate elasticity, enabling one part to undergo sufficient and detectable deformation, resulting in slip, while the other part remains stuck. Furthermore, validating incipient slip can be challenging since it is not generally associated with macro-level relative movement between the sensor/finger and the object. To verify the occurrence of incipient slip, researchers commonly utilize a camera to monitor the contact surface; by examining the camera images, they can visually confirm the presence of incipient slip events <cit.>. However, this method of relying on cameras may not be feasible in real-world situations, such as when gripping everyday objects.
§.§ Our contribution
Our study presents a new technique for detecting incipient slip using the PapillArray (Contactile, Australia) tactile sensor. This sensor features a square array of nine elastic silicone pillars with varying unloaded heights, promoting different normal forces on the pillars when pressed against a surface. This design enhances the likelihood of inducing incipient slip on shorter pillars when a tangential force is applied.
We utilized deep neural networks (NN) to develop our incipient slip detection algorithm, where we made novel use of the data gathered in a previous study <cit.> to construct the dataset for training and evaluating the NN. The primary objective of the NN was to classify inputs into two distinct categories: incipient slip and other, functioning as a binary classifier; other refers to all other states that are not incipient slip, such as gross slip or being stationary. Furthermore, the tactile data at hand is presented in the form of a uniformly-sampled time series. Therefore, to effectively capture the serial nature of the data, we utilize a recurrent neural network (RNN) <cit.>. The inclusion of historical data in a NN model has the potential to enhance its performance in real-time prediction tasks, as it enables the capture of temporal patterns and dependencies, leading to more robust and accurate forecasts <cit.>. We also propose several data augmentation methods designed to enhance the performance and robustness of our trained model, making it robust to environmental confounders.
§ RELATED WORK
Similar to the approach we will take in this paper, the approach proposed in <cit.> treats slip detection as a classification task; the authors employed a support vector machine <cit.> to detect slip using the velocity of embedded pins on the inner surface of a TacTip camera-based tactile sensor <cit.>. Labels of the training data are assigned manually based on the alignment of pin velocities. In a more recent study <cit.>, the authors modified the TacTip sensor used in <cit.> by introducing raised fingerprint-like ridges, decreasing skin thickness, and increasing pin spacing to reduce mechanical coupling between ridges and to create the traction differential and facilitating the shear displacement required for the occurrence of incipient slip. This is similar to the behavior seen on the human finger pad when sheared against an object, thus allowing the sensor to experience incipient slip. They used an external camera to monitor the contact in real-time for data labeling, and then employed a convolutional neural network <cit.> as a binary classifier to detect incipient slip.
The GelSight technology is another camera-based tactile sensing system that uses an elastic body to establish a contact with an object, with the built-in camera recording the resulting deformation to obtain tactile data <cit.>. An approach was introduced in <cit.> for detecting incipient slip using the GelSight sensor. This method determines the degree of incipient slip by analyzing the inhomogeneity of the displacement field, which is quantified in terms of entropy. More recently, a more advanced version of the GelSight technology, called GelSlim, was proposed in <cit.>; it employed the deviation of the deformation field from a 2D planar rigid displacement field to determine slip.
Compared to camera-based tactile sensors, the distributed optical sensor used in our work, the PapillArray, is less complex in terms of instrumentation<cit.>. It offers several advantages over other sensor designs, including size, temporal resolution, and compliance. A heuristic algorithm that employs the PapillArray tactile sensor to detect incipient slip is proposed in <cit.>. The approach is based on the observation that incipient slip happens when some sensor pillars stop deflecting at the same rate as the contacted object is moving in the sensor's frame of reference. Precisely, this approach detects slip by evaluating the tangential velocity drop with respect to a reference pillar, which is the pillar under the highest normal force (usually the center). In the case of rotational movements, with the center of rotation at the center pillar, the algorithm cannot detect any slip since no movement can be detected in the center pillar. This heuristic approach is further improved in <cit.> to account for rotational slips, detecting the deceleration of each pillar by comparing it to its own recent maximum velocity, and then it checks if other pillars are still in motion to confirm that the deceleration indicates an incipient slip. However, these methods may not be applicable when dealing with deformable or non-planar surfaces, or when only a subset of the pillars makes contact with the object. In such cases, establishing a dependable reference pillar to represent the object's movement in <cit.> becomes challenging; in <cit.>, it is difficult to determine whether the deceleration of pillars is caused by slip or by the shape of the object's surface.
In our work, we are motivated to take a learning-based approach in developing a dedicated incipient slip detection algorithm, where we propose domain adaptation techniques to enhance the robustness of our trained model, enabling it to effectively detect incipient slip for more realistic objects and contacts, overcoming the challenges outlined above.
§ MATERIALS AND METHODS
§.§ Hardware
§.§.§ Contactile sensor
Our study employed the commercial PapillArray sensor from Contactile[<https://contactile.com/>], depicted in Fig. <ref>, which is based on the concept described in <cit.>. The sensor outputs the real-time x-y-z force data experienced by each pillar at a high sampling rate of 1,000 Hz. Our training data was collected using the Dev Kit v1, while for the online evaluation of our trained model, we used the Dev Kit v2. Dev Kit v2 and Dev Kit v1 differ in size and the pillar Shore hardness.
§.§.§ Robotic gripping rig
Fig. <ref> displays the rig used in our study for the gripping task. The rig features a specialized two-finger gripper (RG2, OnRobot, Germany) with a blue adapter fixed to one of its fingers. This adapter serves to couple the Contactile PapillArray Dev Kit v2 sensor to the gripper finger. A white 3D-printed cuboid is used to extend another finger, matching the length of the finger equipped with the sensor. Moreover, a couple of ArUco markers are attached to this extended cuboid to track the gripper's pose. We replaced the original motor of the RG2 gripper with a stepper motor (MX-28, Dynamixel, US) to achieve high-frequency interruptible control of the gripper. The modified gripper was mounted on a six-axis robot arm (UR5e, Universal Robots, Denmark).
§.§ Data preparation
§.§.§ Collect slip data and annotate slip events for individual pillars
Our training dataset is sourced from <cit.>. In brief, the training data was acquired using a six-degree-of-freedom hexapod robot (H-820, Physik Instrumente, Germany) with the Contactile PapillArray Dev Kit v1 sensor mounted on top. A transparent acrylic plate is fixed above the sensor on a T-slot frame and a video camera (Logitech Streamcam, Logitech, Switzerland) is positioned above the acrylic plate to capture videos of the contact between the sensor and the plate. During the data collection, the hexapod pushes the sensor vertically against the acrylic plate and then moves it laterally to induce a slip. The horizontal movement could be a translation, a rotation, or a combination of both. A total of 200 data sequences were collected, covering a range of compression levels, hexapod movement velocities, and movement directions. The recorded videos are processed using the Matlab Computer Vision Toolbox (MathWorks, USA) to track the pillar tip positions. The tangential pillar tip velocity is then used to label the slip state (gross slip or not gross slip) of individual pillars.
§.§.§ Collect control data
When the sensor is compressed against a flat surface and moved laterally, the tangential velocity measured by each pillar will increase at first, as the sensor starts deforming, before reaching a peak velocity and subsequently decreasing its speed when the pillar stops deforming (Fig. <ref>). If a pillar stops deforming because it is undergoing incipient slip, at least one other pillar will still be deforming laterally; this is observed by an asynchronous decrease of the tangential velocity of the nine pillars (Fig. <ref> - Slip). However, if the object stops moving before any slip occurs, the tangential velocity magnitude of the nine pillars decreases almost simultaneously (Fig. <ref> - Stop).
Since the stop events display similar temporal features to slip events, we collected an additional dataset specifically focusing on stop events, consisting of a total of 28 data sequences. We label the data points in these sequences as other. By incorporating this dataset, the NN is less likely to misclassify between incipient slip and other, thereby improving the accuracy and reliability of the NN. The data collection process was similar to that of the slip events, except that the hexapod's movement was abruptly halted before any slip occurred. Further details on this process can be found in <cit.>.
§.§.§ Annotate the incipient slip
Based on the definition of incipient slip provided in Section <ref>, we annotate the incipient slip in the dataset as follows: we consider that incipient slip has occurred when at least one pillar slips with respect to the contact surface, while at least one other pillar remains stationary with respect to the contact surface. In other words, we start annotating incipient slip from the moment the first slip occurs on any pillar, and this interval continues until the time when all nine pillars have slipped. The slip label of each pillar is obtained as described in Section <ref>. It should be noted that when annotating incipient slip in the rotational data, we only consider the outer eight pillars. This is because the rotational movement is centered around the central pillar, which, by our definition, never slips (it remains at the same location on the contact area) in our dataset.
§.§.§ Refine data sequence
The sensor output exhibits variance due to noise and sporadically produces glitches that deviate significantly from the mean value, displaying sudden extreme highs or lows. To address these issues, we apply a median filter with a window size of 21 samples on the raw sensor signal, which is sampled at 1,000 Hz.
We divided the raw data sequence into non-overlapping windows, with each window containing 40 samples. This division reduced the data rate to 25 Hz. This was done because of practical limitations in the hardware and software of our system. More precisely, the maximum refresh rate of our gripper servo is ∼62 Hz, and the computation rate of our classifier is ∼40 Hz. Moreover, it is worth noting that reliable gripping does not necessarily require a high sampling frequency. Indeed, humans have a reaction time of approximately 80-120 ms (equivalent to 8.3-12.5 Hz) <cit.>, enabling us to perform most everyday gripping tasks effectively.
Finally, we only consider the x-y forces on the pillars as input in NN training, while excluding the z force. During the data collection process, when the hexapod moves tangentially to induce slip, it remains stationary in the z direction. As a result, we assume that the z force does not play a significant role in detecting incipient slip in our case. It should be acknowledged that in real-world scenarios, the normal force can provide valuable information for humans to detect slip, and it is likely to vary appreciably for different gripping objectives. Therefore, another reason for excluding the z force is to prevent the NN from incorrectly learning that the z force remains relatively stable during slip events, as occurs in our data set.
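A minimal sketch of this preprocessing chain (median filtering, non-overlapping windowing to 25 Hz, and dropping the normal force) is given below; the array layout and function name are our assumptions.

import numpy as np
from scipy.signal import medfilt

def preprocess(forces, window=40, kernel=21):
    # forces: (T, 9, 3) array of per-pillar (Fx, Fy, Fz) samples at 1 kHz
    filt = medfilt(forces, kernel_size=(kernel, 1, 1))      # median filter along time only
    n_win = filt.shape[0] // window
    windows = filt[: n_win * window].reshape(n_win, window, 9, 3)
    return windows[..., :2]                                 # keep only the tangential forces Fx, Fy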
§.§ Training data augmentation
§.§.§ Data augmentation by rotational symmetry
During the data collection process, the sensor is placed at the origin of the world coordinate frame. Its horizontal surface is parallel to the x-y plane of the world frame of reference, and the side edges align with the x-y axis directions. Hence we use a rotation transformation to augment the data; intuitively, it can be understood as rotating the initial position of the sensor around the z axis by a random angle. For each data point in a sequence, we perform the following mathematical calculations:
[ F_x'; F_y' ] = [ cos(θ) -sin(θ); sin(θ) cos(θ) ]·[ F_x; F_y ], θ∈[0,2π),
where F_x and F_y represent the force values along the original x-y axes, and F_x' and F_y' are the augmented force values after virtual rotation of the sensor by an angle, θ, sampled from a uniform distribution over [0, 2π).
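This symmetry-based augmentation amounts to the short NumPy sketch below, where the same randomly drawn rotation is applied to every sample of a sequence; the function name is illustrative.

import numpy as np

def rotate_xy(forces_xy, rng=np.random.default_rng()):
    # forces_xy: array of shape (..., 2) holding (Fx, Fy) samples of one sequence
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return forces_xy @ rot.T          # apply the same in-plane rotation to every sample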
§.§.§ Advanced data augmentation for domain adaptation
The data used in our study was collected under idealized conditions, where a hexapod robot was used to compress the sensor against a flat surface and move it laterally in a controlled manner. In this setup, the force was nearly perpendicular or parallel to the contact surface and the movement speed was nearly constant. However, in real-world robotic gripping, the conditions are expected to be quite different from this idealized setup, and the performance of a model trained on such data is expected to be poor. We identify several issues that may arise when transferring the model trained on idealized data to real-world gripping scenarios, and we propose a range of advanced data augmentation methods to address these issues below. These methods are designed to generate synthetic data that mimics the real-world variability of gripping:
* Issue: The slipping velocity in real-world robotic gripping is not constant, as it is influenced by various factors such as gravity, friction, and the shape of the object being gripped. However, during the data collection process, the hexapod induces slip at a constant velocity. Remedy: We employ random sampling to select a percentage of data points from the raw data sequence, thereby generating a new data sequence, and we keep the frequency of the new sequence at the same rate as the raw sequence (1,000 Hz). This approach can simulate velocity variations to mimic real-world gripping scenarios, as it changes the magnitude differences of some temporally adjacent data points while keeping the time interval unchanged.
* Issue: In some gripping scenarios, a portion of the sensor pillars may not be in contact with the object. For instance, this can occur when employing sensors to grip an object with a rounded surface or when gripping an object smaller than the sensor's contact area. Remedy: To simulate an unloaded pillar, we substitute a number of pillar data sequences with zero sequences. Noise is then added to make the generated sequence resemble a realistic sensor signal. The noise is derived from a normal distribution with a mean of 0.0 N and a standard deviation of 0.001 N.
* Issue: Unlike with the hexapod, the force generated by the gripper may not be perfectly perpendicular to the x-y plane of the sensor frame of reference, and the force leading to slip may not be perfectly in this plane. This can occur when the gripped object is not flat or the mechanical linkage of the gripper flexes when applying force to the object. Remedy: First, we sample nine individual pillar sequences from raw sensor sequences with different sensor compression levels and hexapod movement types, and combine them to form a new sensor sequence. Second, we scale (with a scale factor ranging from 0.2 to 2.0) the magnitude of the values of a number of pillar sequences. Lastly, we randomly permute the positions (by pillar index) of a nine-pillar sequence. Employing these techniques can encourage the NN to capture a broader and more comprehensive pattern of incipient slip (see Section <ref>), rather than only learning the limited pattern introduced by the hexapod; a condensed sketch of these augmentations is given after this list.
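The following NumPy sketch condenses the three remedies for a single (T, 9, 2) force sequence; the kept fraction of time steps and the number of zeroed pillars are illustrative choices, the scale range and noise level follow the text, and the cross-sequence mixing of pillars from different recordings is omitted for brevity.

import numpy as np

def augment_sequence(seq, rng=np.random.default_rng()):
    # seq: (T, 9, 2) array of per-pillar (Fx, Fy) samples
    # 1) non-constant slip velocity: keep a random subset of time steps at the same rate
    keep = np.sort(rng.choice(seq.shape[0], size=int(0.8 * seq.shape[0]), replace=False))
    out = seq[keep].copy()
    # 2) unloaded pillars: replace some pillar tracks with sensor-like noise
    for p in rng.choice(9, size=rng.integers(0, 4), replace=False):
        out[:, p, :] = rng.normal(0.0, 0.001, size=(out.shape[0], 2))
    # 3) imperfect loading: rescale pillar magnitudes and permute pillar positions
    out *= rng.uniform(0.2, 2.0, size=(1, 9, 1))
    return out[:, rng.permutation(9), :]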
§.§ Neural networks
The key decision making component of our incipient slip detection approach is a binary classifier. Initially, we trained a NN capable of estimating the probability of incipient slip for each time point in a sequence. Next, we set a threshold to convert the continuous probability into a binary output. To enhance the accuracy of the classifier, we used an ensemble technique that trains multiple independent classifiers concurrently and aggregates their output probabilities to produce the final decision (shown in Fig. <ref>).
§.§.§ Architecture
Fig. <ref> illustrates the process of inputting a data sequence into the NN and obtaining the corresponding slip classification. The modified data sequence, as explained in Section <ref>, is input into an encoder. Subsequently, the encoder output is passed to a specific type of RNN called a gated recurrent unit (GRU) <cit.>. In our approach, we utilize a single layer of GRU for each propagation step, and we refer to it as a GRU cell. The hidden output from the GRU cell is generated as a combination of the current input and historical information. Moreover, an estimator is included that takes the hidden layer output from the GRU cell and converts it into a probability estimation. The ground truth label of each window is determined by the label of the last sample in the window.
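A PyTorch sketch of this encoder → GRU cell → estimator pipeline is given below; the hidden size and the flattened window dimension (40 samples × 9 pillars × 2 force components) are assumptions for illustration, not the trained configuration.

import torch
import torch.nn as nn

class SlipClassifier(nn.Module):
    def __init__(self, in_dim=40 * 9 * 2, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.gru = nn.GRUCell(hidden, hidden)                  # one GRU cell per propagation step
        self.estimator = nn.Sequential(nn.Linear(hidden, 2), nn.Softmax(dim=-1))

    def forward(self, windows):
        # windows: (batch, n_windows, in_dim); returns one probability pair per window
        h = windows.new_zeros(windows.shape[0], self.gru.hidden_size)
        probs = []
        for t in range(windows.shape[1]):
            h = self.gru(self.encoder(windows[:, t]), h)       # carry history forward
            probs.append(self.estimator(h))
        return torch.stack(probs, dim=1)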
§.§.§ Training
The ensemble model consists of Z (Z=5 in our case) independently trained classifier models. During each training iteration of each classifier model, a subset comprising a proportion of λ sequences (λ=40% in our case) is randomly sampled with replacement from the entire training set and used for NN training. The final layer of the estimator utilizes a two-class softmax activation function, with its outputs interpreted as probabilities for the occurrence of incipient slip and other. Our chosen loss function is binary cross-entropy.
§.§.§ Decision making
We aggregate the output probability from each classifier model in the ensemble to convert the continuous probability into a binary prediction:
f:=1[∑_z=1^ZM_z(x=[F_(n-1)T+1,···,F_nT])/Z >P_th],
where 1[·] is an indicator function, M_z denotes the z^th classifier model in the ensemble, x denotes the input vector, and P_th denotes the probability threshold, which is 50% in our work. Z denotes the number of classifiers in the ensemble model.
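In code, this decision rule reduces to a few lines, as in the sketch below; each entry of models is assumed to return the incipient-slip probability for the current window.

import numpy as np

def incipient_slip(window, models, p_th=0.5):
    # average the incipient-slip probability over the Z ensemble members and threshold it
    p = np.mean([m(window) for m in models])
    return p > p_th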
§ EXPERIMENTS AND RESULTS
We first explicitly display our method's high success rate in detecting incipient slip, including offline and online scenarios. Then, we illustrate the practical benefits of our approach by showcasing its ability to stabilize an insecure robotic grasp in a number of practical gripping tasks.
§.§ Offline evaluation
The entire dataset is randomly split into two subsets: a training set (∼80% of the entire dataset, comprising 160 data sequences of slip event and 23 data sequences of stop event) for model training, and a test set (∼20% of the entire dataset, consisting of 40 data sequences of slip event and 5 data sequences of stop event) for model evaluation. Both subsets are expanded through the symmetry-based augmentation method described in Section <ref>, resulting in a five-fold increase in the size of the training set and test set.
Fig. <ref> displays two examples comparing the incipient slip detection results over slip and stop events. As observed, the algorithm's confidence in labeling incipient slip increases rapidly as incipient slip starts and decreases as it progresses toward gross slip. In comparison, the probability in the stop case fluctuates slightly but remains well below the threshold.
We define an incipient slip detection as successful if it occurs within a 0.3 second window preceding the true labeled time point of incipient slip (to accommodate the error of the ground truth) and prior to the occurrence of the gross slip. For the stop event, a successful estimation is defined as a classification of the entire sequence as other.
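Expressed as a small helper (all times assumed to be in seconds on a common clock), the offline success criterion reads:

def detection_successful(t_detect, t_incipient, t_gross, window=0.3):
    # success: detection inside the 0.3 s window preceding the labeled onset
    # (to accommodate ground-truth error) and before gross slip occurs
    if t_detect is None:                      # no detection at all
        return False
    return (t_incipient - window) <= t_detect < t_gross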
Fig. <ref> presents the confusion matrix, displaying the final classification results over the entire test set; our algorithm achieves an overall success rate of ∼95.6%. The results also demonstrate its effectiveness in differentiating between the slip and stop events; this indicates that our algorithm is not simply detecting the changes in the force and yank of the pillars, as mentioned earlier in Section <ref>.
Our algorithm can effectively detect incipient slip in its early stages.
In Fig. <ref>, we present the latency between the moment of incipient slip detected by the algorithm and the ground truth onset of incipient slip. It is evident that, on average, incipient slip can be detected within 10 ms of its initiation.
§.§ Online evaluation
In the online evaluation stage, we utilized the full data set for training the final deployed model. Again, to increase the amount of training data, we applied both symmetry-based (see Section <ref>) and advanced data augmentation (see Section <ref>) techniques, resulting in a five-fold increase in data amount (1140 data sequences).
The online evaluation was performed on six everyday objects, depicted in Fig. <ref>. We include objects of varying surface materials, curvatures, and hardness to ensure a broad range of conditions are represented in our results.
§.§.§ Validating incipient slip detections
We cannot easily validate incipient slip occurrences for everyday objects as we cannot independently monitor individual pillar contacts. Hence, we choose to perform the online evaluation based on the following well-founded assumptions. The incipient slip detection is considered successful if it can be detected at any time point between the time when the robot's movement begins (T_m) and the time when gross slip occurs (T_g); the criterion for determining the occurrence of gross slip has been arbitrarily defined as the occurrence of relative translational movement greater than 2 mm or relative rotational movement exceeding 2^∘ between the object and the robot's frame of reference.
To induce a slip, the gripper first grips the object with a constant force. Then the robot moves the gripper downwards towards a rigid and stationary table surface, eliciting the slip between the sensor attached to the gripper tip and the object. In each trial, the gripping force is selected from a range of 8 N to 30 N. The robot movement can be either translational, rotational or a combination of translational and rotational. The velocity (v) and acceleration (a) of the robot movement have three different levels: low (v = 4 mm.s^-1, a = 10 mm.s^-2), medium (v = 10 mm.s^-1, a = 50 mm.s^-2), and high (v = 40 mm.s^-1, a = 100 mm.s^-2). All robot movements were performed using the built-in movel function of the UR script. The tool center position and orientation are obtained using the built-in getl function of the UR robot. This function employs forward kinematics calculations based on the read joint angles.
In accordance with the offline evaluation, control trials are also conducted here for each v and a combination and movement type. The purpose is to validate that the identified behavior is indeed incipient slip, rather than an event with a similar pattern, such as the stop event mentioned above. The control data involves lifting the robot arm while maintaining a secure grip using a pre-determined grip force that is sufficient to prevent any slippage. As a result, when lifting an object, the pillars in contact undergo downward deformation due to the force of gravity; subsequently, once the object is securely held by the gripper and remains relatively motionless, these pillars will remain stationary. Here, for the sake of convenient explanation, we will also refer to this event as stop, and we label the sequence as other. To ensure a fair experiment, we add extra weight to lightweight objects to enhance their downward motion when being lifted, aiming to make the pattern of the output data sequence more like a slip event. In total, our experiment consisted of 216 trials, including 162 sequences of slip event (6 objects × 3 movements × 3 forces × 3 velocity/acceleration combinations) and 54 sequences of stop event (6 objects × 3 movements × 1 force × 3 velocity/acceleration combinations).
Fig. <ref> illustrates the final validation results. Fig. <ref> shows a confusion matrix, highlighting the high success rate (∼96.8%) of our method in detecting incipient slip and its ability to differentiate between slip and stop events. Fig. <ref> demonstrates that our algorithm can detect incipient slip almost immediately upon the initiation of the movement that induces slip, with a normalized displacement D_norm range of 0.2 - 0.4, within which the incipient slip can be detected (refer to the caption for the definition of D_norm). These results provide comprehensive validation of the effectiveness of our approach in detecting incipient slipping in real-world gripping tasks.
§.§.§ Ablation study
This study aims to showcase the effectiveness of our advanced augmentation method in bridging the domain gap between the idealized data collected with the hexapod and more realistic data encountered with the robotic gripper. To accomplish this, we employed the model training approach described in Section <ref>. However, instead of splitting the data into separate train and test sets, we trained the model using the entire dataset here, given the different objective. Subsequently, we conducted online gripping experiments, as described in Section <ref>, using this trained model. Our findings, as illustrated in Fig. <ref>, indicate that the model trained without our advanced augmentation method exhibits a notably high false positive rate in the subsequent online gripping task when compared to the results shown in Fig. <ref> where the model was trained using our advanced augmentation method. In other words, the model trained without our advanced augmentation is unable to effectively distinguish patterns between slip and stop events. As a result, it incorrectly detects incipient slip in many stop events.
§.§ Grasp stabilization after incipient slip detection
This experiment aims to show the benefit of using our incipient slip detection method in practical gripping tasks. This involves lifting the robot arm while gripping the object with a pre-determined small force to ensure that slip occurs. We applied our incipient slip detection method and adjusted the grip when incipient slip was first detected to prevent the object from slipping further. In this experiment, we simulate two common scenarios that can trigger slips. The first involves gripping an object at its center of gravity with insufficient force and lifting it, causing a translational slip between the gripper and the object. The second involves gripping an object away from its center of gravity and lifting it, where rotational slip is likely to occur. We implemented a simple grip force adaptation that responds to incipient slip detection as follows: if incipient slip is detected, the robot immediately stops, and the gripper applies a pre-determined secure force to the object. The objects used in the experiment are the same as those shown in Fig. <ref>. The experiment was conducted 36 times (6 objects × 2 scenarios (translation or rotation) × 3 repetitions). We fix ArUco markers on the objects and the gripper and use Python OpenCV to track the positions and orientations of both.
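A sketch of this corrective-action loop is given below; the robot, gripper, and detector handles are hypothetical interfaces introduced for illustration, not the actual UR or gripper API used in the experiments:

def stabilize_grasp(detector, robot, gripper, secure_force):
    # monitor the tactile stream while the arm lifts the object
    while robot.is_moving():
        window = gripper.sensor.latest_window()
        if detector(window):                  # first incipient-slip detection
            robot.stop()                      # halt the lifting motion
            gripper.set_force(secure_force)   # apply the pre-determined secure force
            return True                       # corrective action taken
    return False                              # no incipient slip detected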
We report the results in Table <ref>, which demonstrate the quick and effective detection of incipient slip using our algorithm. On average, our algorithm detects incipient slip in time to prevent the object from slipping further once the relative translation between the object and the gripper reaches 2.5 mm and the relative rotation reaches 1.9^∘. Our algorithm showcases its ability to facilitate timely corrective action, preventing object falls; a demonstration video can be seen at our project website given in the abstract.
§ DISCUSSION
Our developed algorithm enables the NN to effectively learn the incipient slip pattern from offline data and demonstrates high accuracy in both offline and online test sets. Furthermore, our algorithm enhances the security of robotic gripping.
Compared to previous related works <cit.>, our algorithm offers several advantages. Firstly, our incipient slip detection algorithm incorporates a data-driven learning-based approach, minimizing the need for extensive human involvement in investigating the complex patterns of incipient slip. Secondly, the improved robustness of our algorithm enables the NN to effectively adapt to diverse domains with various types of PapillArray sensors and robotic gripping systems, despite being trained solely on data lacking heterogeneity. Therefore, our algorithm is more practical and possesses greater potential for maximizing the utilization of valuable tactile data in real-world scenarios. Thirdly, our algorithm has the ability to distinguish between incipient slip and a closely related tactile pattern that we refer to as a stop event. Notably, previous related work <cit.> has not adequately considered or addressed the stop event; however, our investigation has revealed the importance of including stop events when developing incipient slip detection algorithms due to their similar patterns but entirely different consequences.
There are limitations to our work that need consideration. Firstly, the incipient slip detection could be improved by transitioning from a binary signal to a continuous warning signal. For instance, if incipient slip is detected in a small portion of the contact surface, the remaining area may still provide enough grip to prevent significant slippage. In such cases, the warning level of incipient slip is low and corrective actions may not be necessary. Conversely, if a significant portion of the contact surface exhibits incipient slip, the warning level should escalate and it becomes important to take appropriate corrective actions. Moreover, our current choice of force adaptation method for reacting to incipient slip falls short when compared to the state-of-the-art gripping control work <cit.>. However, it is important to note that force adjustment is not the primary focus of our research in this paper, which is focused on improving the incipient slip detection. In future work, we will develop a more sophisticated force adaptation technique that incorporates our incipient slip detection method.
§ CONCLUSION
In conclusion, this paper presents an incipient slip detection method that employs deep learning and several data augmentation techniques to improve the robustness of the trained NN. Our method is highly effective and reaches state-of-the-art performance, and it enables a single pre-trained NN model to be applied across various domains and tasks. In addition, our method has the potential to be extended to other approaches that use compliant tactile sensors.
To train the NN parameters, we use stochastic gradient descent with a momentum of 0.95 and a learning rate of 10^-3, with a batch size of 512. We also incorporate a weight decay of 10^-3 using L_2 regularization during training. The encoder NN consists of one hidden layer with 1024 units, and the output dimension is 128. The GRU cell has a hidden layer dimension of 128. The predictor network comprises two hidden layers with 256 and 128 units, respectively. To all hidden layers, we apply rectified non-linearity <cit.> and batch normalization <cit.>.
We implement our NN using PyTorch (Version 1.12.1, Meta, USA). All our experiments are conducted on a PC with an Intel 7-10875H CPU and an NVIDIA 2060 GPU. During the online evaluation stage, we utilize ROS <cit.> to facilitate communication between various components in our system.
|
http://arxiv.org/abs/2307.03965v1 | 20230708123352 | Seismic Signatures of the $^{12}$C($α$, $γ$)$^{16}$O Reaction Rate in White Dwarf Models with Overshooting | [
"Morgan T. Chidester",
"F. X. Timmes",
"Ebraheem Farag"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Morgan T. Chidester (ORCID 0000-0002-5107-8639), School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
F. X. Timmes (ORCID 0000-0002-0474-159X), School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Ebraheem Farag (ORCID 0000-0002-5794-4286), School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Corresponding author: Morgan T. Chidester, [email protected]
We consider the combined effects that overshooting and the ^12C(α, γ)^16O reaction rate have on variable white dwarf stellar models. We find that carbon-oxygen white dwarf models continue to yield pulsation signatures of the current experimental ^12C(α, γ)^16O reaction rate probability distribution function when overshooting is included in the evolution. These signatures hold because the resonating mantle region, encompassing ≃ 0.2 M_⊙ in a typical ≃ 0.6 M_⊙ white dwarf model, still undergoes radiative helium burning during the evolution to a white dwarf. Our specific models show two potential low-order adiabatic g-modes, g_2 and g_6, that signal the reaction rate probability distribution function. Both g-mode signatures induce average relative period shifts of Δ P/P = 0.44 % and Δ P/P = 1.33% for g_2 and g_6 respectively. We find that g_6 is a trapped mode, and the g_2 period signature is inversely proportional to the reaction rate. The g_6 period signature generally separates the slower and faster reaction rates, and has a maximum relative period shift of Δ P/P = 3.45%. We conclude that low-order g-mode periods from carbon-oxygen white dwarfs may still serve as viable probes for the ^12C(α, γ)^16O reaction rate probability distribution function when overshooting is included in the evolution.
§ INTRODUCTION
Helium burning is primarily the fusion of helium into carbon by the triple-alpha (3α) process.
All stars born with more than ≃ 0.5 M_⊙ go through this stage of energy production as they evolve beyond the main-sequence <cit.>.
Helium burning also plays a key role in transients such as
Type I X-ray bursts <cit.>,
Type Ia supernovae <cit.>, and
He-rich subdwarf O stars <cit.>.
Helium burning also impacts several classes of distribution functions,
such as the black hole mass distribution function <cit.>
including any mass gaps based on the pair-instability mechanism in the evolution of
massive stars <cit.>.
He burning is triggered by the 3α process releasing 7.5 MeV in fusion energy and producing ^12C <cit.>.
This is a unique process, setting stringent conditions for helium ignition.
The 3α process is followed by the α capture reaction ^12C(α, γ)^16O,
converting the ^12C into ^16O <cit.>.
These two isotopes are the principal products of He burning.
In addition, nearly all of a star's initial CNO abundances in the stellar interior are converted to ^22Ne at the onset of He burning <cit.>.
This marks the first time in a star's life where the core becomes neutron rich. We follow the convention that ^22Ne is the “metallicity” of a carbon-oxygen (CO) white dwarf (WD).
The interiors of CO WDs are, in principle, the best probe of the ashes of He burning.
A goal of WD seismology is to characterize the chemical profiles of principal products of He burning
<cit.>
and the chemical profile of the trace ^22Ne metallicity <cit.>.
Furthermore, regions within a CO WD model that burn helium radiatively during its prior evolution can offer potential constraints on the He burning nuclear reaction rates.
For example, <cit.> found that certain trapped adiabatic g-modes in WD models
may provide a pulsation signature that constrains the experimental reaction rate probability distribution function.
These signature g-modes were shown to resonate
with the region of the CO WD model that underwent radiative He burning during its previous evolution. The innermost boundary of this resonant cavity
corresponds to the molecular weight gradient at O→C chemical transition, and the outermost boundary to the molecular weight C→He chemical transition.
The resonating region encompasses ≃ 0.2 M_⊙ of a typical ≃ 0.6 M_⊙ WD model.
C22 cautioned that the chemical structure and resulting pulsation spectrum
are sensitive to
the width of the O→C transition <cit.>,
the experimental 3α reaction rate probability distribution functions <cit.>,
convective boundary mixing processes during core He depletion <cit.>, and
the number of thermal pulses during the Asymptotic Giant Branch (AGB) phase of evolution <cit.>.
Modeling convective boundary mixing processes at the convective-radiative interface during core He burning in low- and intermediate-mass stellar models is currently uncertain
<cit.>.
Convective overshoot occurs because the convective boundary is not the location where convective velocities are zero,
but the location where the buoyant acceleration of the fluid is zero.
An order–of-magnitude expression Δ x = u Δ t provides an estimate for how far convective motions overshoot <cit.>.
Here Δ x is the overshoot distance, u is the convective velocity, and
Δ t ≃ 1/N where N is the frequency
in the stable region. There is disagreement on how to calculate Δ x, but this estimate
broadly shows Δ x ≪ H_P in stellar environments, where H_P is the pressure scale height.
The exponential overshoot parameterization <cit.> is frequently implemented in 1D models to describe this convective boundary mixing process, treating Δ x as a free parameter.
The values of Δ x
needed to match the gravity modes found in Slowly Pulsating B-type stars <cit.> suggest Δ x / H_P ≃ 0.1, which is larger than 3D hydrodynamical simulations of low Mach number flows at stable interfaces indicate <cit.>.
The injection of fresh He into the convective core enhances the rate of energy production by the ^12C(α,γ)^16O reaction rate, increases the central mass fraction <cit.>, and modifies the lifetime through this phase of evolution.
The resulting increase in the radiative gradient can also lead to rapid growth in the convective He core boundary (a “breathing pulse”).
A consensus on breathing pulses being physical or numerical has not yet been reached <cit.>.
C22 found a pulsation signature of the reaction rate probability distribution function using evolutionary models that purposely excluded overshooting.
This article is novel in analyzing whether or not pulsation signals of the reaction rate probability distribution function
still exist when overshooting at the inner convective-radiative interface during core He burning (CHeB) is included in the models' evolution history. Here, the inner convective-radiative interface is the transition from the convective core to the exterior radiative layer.
Section <ref> describes our models,
<ref> analyzes our models,
<ref> discusses our results,
and we summarize our findings in <ref>.
Appendix A lists the microphysics used, and
Appendix B discusses variations with the number of isotopes in the reaction network and with the temporal resolution of our models.
§ STELLAR EVOLUTIONARY MODELS
We define the term “model” to mean an evolutionary sequence that begins at the pre-main sequence, progresses through CHeB, and terminates as a cold WD. We define the term “snapshot” to mean a specific instance in time or phase of evolution within a model, and the term “set” to mean a suite of models or snapshots that have identical input physics except for the value of the reaction rate.
We use MESA version r15140 <cit.> to build 2.1 M_⊙,
Z = 0.0151 metallicity, Y = 0.266 He mass fraction, nonrotating models at the pre-main sequence.
We adopt the AGSS09 <cit.> abundances and use a 23 isotope nuclear reaction network with ^22Ne being the heaviest isotope[A comparison to a 30 isotope network is given in Appendix B.].
Our models employ MESA's Henyey mixing-length theory (MLT) option for convection, with an MLT parameter of α = 1.5. This is consistent with the value used in C22.
We use the Ledoux criterion, and the predictive mixing scheme.
Additional details of the microphysics are listed in Appendix A.
As in C22, we span the current experimental ^12C(α, γ)^16O reaction rate probability distribution function <cit.> from σ=-3.0 to σ=+3.0 in 0.5σ steps, giving 13 σ_i reaction rates; one model is run for each σ_i, i.e., each model is prescribed one such σ_i reaction rate value for its evolution.
We calculate one set of models without overshooting (NOV), and a second set with overshooting (OV) at the inner radiative-convective interface during the CHeB phase.
Hence, each evolutionary model differs only in its σ_i reaction rate, and NOV or OV mixing prescription. This yields 26 individual stellar evolutionary models; 13 for the NOV set and 13 for the OV set. For i=(-3.0, -2.5,...,+2.5, +3.0), we use σ_i and σ=i interchangeably to reference a given σ from the reaction rate probability distribution function.
After CHeB, the models evolve until log(L/L_⊙)=3.0, prior to the first thermal pulse on the AGB. At this snapshot, we interrupt the evolution of each model. All models at this snapshot thus have a C→He transition at nearly the same mass location. We use this snapshot to construct H-dominated atmosphere (DA) WDs by removing the H envelopes until log(M_H/M_*)<-3.5.
The resulting composition profile structures are used to build 0.56 M_⊙ ab-initio WD models with wd_builder, as done in C22. These WD models evolve until T_eff = 10,000 K. We construct the WDs from the post-CHeB log(L/L_⊙)=3.0 snapshot to isolate the sensitivity to overshooting at the convective-radiative interface, as discussed further in the following section.
We utilized version 6.0.1 of the code <cit.> to compute the adiabatic pulsations of our WD models throughout their respective cooling tracks (from ∼ 50,000 K to 10,000 K). We tracked the pulsations for the entire WD cooling track to observe the evolution of the adiabatic modes. Further, this was the most convenient way to auto-implement pulsation calculations for multiple models (i.e. we did not have to post-process the pulsation calculations over a specified range for each of the 26 models). We emphasize that the computed pulsations are adiabatic, and that the observed instability strip for DAV WDs spans only from ∼ 13,000 K to ∼ 10,000 K. The inlist parameters were set to search for modes of harmonic degrees ℓ=1,2 and radial orders n≤25, where our models were assumed to be non-rotating, hence only m=0 azimuthal orders were present. For the adiabatic mode analysis, we employed the fourth-order Gauss-Legendre collocation difference equation scheme <cit.>.
Details of the models and oscillation parameters are in the files to reproduce our results at doi:10.5281/zenodo.8126450 (https://doi.org/10.5281/zenodo.8126450).
§.§ Core Overshooting prescription during the CHeB
During the CHeB phase, we use the following core overshooting parameters in the inlist for the OV set:
= 1d-3
= `exponential'
= `any'
= `core'
= 0.016
= 0.008
= 0.01
= 0.4
Details of the specific parameters are described in the documentation[<https://docs.mesastar.org/en/latest/>].
We choose the conventional <cit.> value for this parameter.
This parameter sets the fractional distance of H_p to overshoot at the ∇_ad=∇_rad interface, for the order of magnitude estimate given in the introduction, Δ x = f_0· H_p.
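For reference, the exponential scheme prescribes a diffusion coefficient that decays with distance beyond the convective boundary region, commonly written as D(z) = D_0 exp(-2z / (f H_P)); a minimal sketch of that functional form, with argument names chosen only for illustration (the exact inlist parameter definitions are given in the MESA documentation cited above), is:

import numpy as np

def d_overshoot(z, d0, f, h_p):
    # d0  : convective diffusion coefficient taken near the boundary [cm^2 s^-1]
    # z   : distance beyond the point where d0 is evaluated [cm]
    # f   : overshoot parameter, a fraction of the pressure scale height
    # h_p : local pressure scale height [cm]
    return d0 * np.exp(-2.0 * z / (f * h_p))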
The trapped mode seismic signatures found in C22 were resonating most with the region that underwent radiative He burning, defined as R2. Their inner boundary of R2 is near the molecular weight gradient at the
O→C transition (the “O drop") and their outer boundary is near the C→He transition. Mode trapping is sensitive to the location of both of these boundaries because they define the width of the resonant cavity.
One approach to analyzing the sensitivity
of the R2 trapped mode signatures is to fix one boundary and vary the other boundary. We fix the R2 outer boundary by excluding variations imposed from the thermal pulse history, hence the interruption at the post-CHeB log(L/L_⊙)=3.0 snapshot for all models. The phenomena that happens during the AGB phase is another source of model uncertainty. <cit.> found that early post-AGB pulsations can cause rapid growth of an instability that drives a super-wind which can shed much of the outer layers in a few years. Further, their 2.0 , Z=0.02 model shows a dynamic evolutionary track, especially during the AGB, that is similar to the models in this article. <cit.> summarizes that while the preliminary results show promise on future AGB and post-AGB phenomenon, there are currently more questions than answers. We therefore leave the thermal pulse history and the particular envelope ejection phenomena on the AGB to future studies, and freeze the outermost R2 boundary before the first thermal pulse occurs. In this vein, we isolate the sensitivity of the R2 region to its inner boundary, and specifically address how core overshooting influences the pulsation signatures for the reaction rate probability distribution function.
We end this section by stating we are not advocating for a specific evolutionary model or overshooting scheme.
Rather, we are exploring one approach to quantifying the coupled uncertainty between the reaction rate probability distribution function and a common overshooting model.
§ RESULTS
§.§ Evolution of Composition Profiles
Figure <ref> shows the mass fraction profiles for both sets at three evolutionary snapshots. The top row shows the mass fraction profiles for the NOV set and the bottom row shows the mass fraction profiles for the OV set. The leftmost column
shows the mass fraction profiles at the post-CHeB log(L/L_⊙)>3.0 snapshot. At this point, our models have not lost much mass and are all ∼2.1 M_⊙. The middle column shows the mass fraction profiles after removing the H envelopes until log(M_H/M_*)<-3.5. This snapshot shows the initial hot WD profiles, after completing one model step in wd_builder. The profiles shift slightly in mass location, but the overall composition structure only differs from the left panel in the thickness of the H envelope. The right column is the final snapshot of the mass fraction profiles, when the models reach T_eff = 10,000 K. Diffusion was included on the WD cooling track and leads to the smoothness of the profiles in this column.
Figure <ref> accentuates the differences between the NOV (top) and OV (bottom) mass fraction profiles for the final WD structures (right column of Figure <ref>). Here, we show the abundance in mass fraction with respect to fractional radius r/R. We partition the WDs' composition profiles into four regions: R1, R2, R3, and R4. This is similar to that done in C22. The regions are defined to estimate trapping (resonant) zones. Boundaries for mode trapping are typically near composition transitions because they generally have large mean molecular weight gradients. This may lead to partial reflections for a resonant mode(s), “trapping" it within the local cavity <cit.>. The Ledoux B profile (henceforth B) captures composition gradients and can estimate trapping regions. We use B as our primary guide to define the region boundaries for a given model. The R1-R2 boundary is set at the first local maximum in B that occurs after reaching peak ^16O in a given model's chemical profile. The R2-R3 boundary is set at the second local maximum in B. The R3-R4 boundary is set at the location where X(^1H)>X(^4He).
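A sketch of this boundary-finding rule, operating on center-to-surface profile arrays, is given below; the function and array names are illustrative and this is not necessarily the exact procedure used to produce the figures:

import numpy as np
from scipy.signal import argrelextrema

def region_boundaries(b_ledoux, x_o16, x_h1, x_he4):
    i_peak_o = np.argmax(x_o16)                        # location of peak 16O
    maxima = argrelextrema(b_ledoux, np.greater)[0]    # local maxima of Ledoux B
    maxima = maxima[maxima > i_peak_o]
    i_r1_r2, i_r2_r3 = maxima[0], maxima[1]            # first and second maxima
    i_r3_r4 = np.argmax(x_h1 > x_he4)                  # first zone with X(1H) > X(4He)
    return i_r1_r2, i_r2_r3, i_r3_r4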
In both NOV and OV sets, σ_i impacts the magnitude of the ^16O and ^12C profiles in R1. Core overshooting changes the structure of these profiles, especially at r/R ∼ 0.37 where the flatness of the profiles becomes disrupted. This is due to additional He fuel ingested during CHeB, from overshooting and/or convection. The fuel ingestion from overshooting and convection is a coupled effect and specific to each σ_i model. After r/R ∼ 0.37, there is some overlap in the profiles that perturbs the proportional trend with σ_i.
For both sets, the first group of vertical blue lines marks the R1-R2 boundary, with each line representing a given σ_i. The NOV set shows a steep composition gradient at the R1-R2 boundary, and the R1-R2 location is nearly the same for all σ_i. There is greater variance in the R1-R2 location for the OV set. Further, core overshooting has softened the ^16O and ^12C gradients, and the disruption of the profiles' regularity with σ_i continues into the start of the R2 region. At r/R∼0.6, the proportionality of σ_i to the ^16O and ^12C profiles is restored.
By design from stopping at the first thermal pulse, the R3 and R4 regions are almost identical between the NOV and OV sets. These regions are least affected from mixing processes in the core (e.g. overshooting).
In Figures <ref> and <ref>, the OV chemical profiles show a non-constant structure from overshooting during CHeB in the O dominated central core (below ≃0.4 ). While element diffusion is included during the white dwarf cooling phase, these chemical profiles may be further flattened by mixing processes not considered in this study such as time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing, or first-order phase separation of the CO mixture <cit.>.
§.§ Evolutionary differences after the main-sequence
How do the final WD profiles for the NOV and OV sets in Figure <ref> relate to their respective CHeB evolution histories? Figure <ref> shows the Kippenhahn diagrams for the σ = 0.0 models for NOV (left) and OV (right). This figure shows the CHeB phase until the log(L/L_⊙)>3.0 termination point, spanning ≃ 0.93–1.10 Gyr. During this period the total mass of our models is ≃ 2.1 M_⊙, but we show only the innermost ≃ 0.65 M_⊙ to capture the evolution history that ultimately defines the CO WDs.
There are immediate differences between the NOV and OV CHeB evolution histories for the σ=0.0 models. These differences are similar for any given σ_i models, and a link to an interactive figure is provided in the online journal to see each rate's OV vs. NOV comparison in greater detail.
For the NOV set, we see gradual growth of the convective core throughout the CHeB phase; the noted central mass fraction isotopes smoothly deplete/grow to reach their final mass fractions; the convective cores have no apparent splitting during the CHeB phase. Further, there is a pure radiative zone throughout the CHeB history. In comparison, the OV set shows convective cores that ebb and flow in their extent, in a saw-tooth like manner; overshooting extends past the inner convective core in a fairly consistent mass length; the OV central mass fraction isotopes ebb and flow symmetrically with the mixing phenomena at any given time.
We also see splittings of the convective core in the OV set. These splittings were not observed in any of the NOV models during the CHeB phase. We presume they are a result of overshoot inclusion. This introduces “pollution" to the pureness of the radiative burning zone, which becomes the R2 region of the WD. The pollution is seen by observing that some of the split-convection zone surpasses the log(L/L_⊙)>3.0 R2 inner edge boundary. This boundary becomes the inner edge of R2 in the cool WDs. The amount of convective pollution within the OV set is minor for σ_0.0, but varies with σ_i.
Figure <ref> qualifies R2 as “Mostly Radiative" for the NOV set due to localized, short-lived, subtle convective occurrences between ≃ 0.30–0.35 M_⊙ near core He depletion. Composition profiles are less sensitive to mixing after CHeB is complete. Any convective pollution from these brief convective periods in the NOV set is insignificant compared to the convective pollution introduced in the OV set.
For both sets, nuclear burning primarily takes place within the convective core. Both sets also show similar burning regions in the mantle outside the core, in the radiative zone. Near the end of core He depletion, nuclear burning in the core extends past the convective and overshooting core regions in the OV set, and burns into the radiative zone. This is not seen in the NOV set.
§.§ WD Adiabatic Pulsation Analysis
How do these evolutionary and WD structural differences impact the WD reaction rate pulsation signatures? We first stress the importance of the NOV models' R2 pure radiative zone during the CHeB. The trapped mode σ_i signature found in C22 resonates the most with this region.
We want to determine if this signature, or any other σ_i pulsation signature, exists when overshooting is considered at the inner R2 boundary during CHeB. First we compare the NOV WD pulsation signatures in this work to those in C22.
§.§ NOV set vs. C22
In this section we briefly describe the main differences between the NOV and C22 models. The models in C22 used a 30 isotope chemical network compared to the 23 isotope network used here. See Appendix B for a comparison. Also, the temporal resolution was greater in C22, especially through CHeB. The most important difference in the NOV models is that we terminated the evolution prior to the first thermal pulse; the models in C22 continued the evolution through the thermal pulse phase of evolution. The overall composition structure of the R1 and R2 regions in our NOV models is quite similar to that in C22.
The NOV set of models in this work found two WD g-mode signals for σ_i rather than one. This is shown in the top two panels of Figure <ref>. Both panels show snapshots of the percent period differences as a function of σ_i, at T_eff = 11,500 K (bright green) and T_eff = 10,000 K (blue) respectively. The y-axis label defines the period differences as (P_σ_0-P_σ_i)/P_σ_0. That is, they are normalized to the pulsation periods of the σ=0 NOV model. The first panel is the signal from g_2 and the second is the signal from g_6. In C22,
the g-mode signature was a trapped mode. Trapped modes are identified from local minima in the kinetic energy diagram <cit.>. The NOV kinetic energy diagrams for all σ_i at these snapshots are shown in the bottom left and right panels of Figure <ref>, following Equation 2 in C22
<cit.>. The figure caption explains the coloring for σ_i. At T_eff = 11,500 K (bottom left panel), the first apparent trapped mode occurs at g_6 for all σ_i, with the exception of σ=0.5, which has its first local minimum of E_kin at g_5. By T_eff = 10,000 K (bottom right panel), all σ_i have the first local minimum in E_kin at g_6, including σ=0.5. This is important as g_6 is one of our signature modes for σ_i. These findings are in overall agreement with C22.
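Identifying trapped modes from a kinetic energy diagram reduces to flagging local minima of E_kin versus radial order; a minimal sketch (array inputs assumed, not the analysis code used here) is:

import numpy as np

def trapped_modes(radial_order, log_e_kin):
    n = np.asarray(radial_order)
    e = np.asarray(log_e_kin)
    # a mode is flagged as trapped when its kinetic energy is a local minimum
    idx = [i for i in range(1, len(e) - 1) if e[i] < e[i - 1] and e[i] < e[i + 1]]
    return n[idx]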
The trapped g_6 mode signature is not linear with σ_i, but overall shows σ_i<0 to have longer periods than σ=0.0, and σ_i>0 to have shorter periods than σ=0.0.
The R2 contribution to the g_6 period in our NOV models was ∼ 25%. Other regions equally contributed between ∼ 20-30%, meaning that the trapped mode from our NOV set is more equitably trapped among the four regions. Thus, its credibility from R2 isn't as strong as in C22.
Nonetheless, it is not a negligible contribution and can still serve as a viable probe for σ_i.
Our other g-mode signal, g_2, does not appear to be trapped by definition (see other highlighted mode in bottom of Figure <ref>). However, the g_2 period differences are directly proportional to σ_i (first panel of Figure <ref>). This suggests that g_2 is likely distinguishing CO features in the inner regions better than other g-modes. The additional g_2 signal
was either recovered or contrived as a consequence of excluding the thermal pulse history in the evolution. This was the only procedural difference between our models and those in C22.
The direct impact of this procedural difference is expressed by the nearly uniform composition profiles after the C→He transition (see Figure <ref>).
C22 showed variations in these profiles that stemmed from variations in the thermal pulse histories. Eliminating such chemical variations near the R2-R3 interface can placate the g-modes' sensitivity to the R3 and R4 regions, especially for low-order g-modes such as g_2. Figure 9 in
C22 shows g_2 distinguishes σ_i in their thinner atmosphere sequence of models. Thinner atmospheres may also lessen sensitivities to outer regions, allowing lower-order g-modes like g_2 to probe deeper into the CO interior. We therefore suspect g_2 is a viable probe for σ_i if there are uniform composition profiles at the R2-R3 boundary, and/or thinner WD atmosphere models.
We conclude that our NOV pulsation signature results are overall consistent with C22;
we find certain low-order adiabatic WD g-modes which probe the reaction rate probability distribution function. With our two signature modes established, we now discuss the impact that overshoot inclusion has on these pulsation signatures.
§.§ Detailed Analysis of Differences
We first show the pulsation periods as a function of surface temperature for all σ_i models in Figure <ref>. Black dots mark the NOV periods and grey dots mark the OV periods. G-modes with radial orders n=1-10 are annotated, all for ℓ = 1. Figure <ref> shows that there are differences in the periods between the NOV and OV sets, but there is no global systematic offset; the differences between the OV and NOV periods for any given g-mode is random. This is the case even when σ_i is constant. We find that g_6 shows the largest spread in the periods of the models. Further, the kinetic energy diagrams for all models show that g_6 was a trapped mode by =10,000 K for every model, regardless of the σ_i, NOV/OV prescription. Since g_6 is one of the signals for σ_i in the NOV models, we point out this feature in Figure <ref>. We will touch on the cause of the larger spread later, but now focus our attention on the detailed pulsation properties of the signature g_2 and g_6 modes.
Figure <ref> shows, from top to bottom, the mass fraction profiles, B, and the g_6 and g_2 mode weight functions ζ for the final WDs at =10,000 K. The left and right columns are the NOV and OV results respectively. Here, we show the comparison for σ=0.0, but an interactive figure link is provided in the online article to compare these properties for any σ_i. For all σ_i, NOV/OV comparisons, the dotted vertical lines mark the region boundary locations in each panel. This is useful to compare where the boundary locations are across multiple profile properties. For instance, the R1-R2 boundary marks the C→O transition region, the first most prominent peak in B, and the first peak-like features in g_6 ζ and g_2 ζ in the NOV case. Comparing the OV column to the NOV column, we see the global impacts from overshooting. Overall, prominent features in the NOV set are lessened in magnitude in the OV set. The C→O transition is more gradual, lessening the composition gradient at the defined boundary. This remarkably impacts the shape of B. The first prominent peak after max(O) is much less in magnitude for all σ_i, and is not the only outstanding peak near the boundary. There are now multiple, smaller peaks in B and the g_6 ζ near the R1-R2 boundary as opposed to one.
There are slight deviations between NOV and OV in these profiles for the R3 and R4 regions of the WD, but the R1→R2 region in these profiles was affected most.
The g_6 ζ and g_2 ζ panels in Figure <ref> note the weight percentages per region in the WD. This tells each region's contribution to the overall mode period (frequency). An interesting result for all σ_i is that both the g_2 and g_6 modes decrease the amount of weight in R1 when overshoot is included, and increase the amount of weight in R2. There is also a slight decrease in the weight of R3 for g_2 for all σ_i when overshoot is included. These results are important. The R2 region is the most reliable region in terms of extracting the σ_i rate signature. When overshoot is included, the R2 contribution to the overall pulsation modes in g_2 and g_6 are accentuated, implying that these modes more reliably distinguish σ_i than the NOV set. A quantitative analysis of each region's weight percentage contribution per σ_i is given for both sets in Table <ref> and Table <ref> for g_2 and g_6 respectively. Overall, Table <ref> shows that R2 and R3 are the most heavily weighted regions for g_2's period. G_6 has more equitable weight dispersed across regions, but the combined weight of R1 and R2 accounts for ∼ 50 % of the g_6 period for any given model. As identified in Figure <ref> and Figure <ref>, R1 and R2 are the most impacted regions in this study. A g-mode with about half its weight from those regions may pick up the detailed differences more so than modes weighted more in outer regions. This may explain why Figure <ref> shows a larger spread in the g_6 periods as this g-mode is likely picking up the R1 and R2 contributions to its period better than other g-modes.
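The percentages quoted in the two tables that follow amount to integrating the weight function over each region and normalizing by the total; a schematic version of that bookkeeping, with illustrative argument names, is:

import numpy as np

def region_weight_percentages(r, zeta, boundaries):
    # r, zeta     : radius grid and weight function of a single g-mode
    # boundaries  : radii of the R1-R2, R2-R3, and R3-R4 boundaries
    total = np.trapz(zeta, r)
    edges = [r[0], *boundaries, r[-1]]
    percents = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r <= hi)
        percents.append(100.0 * np.trapz(zeta[mask], r[mask]) / total)
    return percents                      # [R1, R2, R3, R4] in percent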
Table: g_2 Weight Function Percentages Per WD Region
σ_i   R1 NOV   R1 OV   R2 NOV   R2 OV   R3 NOV   R3 OV   R4 NOV   R4 OV
-3.0 0.91 0.75 40.6 41.3 57.0 56.4 1.47 1.47
-2.5 1.14 0.99 40.2 44.2 57.2 52.9 1.43 1.94
-2.0 1.05 0.52 40.2 41.1 57.2 56.9 1.54 1.53
-1.5 1.18 0.53 39.5 41.7 57.9 56.2 1.50 1.50
-1.0 1.16 0.27 40.4 41.5 56.9 56.8 1.48 1.46
-0.5 1.15 0.18 38.8 42.1 58.6 56.3 1.43 1.49
0.0 1.25 0.38 40.6 42.0 56.6 56.1 1.52 1.47
0.5 1.44 0.49 40.8 41.9 56.2 56.2 1.52 1.47
1.0 1.28 0.31 40.4 41.4 56.9 56.7 1.49 1.58
1.5 1.32 0.28 39.9 41.4 57.2 56.8 1.50 1.51
2.0 1.35 0.19 39.4 40.8 57.8 57.5 1.50 1.49
2.5 1.25 0.42 38.3 41.6 58.9 56.6 1.47 1.45
3.0 1.39 2.06 40.2 39.6 56.9 56.8 1.59 1.52
Table: g_6 Weight Function Percentages Per WD Region
σ_i   R1 NOV   R1 OV   R2 NOV   R2 OV   R3 NOV   R3 OV   R4 NOV   R4 OV
-3.0 25.5 20.1 25.6 32.4 21.1 19.8 27.8 27.8
-2.5 33.1 19.1 29.5 33.5 13.1 20.2 24.2 24.2
-2.0 32.3 16.6 30.8 36.3 13.9 19.7 23.0 23.0
-1.5 33.5 17.3 29.6 39.1 12.6 17.3 24.4 24.4
-1.0 33.8 13.4 30.0 43.1 12.9 17.4 23.3 23.3
-0.5 33.5 11.7 29.8 47.5 12.8 14.9 23.9 23.9
0.0 33.2 15.4 28.9 42.8 12.0 15.5 25.9 25.9
0.5 26.6 16.4 22.5 41.0 13.8 14.0 37.1 37.1
1.0 31.2 14.1 27.1 43.8 12.4 16.1 29.3 29.3
1.5 32.2 13.7 27.4 46.7 12.2 14.7 28.3 28.3
2.0 25.5 11.7 23.0 48.1 14.1 14.3 37.3 37.3
2.5 30.9 14.2 28.0 42.5 12.5 13.8 28.6 28.6
3.0 30.1 32.0 25.5 26.2 12.4 13.8 32.0 32.0
When an integer multiple q of the local radial wavelength λ_r for a given g-mode nearly matches the width of a certain region(s) in a star, the g-mode resonates with that region(s). Figure <ref> shows q·λ_r (R_⊙) as a function of radius R (R_⊙) for the g_2 and g_6 modes. The NOV set does not show any particularly close matches for any region. But the closest matches to the R2 width were the λ_r curves of g_2, q=1, and g_6, q=2. Further, the g_2, q=2 and g_6, q=3 modes were best at resonating with R3. Larger q values may show stronger resonance with R4. The resonance with R2 is enhanced in the OV set. The g_2, q=1 and g_6, q=2 λ_r curves match much more closely to the R2 width in the OV set. This implies that overshoot has enhanced the g-mode resonance for our signature modes in the region that was constructed mainly from radiative burning (Figure <ref>). We also see stronger resonance within the R1 region with the g_2, q=1 λ_r curve.
Will the differences between the NOV and OV sets in Figure <ref> impact the WD σ_i pulsation signatures shown in Figure <ref>? Figure <ref> shows the resulting relative period percent differences, as a function of σ_i at T_eff = 11,500 K (bright green) and T_eff = 10,000 K (blue). The period differences are negative for σ_i with longer periods than the σ=0 model, and are positive for σ_i with shorter periods than the σ=0 model for the given NOV or OV set. The left of this figure shows the period differences for g_2, and the right shows the period differences for g_6. The NOV set is indicated by the dotted lines and the OV set is the solid lines.
Looking at g_2, the period differences between NOV and OV at T_eff = 11,500 K are minimal; both sets show a trend of decreasing period with increasing σ_i. At T_eff = 10,000 K, the OV set shows an overall decrease in the percent differences, and a slightly greater variation in the overall σ_i vs. g_2 period difference shape. However, at both temperatures, the same pattern of the g_2 period decreasing with increasing σ_i is sustained with overshoot inclusion.
Further, the magnitude of percent differences, ranging from ≃ -1.5 to +1.0, is within the detectable threshold <cit.>.
The OV set shows greater deviation from the NOV line of period percent differences in g_6 more so than g_2. This is most likely because g_6 is more sensitive to changes from R1 than g_2. Nonetheless, despite the σ_-0.5 and σ_+1.0 outliers, the overall trend remains: σ_i<0 generally have longer periods than σ_0 and σ_i>0 generally have shorter periods than σ_0. Once again, the magnitude of the relative period percent differences surpasses the observable threshold.
An interesting note is that for both g_2 and g_6 signals, the percent differences change more in the NOV set as the models cool from T_eff = 11,500 K to 10,000 K than in the OV set. The OV set showed nearly the same period differences at both temperatures.
§ DISCUSSION
C22 found pulsation signature(s) for the experimental reaction rate probability distribution function. They describe four sensitivities that may impact this result: width of the O→C transition, mixing during CHeB, thermal pulse history on the AGB, and the 3α reaction rate.
This work investigated the impact that overshoot inclusion had on the reaction rate pulsation signature(s). Doing so, we address the width of the O→C transition and mixing during CHeB. Further, by ignoring the thermal pulse history in our models, we also address the sensitivity to the number of thermal pulses, albeit, the trivial case when the number of thermal pulses is zero. In the following paragraphs, we discuss how these three sensitivities impacted our results. We further caution how our results could be impacted from further sensitivity investigations.
Including overshooting overall increased the width of the O→C transition for all σ_i cool WDs. This lessened the sharp peak in B at the O→C transition, and decreased the peak in g_6 ζ at the O→C transition. While the transition peak was lessened and dispersed into R2, widening the O→C transition shows an enhancement of both the weight contribution to the R2 region for g_2 and g_6, and the R2 resonance with λ_r for g_2 and g_6. The widening of the O→C transition was from the combined effects of overshoot inclusion and the σ_i prescription. We conclude that widening the O→C transition imposes differences in B, ζ, and the pulsation periods. Despite these changes, we still find the g_2 and g_6 relative period differences in the NOV and OV sets to distinguish the reaction rate probability distribution function. Namely, the pattern of decreasing period with increasing σ_i persisted in both NOV and OV sets. By itself, the inclusion of overshooting does not destroy the seismic signatures of the reaction rate in our WD models – which was the primary question of this study.
We caution that increasing (decreasing) the width of the O→C transition in CO WD models could potentially yield different results. Our CO WD models were informed from their evolution history, with the stated model parameters. Thus, an increase (decrease) of the width of the O→C transition may come from choosing different mixing processes, prescriptions and parameters, such as for convection and overshooting. A change in the width of the O→C transition may also come from mixing processes not considered in this study such as
time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing,
or first-order phase separations of the CO mixture <cit.>.
Ignoring the thermal pulse history gave an additional low-order adiabatic g-mode signature for σ_i, namely the g_2 signal. This signal was not found in C22, where the thermal pulse history was included. Future studies on the thermal pulse phase of evolution with different temporal and spatial resolutions are needed to determine the sustainability of the g_2 signal as a probe for σ_i.
Concurrently, future studies could also explore the interaction, if any, between the thermal pulses and overshooting during CHeB on the chemical profiles.
The CO cores of WDs are the result of the competition between 3α and ^12C(α, γ)^16O during CHeB. An experimental 3α reaction rate probability distribution function, similar to the existing one for ^12C(α, γ)^16O
<cit.>, does not yet exist to our knowledge, although a probability distribution function could be constructed using the STARLIB reaction rate library <cit.>.
Future studies involving both reaction rate probability distribution functions could probe properties of DAV WD models in the 3α rate - ^12C(α, γ)^16O rate plane. For example, the 3α reaction rate is likely to slowly modulate the central ^16O mass fraction at any ^12C(α, γ)^16O reaction rate because 3α controls the production of ^12C. The ^12C(α, γ)^16O reaction rate will likely modulate the central ^16O mass fraction more strongly at any 3α reaction rate. We speculate that the radiative region R2 will exist in all such models. We also suspect that all such models, whether terminated at the first thermal pulse or evolved through the thermal pulse phase, will show a trapped mode, with substantial trapping from R2, that best probes the ^12C(α, γ)^16O burning reaction rates (i.e. g_6 in this work, and see Figure 9 in C22). We caution that the relative period shifts we find in this work from considering the ^12C(α, γ)^16O probability distribution function and overshooting may change when a 3α reaction rate probability distribution function is also considered.
<cit.> found that including overshooting impacted ensuing WD pulsations by ∼ 2-5 s.
Their results were independent of their ^12C(α, γ)^16O reaction rate uncertainty evaluation. We combined the effects of overshooting and the reaction rate sensitivities in our pulsation analysis, and likewise find period differences of similar magnitudes. Our reaction rate analysis spanned the current experimental probability distribution function, which analyzed different rate values than those explored in <cit.>. They concluded that the reaction rate uncertainty was less relevant than overshooting. In this study, we find that the combined effects from overshooting and the reaction rate probability distribution function yield remarkable differences in the structure of the CO WDs, and pulsation differences. Despite these differences, we still find pulsation signatures for σ_i.
We conclude this section by discussing the physical meaning of our results. Overall, both g_2 and g_6 signatures generally state that the periods decrease with increasing σ_i. Put another way, increasing the amount of ^16O in the WDs shortens the periods of these signature modes. This trend was also seen in <cit.>, namely, as the amount of ^22Ne was increased in the WDs, the periods, for all g-modes analyzed, were shorter. The reasoning of the result came from analyzing the components of the frequency equation. One of the largest drivers of the period differences was an increase in pressure scale height with increasing ^22Ne abundance. If one likens pressure scale height to tension in a string, increasing the tension in a string will shorten its period. WDs are not strings, but the line of reasoning is analogous.
One might wonder why not all g-modes display this trend. Why is it only g_2 and g_6? In <cit.>, the presence/absence of ^22Ne was throughout ∼99% of the WD's composition structure. Thus, a uniform increase (decrease) in ^22Ne impacts all regions of the WD equally, which is likely the reason for the global offsetting of periods for all g-modes. In comparison, increasing and decreasing the reaction rate imposes a coupled effect on both ^12C and ^16O, which is not uniform for all regions in a WD's structure. The R1 and R2 regions are most affected by the reaction rate, with some impact on the inner part of the R3 region. Our above analysis found that the R1 and R2 regions gave larger contributions to the periods of the g_2 and g_6 signature modes more so than other g-modes. This is the most probable reason why only certain modes are capable of distinguishing the reaction rate, within the conditions of the present analysis.
§ SUMMARY
We conducted a search for signatures of the current
experimental ^12C(α, γ)^16O reaction rate probability distribution function in the pulsation periods of CO WD models with the inclusion of overshooting. We found two signature adiabatic g-modes whose period differences trend with the reaction rate probability distribution function σ_i, regardless of whether or not overshoot is included. We find the g_2 period difference signature is inversely proportional to σ_i. Without overshoot, the g_2 relative period differences span ± 0.9%. With overshoot, the g_2 relative period differences range from -1.33% to 0.47%. The average magnitudes of the relative period differences for g_2 were 0.46% and 0.44% respectively. The g_6 period differences were larger in magnitude, spanning from -3.44% to 1.78% for NOV and -2.02% to 1.58% for OV. The average magnitudes of the g_6 period differences were 1.21% and 0.95% respectively. The average magnitudes of the g_2 and g_6 period differences were slightly smaller in the OV set than in the NOV set.
We found that the R2 weight contribution to these g-modes was enhanced with overshoot inclusion. The R2 region remains the best identifying region for tracing the reaction rate probability distribution function. This is because even with overshoot inclusion, it is predominantly constructed by radiative burning during CHeB.
Regardless of whether or not overshooting is considered, we find:
* two signature g-modes, g_2 and g_6 probe σ_i
* g_2 is inversely proportional to σ_i and g_6 is a trapped mode
* the g_2 and g_6 periods are generally shorter for positive σ_i and longer for negative σ_i
* both signatures have period deviations within the detectable regime
These findings suggest that an astrophysical constraint on the reaction rate probability distribution function remains, in principle,
extractable from the period spectrum of observed variable WDs.
§ ACKNOWLEDGEMENTS
We thank James Deboer for sharing the ^12C(α,γ)^16O probability
distribution function, Josiah Schwab for sharing wd_builder,
and Pablo Marchant for sharing mkipp.
We acknowledge using ChatGPT <cit.> to polish the language of one paragraph <cit.>.
This research is supported by NASA under the Astrophysics Theory Program grant NNH21ZDA001N-ATP, and in part by the National Science Foundation under Grant No. NSF PHY-1748958.
This research made extensive use of the SAO/NASA Astrophysics Data System (ADS).
<cit.>,
20190830 <cit.>,
wd_builder <https://github.com/jschwab/wd_builder>,
<cit.>,
mkipp <https://github.com/orlox/mkipp>,
<cit.>,
<cit.>, and
<cit.>.
§ MICROPHYSICS IN MESA
The MESA EOS is a blend of the OPAL <cit.>, SCVH
<cit.>, FreeEOS <cit.>, HELM <cit.>,
PC <cit.>, and Skye <cit.> EOSes.
Radiative opacities are primarily from OPAL <cit.>, with low-temperature data from <cit.>
and the high-temperature, Compton-scattering dominated regime by
<cit.>. Electron conduction opacities are from
<cit.> and <cit.>.
Nuclear reaction rates are from JINA REACLIB <cit.>, NACRE <cit.> and
additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>.
Thermal neutrino loss rates are from <cit.>.
§ MODEL OPTIMIZATION AND RESOLUTION
§.§ Reduced Chemical Network
Our evolutionary models are computationally expensive. This paper is concerned with overshooting and the ^12C(α, γ)^16O reaction rate probability distribution function, which primarily dictate the evolutionary processes and consequences of the CHeB phase. The isotopes most impacted during CHeB are ^4He, ^12C, and ^16O, followed by two further isotopes that are the next most affected. We thus optimize the efficiency of our models by reducing the number of isotopes in the chemical network from 30 to 23. The eliminated isotopes are ^21Ne, ^21,22,23Na, ^23,24Mg, and ^56Fe. A comparison of the resulting inner mass fraction profiles for the 5 most abundant isotopes in each chemical network is shown in Figure <ref>. This figure shows the profiles at the completion of CHeB. Both network models used the same temporal and spatial resolution during CHeB. The run-time was reduced from a few days to a few hours on 12 cores. All resolution studies were conducted with σ=0.0 without overshoot (NOV).
Reducing the network impacted [22] most, with an offset of ∼ 22% more [22] in the 23 isotope network. We note that C22 used a 30 isotope network, and our overall signature results persist through variations in heavier isotopes.
§.§ Temporal Resolution
Several timestep limiters in MESA help optimize convergence studies. In this paper, we want to limit the timestep to achieve the temporal resolution that yields a smooth evolution of the central , , and abundances during CHeB. We first utilize the delta_XC_cntr_limit limiter. This limits the amount the central abundance can change in a given timestep. To help optimize computational run-time, we begin limiting the change in central during CHeB when the central helium abundance X(_c)<0.6. This is done by adding the following lines of code in the run_star_extras.f90 file:
This temporal resolution was used for the 30 and 23 isotope network models. We refer to it as resolution A. The remaining temporal resolution studies were performed using the 23 isotope chemical network.
The next iteration of increased temporal resolution modified the run_star_extras.f90 file to include the following:
This resolution is employed slightly earlier during CHeB, when X(_c)<0.5. In addition to the limiter used in resolution A, we added limits on the change in central temperature and density. This is resolution B.
Our third resolution iteration used the following limiter controls in the run_star_extras.f90 file:
This is resolution C. We have set the limiters at the start of CHeB, and have decreased the limiter values from those in resolution B.
A comparison of resolutions A, B, and C is shown in Figure <ref>. In each column, the solid light curves represent resolution A, the dotted curves B, and the dark solid curves C.
The left figure shows the evolution of central abundances of , , and during CHeB, starting when X(_c)≲0.6 until the completion of CHeB. The central abundances for resolutions A and B are nearly identical. Resolution C varies slightly, with the final central abundance reaching a slightly larger amount than resolutions A and B. Further, all three resolutions show a smooth evolution of these central abundances throughout CHeB.
The middle plot in Figure <ref> shows the mass fraction profiles at the completion of CHeB. We show the 5 most abundant isotope profiles for each resolution. The and profiles for A are noticeably different than the profiles for B and C, especially after the O→C transition. This is more apparent in the right plot of Figure <ref>, which zooms in on the and profiles of the three resolutions. Resolution B follows A in the core, but then more closely aligns with C after the O→C transition. Since resolutions B and C agree well, with only a slight difference in the central and abundance, we set resolution C as the standard temporal resolution for our 13 models.
|
http://arxiv.org/abs/2307.04520v1 | 20230710124155 | Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor | [
"San Jiang",
"Yichen Ma",
"Qingquan Li",
"Wanshou Jiang",
"Bingxuan Guo",
"Lelin Li",
"Lizhe Wang"
] | cs.CV | [
"cs.CV"
] |
Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor
San Jiang,
Yichen Ma,
Qingquan Li,
Wanshou Jiang,
Bingxuan Guo,
Lelin Li,
and Lizhe Wang
S. Jiang, Y. Ma, and L. Wang are with the School of Computer Science, China University of Geosciences, Wuhan 430074, China; S. Jiang is also with the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen 518060, China, and with the Hubei Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences, Wuhan 430078, China. E-mail: [email protected], [email protected], [email protected]. (Corresponding author: Lizhe Wang)
Q. Li is with the College of Civil and Transportation Engineering, Shenzhen University, Shenzhen 518060, China, and also with the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen 518060, China. E-mail: [email protected].
W. Jiang and B. Guo are with the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430072, China. E-mail: [email protected], [email protected].
L. Li is with the Provincial Key Laboratory of Geo-information Engineering in Surveying, Mapping and Remote Sensing, Hunan University of Science
and Technology, Xiangtan 411201, China. E-mail: [email protected].
August 12, 2023
SfM (Structure from Motion) has been extensively used for UAV (Unmanned Aerial Vehicle) image orientation. Its efficiency is directly influenced by feature matching. Although image retrieval has been widely used for match pair selection, high computational costs are incurred due to the large number of local features and the large size of the used codebook. Thus, this paper proposes an efficient match pair retrieval method and implements an integrated workflow for parallel SfM reconstruction. First, an individual codebook is trained online by considering the redundancy of UAV images and local features, which avoids the ambiguity of training codebooks from other datasets. Second, the local features of each image are aggregated into a single high-dimension global descriptor through VLAD (Vector of Locally Aggregated Descriptors) aggregation using the trained codebook, which remarkably reduces the number of features and the burden of nearest neighbor searching in image indexing. Third, the global descriptors are indexed via an HNSW (Hierarchical Navigable Small World) based graph structure for nearest neighbor searching. Match pairs are then retrieved by using an adaptive threshold selection strategy and utilized to create a view graph for divide-and-conquer based parallel SfM reconstruction. Finally, the performance of the proposed solution has been verified using three large-scale UAV datasets. The test results demonstrate that the proposed solution accelerates match pair retrieval with a speedup ratio ranging from 36 to 108 and improves the efficiency of SfM reconstruction with competitive accuracy in both relative and absolute orientation.
structure from motion, 3D reconstruction, match pair selection, unmanned aerial vehicle, feature matching
§ INTRODUCTION
UAV (Unmanned aerial vehicle) images have become one of the primary data sources for surveying and mapping in photogrammetry and remote sensing (RS). Compared with satellite and aerial-based RS platforms, UAVs have the characteristics of high flexibility, high timeliness, and high resolution <cit.>. UAV images have been widely exploited in various applications, e.g., urban 3D modeling <cit.>, transmission line inspection <cit.>, and precision agriculture management <cit.>. With the increasing endurance of UAV platforms and the explosive usage of multi-camera instruments, efficient image orientation for large-scale UAV images has become one of the most critical modules for photogrammetric systems <cit.>.
SfM (Structure from Motion) has become a well-known technology for recovering camera poses and 3D points without the requirement of good initial values <cit.>. SfM has been extensively adopted in 3D reconstruction <cit.> for both ordered and unordered UAV images. In the workflow of SfM, a view graph is a basic structure to guide feature matching and parameter solving, which is defined as an undirected weighted graph whose vertices and edges indicate images and their overlap relationships <cit.>. Retrieving match pairs is a prerequisite for view graph construction. The purpose of match pair retrieval is to find overlapping image pairs to guide subsequent feature matching, which increases the reliability and efficiency of SfM reconstruction. Thus, retrieving appropriate match pairs efficiently and accurately becomes one of the core issues in SfM for large-scale UAV images.
In the literature, existing methods for retrieving match pairs can be divided into two categories, i.e., prior knowledge-based and visual similarity-based methods. The former depends on prior information, such as the sequential constraint in data acquisitions <cit.> or depends on prior data from onboard POS (Positioning and Orientation System) sensors <cit.> to calculate image ground footprints. Although these methods are very efficient, their usage is limited to the special configurations of data acquisition or depends on the precision of the prior data from used RS platforms. Without relying on other auxiliary data, visual similarity-based methods merely use images to calculate similarity scores between two images and determine overlapped match pairs by selecting images with the highest similarity scores. The most commonly used solution is CBIR (Content-Based Image Retrieval). The core idea of CBIR is to encode detected local features, e.g., SIFT (Scale Invariant Feature Transform) <cit.>, into high-dimension vectors, and the problem of retrieving match pairs is then cast as calculating the similarity score between two of these high-dimension vectors <cit.>. In the fields of photogrammetry and computer vision, vocabulary tree <cit.> based image retrieval has become the most classic method that converts local features into high-dimension BoW (Bag-of-Words) vectors <cit.>.
In vocabulary tree-based image retrieval, the similarity calculation uses an inverted index that establishes the relationship between visual words and corresponding local features <cit.>. However, building the inverted index is time-consuming for high-resolution and large-size UAV images. On the one hand, high-resolution UAV images lead to tens of thousands of local features from an individual image, which causes high computational costs in searching the nearest visual word via ANN (Approximate Nearest Neighbor) searching; on the other hand, large-volume UAV images require an extremely large codebook to increase the discriminative ability of aggregated BoW vectors, which leads to millions of vector dimensions and further increases computational costs in ANN searching. In addition, the codebook is usually created offline from public datasets due to the high time costs of generating a large codebook. Thus, this study proposes an efficient and accurate solution for match pair retrieval. The core idea is to adopt a global descriptor for image representation and explore graph indexing for efficient ANN searching of high-dimension vectors. Our main contributions are summarized as follows: (1) An individual codebook is trained online using random selection and scale restriction strategies to reduce image and feature redundancies. (2) Local features of each image are aggregated into a high-dimension global descriptor through VLAD (Vector of Locally Aggregated Descriptors) aggregation, which greatly reduces the number of features and the burden of nearest neighbor searching in image indexing. (3) VLAD descriptors are indexed into an HNSW (Hierarchical Navigable Small World) based graph structure for ANN (Approximate Nearest Neighbor) searching, and match pairs are retrieved using an adaptive threshold selection strategy, which is used to create a view graph for divide-and-conquer based parallel SfM reconstruction. (4) The performance of the proposed solution is verified by using large-scale UAV images and compared with other well-known software packages.
The structure of this study is organized as follows. Section <ref> gives a literature review of match pair retrieval and nearest neighbor searching. Section <ref> presents detailed procedures of the proposed match pair retrieval algorithm and the workflow of the parallel SfM solution. Section <ref> conducts a comprehensive evaluation and comparison using UAV datasets. Finally, Section <ref> gives the conclusion of this study and improvements for future research.
§ RELATED WORK
This study focuses on match pair retrieval to improve the efficiency of SfM reconstruction. Thus, this section reviews match pair selection and nearest neighbor searching.
§.§ Prior knowledge-based methods
For photogrammetric data acquisition, there are usually two categories of prior knowledge, i.e., the configuration for data acquisition and the auxiliary data from onboard sensors. For the former, image match pairs are usually obtained according to the timestamp or data acquisition sequence <cit.>. According to this principle, Cheng et al. <cit.> proposed a strategy to connect sequential images for image localization and stereo-pair dense matching, which uses the optical images sequentially acquired by UAV to achieve the real-time 3D reconstruction of disaster areas. For the latter, image match pairs are usually obtained according to camera mounting angles or onboard POS (position and orientation system) data. Using the projection center of images, Rupnik et al. <cit.> searched the neighboring images close to the target image within the specified distance threshold. After acquiring the orientation data provided by the POS data of onboard navigation systems, image footprints on a specified elevation plane can be calculated, and image match pairs can be obtained through the pairwise intersection test between the image footprints <cit.>. In the work of <cit.>, ground coverages of images are calculated by using POS data, and image match pairs are determined by judging the intersection of ground coverages. Although these methods have high efficiency, their accuracy depends on the used prior knowledge.
§.§ Visual similarity-based methods
Compared with prior knowledge-based methods, these methods select match pairs using the images' content instead of prior knowledge. These methods can be grouped into two categories: the first is based on the number of matched correspondences, while the second uses the similarity score computed from image descriptors. For the former, two images are labeled as a valid match pair when the number of matches surpasses a threshold, such as the multi-scale strategy <cit.> and the preemptive matching strategy <cit.>. For the latter, images are quantified as descriptors, and the similarity score between two images is calculated as the distance between two descriptors. One of the most classic methods is vocabulary tree-based image retrieval <cit.>. Using a trained vocabulary tree, this method quantizes extracted local features into word frequency vectors, i.e., BoW (Bags-of-Words) vectors. The distance between the vectors represents the similarity score between the images <cit.>. These methods can quickly obtain correct match pairs on small datasets but become inefficient for large-scale datasets. In addition to the above-mentioned methods, neural network-based methods have been proposed recently. Yan et al. <cit.> proposed a match pair selection method based on the GCN (Graph Convolutional Network) and used it to judge whether overlapping areas exist between images. This method performed remarkably well on challenging datasets from ambiguous and duplicated scenes. However, the efficiency is very low for high-resolution UAV images.
§.§ Nearest neighbor searching
NN searching aims to find the vectors closest to the query vector from a large set of database vectors. In the context of match pair selection, the NN searching in vocabulary tree-based image retrieval is solved as an ANN searching problem, which determines the efficiency of image retrieval. In the literature, existing ANN searching methods can be divided into three categories, i.e., tree-based methods, hashing-based methods, and graph-based methods. Tree-based methods use a tree structure to partition the searching space, and KD-Tree is one of the most well-known data structures <cit.>, which has been used extensively for image retrieval algorithms <cit.> and software packages, e.g., the COLMAP <cit.> and AliceVision <cit.>, because of the relative low dimension of used feature descriptors, such as the 128-dimension SIFT (Scale Invariant Feature Transform) descriptor. However, the efficiency of tree-based methods decreases dramatically for high-dimension vectors, which is not better than brute-force searching. To increase ANN searching efficiency, hashing-based methods convert continuous real-value vectors to discrete binary codes using hashing functions. In this category, LSH (Locality-Sensitive Hashing) attempts to hash similar vectors into the same cell with high probabilities <cit.>. Consequently, ANN searching can be executed in the cell that the query vector also falls in. Compared with the tree-based method, the hash operation reduces high-dimensional input vectors to low-dimensional terms by using a set of hash functions whose number is much smaller than the dimension of input vectors. This is useful to avoid the curse of dimensionality in tree-based methods. Due to their high efficiency, LSH-based methods have been used for large-scale image retrievals, such as web community and remote sensing images <cit.>. These methods, however, have lower precision caused by the usage of binary hashing encoding as well as high memory consumption to store hashing functions. In contrast to splitting the searching space, graph-based methods create a graph data structure to organize database vectors and achieve efficient ANN searching based on graph indexing. NSW (Navigable Small World) <cit.> and HNSW (Hierarchical NSW) <cit.> are two typical graph-based methods. NSW adopts an approximation of the Delaunay graph, which has the same operation for vertex insertion and query. NSW can achieve efficient and accurate searching based on long-distance edges that are created at the beginning, which forms a small navigable world and reduces the number of hops. HNSW is an improved version of NSW, which builds a multi-layer structure to speed up ANN searching. In the work of <cit.>, HNSW has been used to replace the KD-Tree in image retrieval, and good acceleration has been achieved in match pair selection. However, unacceptable time consumption is still required for processing large-scale UAV images due to a large number of local features.
§ METHODOLOGY
This study proposes an efficient and accurate match pair retrieval method for large-scale UAV images and implements a parallel SfM solution guided by the view graph constructed from retrieved match pairs. The core idea is to use global descriptors for image representation and explores a graph indexing structure for the ANN searching of high-dimension vectors. The workflow of the complete SfM reconstruction is shown in Figure <ref>, in which the inputs are UAV images without other auxiliary data. First, a codebook is trained online by selecting a subset of UAV images and scale-restricted features. Second, with the aid of the codebook, each image's local features are aggregated into a single high-dimension vector according to VLAD. Third, VLAD vectors are then indexed into an HNSW-based graph structure to achieve highly efficient ANN searching, and match pairs are retrieved based on the HNSW index and refined by using an adaptive selection strategy. Finally, after feature matching guided by the retrieved match pairs, a weighted view graph is constructed, which is used for the scene partition and parallel SfM reconstruction of large-scale UAV images.
§.§ Vocabulary tree-based image retrieval
Vocabulary tree-base image retrieval mimics the text retrieval that encodes a document as a feature vector by using trained words and casts document searching as the distance calculation between feature vectors <cit.>. The most important techniques are the inverted file for the word-image indexing and the TF-IDF (Term Frequency and Inverse Document Frequency) for the weighting of similarity scores <cit.>.
The workflow of vocabulary tree-based image retrieval consists of four major steps. First, local features with descriptors, e.g., SIFT, are extracted from training images; second, a vocabulary tree is hierarchically built from the extracted descriptors by using a clustering algorithm, e.g., the K-means, whose leaf nodes indicate the generated visual words; third, all images are indexed by searching the nearest visual word for all extracted feature descriptors, and an inverted file is simultaneously built for each visual word, which builds the indexing relationship between visual words and image features; finally, the same indexing operation is executed for an input query image, and the similarity score between the query and database images can be calculated by using their corresponding BoW vectors. Suppose there is a vocabulary with V words, and each image can be represented by a BoW vector v_d=(t_1,...,t_i,...,t_V). The component t_i is calculated according to Equation <ref>
t_i = (n_id/n_d) × log(N/N_i)
where n_id and n_d indicate the occurrence number of the word i in the image d and the total number of words in image d, respectively; N_i is the number of images that contain word i, and N is the total number of images in the database. The component t_i includes two parts, i.e., the term frequency (TF) n_id/n_d and the inverse document frequency (IDF) log(N/N_i), which indicate the occurrence frequency of the word i in the image d and the importance of word i among database images. After generating the BoW vectors, the similarity score of any two images can be quantified as the dot product of the corresponding BoW vectors.
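As an illustrative sketch only (the paper's implementation is in C++, and the function and variable names below are ours), the TF-IDF weighting described above can be written in a few lines of Python/NumPy, assuming the visual-word assignments of the local features are already known:

import numpy as np

def bow_tfidf_vector(word_ids, num_words, images_per_word, num_images):
    """Build a TF-IDF weighted BoW vector for one image.

    word_ids        : visual-word indices assigned to the image's local features
    num_words       : vocabulary size V
    images_per_word : array N_i, number of database images containing each word
    num_images      : total number of database images N
    """
    counts = np.bincount(np.asarray(word_ids), minlength=num_words).astype(float)   # n_id
    tf = counts / max(counts.sum(), 1.0)                                             # n_id / n_d
    idf = np.log(num_images / np.maximum(np.asarray(images_per_word, float), 1.0))   # log(N / N_i)
    v = tf * idf
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# The similarity score of two images is then the dot product of their vectors:
# score = float(np.dot(v_query, v_database))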
With the increasing of involved database images, vocabulary tree-based image retrieval efficiency decreases dramatically. The main reason is building the inverted index. On the one hand, the high resolution of UAV images leads to a large number of extracted features that cause high computational costs in the ANN searching to build the inverted file; on the other hand, with the increasing of database images, a larger codebook with more visual words must be used to increase the discriminative power of BoW vectors, which further increases the burden in the ANN searching and subsequent similarity calculation. Therefore, considering these issues, this study proposes an efficient image retrieval solution that combines the VLAD descriptor and the HNSW indexing. The former aggregates local feature descriptors into a high-dimensional global vector using a very small codebook, which avoids the high computational costs in image indexing; the latter is utilized to accelerate the ANN searching for high-dimensional VLAD vectors. This study would integrate the proposed solution with a parallel SfM workflow for large-scale image orientation. The details are described in the following sections.
§.§ Codebook generation considering image and feature redundancy
Local features are first detected from UAV images as training data. In recent years, UAV images have been capable of recording building facades and observing ground targets from multi-view directions. Due to the large differences in viewing directions and the obvious changes in illuminations and scales, feature matching becomes non-trivial for oblique UAV images <cit.>. Considering these issues, the SIFT algorithm is used to extract local features. In this study, to balance the accuracy and efficiency of subsequent match pair selection, 8,129 local features with the highest scales are extracted for each image, and each feature descriptor is represented as a vector with dimension = 128.
By using the extracted local features, a codebook can be generated for the aggregation of local features into the VLAD descriptor. In general, there are two ways to generate a codebook, i.e., online generation for each dataset and offline generation for all datasets. While the second way accelerates online processing without training an individual codebook, it cannot represent the characteristics of specified datasets and provides inferior performance on image retrieval. Therefore, the optimal way is to generate a codebook for an individual UAV dataset <cit.>. However, it would be very time-consuming to generate a codebook because the large data volume and high spatial resolution of UAV images produce a very large number of descriptors. For UAV images, there are two kinds of redundancy. The first is the image redundancy due to the high overlap degree to ensure the success of subsequent image orientation; the second is the feature redundancy because of the high spatial resolution of UAV images. These two kinds of redundancy could be exploited to reduce the descriptor number in codebook training. On the one hand, the number of visual words for VLAD aggregation is far smaller than that for BoW indexing <cit.>. Only a very coarse quantization of the descriptor space is required for VLAD aggregation. On the other hand, the characteristics of one image can be represented by a subset of features with large scales. Thus, this study proposes a random sampling strategy to select a subset p of training images and a scale restriction strategy to select a subset h of descriptors with large scales. Based on the work <cit.>, the parameters p and h are set to 20% and 1500.
After selecting the training descriptors, the codebook with k clusters is generated by using the K-means clustering algorithm <cit.>: 1) pick k cluster centers randomly; 2) assign each descriptor to its nearest cluster center; 3) calculate the mean vector of each cluster and use it as the new cluster centers; 4) repeat steps 2) to 3) after a certain number of iteration times or reach the convergence condition of the algorithm. Based on the clustering algorithm, the k cluster centers indicate the codebook C={c_1,c_2,c_3,...,c_k}. The number of cluster centers k is closely related to the performance of the match pair retrieval algorithm. On the one hand, the accuracy of match pair retrieval will be reduced when k is too small; on the other hand, the generation of the codebook will consume more memory, and the efficiency of subsequent feature aggregation and image retrieval will be reduced when k is too large. Thus, a proper k is significant for match pair retrieval.
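The sampling and clustering procedure can be sketched in Python/NumPy as follows. This is not the authors' C++ code; the sampling ratio p = 20% and the per-image limit h = 1500 follow the text, while the iteration count, random seed, and all names are illustrative assumptions:

import numpy as np

def train_codebook(descs_per_image, scales_per_image, k, p=0.2, h=1500,
                   iters=50, seed=0):
    """Train a small VLAD codebook with Lloyd's K-means on sampled SIFT descriptors."""
    rng = np.random.default_rng(seed)
    n_img = len(descs_per_image)
    # randomly sample a ratio p of the images
    chosen = rng.choice(n_img, size=max(1, int(p * n_img)), replace=False)
    samples = []
    for i in chosen:
        # keep only the h features with the largest scales in each sampled image
        keep = np.argsort(scales_per_image[i])[::-1][:h]
        samples.append(np.asarray(descs_per_image[i])[keep])
    X = np.vstack(samples).astype(np.float32)
    # step 1: pick k cluster centers randomly
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # step 2: assign each descriptor to its nearest cluster center
        d2 = (X**2).sum(1)[:, None] + (centers**2).sum(1)[None, :] - 2.0 * X @ centers.T
        labels = d2.argmin(axis=1)
        # step 3: move each center to the mean of its assigned descriptors
        for c in range(k):
            members = X[labels == c]
            if len(members) > 0:
                centers[c] = members.mean(axis=0)
        # step 4: repeat for a fixed number of iterations (convergence check omitted)
    return centers  # codebook C = {c_1, ..., c_k}

In practice, a mini-batch or library implementation of K-means would likely be preferred for very large sampled descriptor sets; the loop above only mirrors the four steps described in the text.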
§.§ Adaptive match pair retrieval via global descriptor and graph indexing
§.§.§ Global descriptor from the aggregation of local features
Some solutions are designed for aggregating local features to global vectors, e.g., the BoW that counts the term frequency of words. However, the number of words in the trained codebook should increase simultaneously with the number of involved images. It would cause high time costs for large-scale image indexing. Instead of the term frequency counting, VLAD accumulates residuals between local feature descriptors and their corresponding cluster centers and achieves high discriminative power using a very small-size codebook. Based on the observation, this study uses VLAD to aggregate local features into global descriptors <cit.>.
For N extracted local features of an image, the VLAD descriptor is obtained by iterating feature descriptors assigned to the same cluster center and calculating the sum of the residuals between these feature descriptors and the cluster center. The final VLAD descriptor is a concatenation of residual vectors generated from all cluster centers. Supposing that there are k cluster centers in the trained codebook C, the VLAD descriptor v consists of k vectors with the same dimension d=128 as the used SIFT descriptor. Therefore, the calculation of an element v_k,j in the VLAD descriptor v is presented by Equation <ref>
v_k,j = ∑_i=1^N a_k(d_i) (d_i(j) - c_k(j))
where j is the dimension index of feature descriptors, i.e., j=1,2,...,d; a_k(d_i) is an indicator function: when the feature descriptor d_i belongs to the visual word c_k, a_k(d_i)=1; otherwise, a_k(d_i)=0. Based on this formulation, an image is represented as a k× d VLAD descriptor. Compared with the BoW vector, the VLAD descriptor uses residual vectors to encode the input image. To generate a feature vector of the same dimension, far fewer visual words are required in the trained codebook, i.e., the ratio is the same as the dimension d=128 of the used descriptors. Besides, component-wise and global L2-normalization is sequentially conducted for the generated VLAD descriptors. Noticeably, the VLAD aggregation can be executed in parallel because it is independent for each cluster center.
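A minimal NumPy sketch of this aggregation is given below. It is illustrative only; in particular, the per-cluster (intra) L2-normalization used here is our interpretation of the component-wise normalization step described in the text:

import numpy as np

def vlad_descriptor(descriptors, centers):
    """Aggregate N local descriptors (N x d) into a single k*d VLAD vector."""
    X = np.asarray(descriptors, dtype=np.float32)
    C = np.asarray(centers, dtype=np.float32)
    k, d = C.shape
    # hard-assign each descriptor to its nearest cluster center
    d2 = (X**2).sum(1)[:, None] + (C**2).sum(1)[None, :] - 2.0 * X @ C.T
    assign = d2.argmin(axis=1)
    v = np.zeros((k, d), dtype=np.float64)
    for c in range(k):
        members = X[assign == c]
        if len(members) > 0:
            v[c] = (members - C[c]).sum(axis=0)       # sum of residuals for word c
    # per-cluster (intra) normalization, then global L2 normalization
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    v = (v / norms).reshape(-1)
    n = np.linalg.norm(v)
    return (v / n if n > 0 else v).astype(np.float32)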
§.§.§ Match pair retrieval based on Graph-indexed global descriptors
Match pairs can be selected by nearest neighbor searching between VLAD descriptors. Recently, graph-based solutions have attracted considerable attention because of their high precision and promising efficiency when dealing with high-dimension descriptors. HNSW <cit.> is one of the well-known graph-based search algorithms, which is implemented based on the NSW (Navigable Small World) search method <cit.>. HNSW uses a hierarchical structure to build a vector index graph to increase retrieval efficiency, mimicking a coarse-to-fine searching strategy. The bottom layer includes all vertices, and the number of vertices decreases gradually from the bottom to the top layers. In the retrieval stage, after the query vector enters, the HNSW index is searched from top to bottom, which restricts the searching of the next nearest neighbor to the child nodes in the next layer. The nearest neighbors in the bottom layer are the retrieval results. Thus, HNSW is used in this study for high-dimension VLAD vector indexing and match pair retrieval. The VLAD descriptors are first constructed into a graph structure G={V, E}, in which V and E respectively represent the vertex set composed of VLAD descriptors and the edge set composed of their connection relationships. To achieve efficient indexing and retrieval, the maximum number of connections for each vertex is restricted to M, termed the friend number. This parameter M influences the efficiency and precision of image retrieval.
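For the indexing step itself, the FAISS package adopted in this study's implementation exposes an HNSW index. The short Python sketch below only illustrates that API (the paper's code is C++); the efConstruction/efSearch values, the file name, and the descriptor dimension are assumptions, not the paper's settings:

import numpy as np
import faiss

d = 256 * 128                              # VLAD dimension: k visual words x 128-D SIFT
M = 32                                     # friend number: maximum connections per vertex

# hypothetical file holding one VLAD descriptor per image, shape (num_images, d)
vlad = np.load("vlad_descriptors.npy").astype(np.float32)

index = faiss.IndexHNSWFlat(d, M)          # L2 distance by default
index.hnsw.efConstruction = 200            # construction-time search depth (assumed value)
index.hnsw.efSearch = 300                  # query-time depth, at least the number of returned items
index.add(vlad)

# retrieve the nearest neighbors of every image (rank 0 is the image itself)
distances, neighbors = index.search(vlad, 300)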
In match pair retrieval, the number of returned items should be specified carefully. The optimal value should adapt to the data acquisition configuration and is mainly affected by the image overlap degree. It varies for each data acquisition and each UAV image. However, it is usually set as a fixed number or ratio in the classical image retrieval pipeline. In this study, an adaptive selection strategy has been adopted to select the number of retrieved images <cit.>. The core idea originates from the fact that images with larger overlap areas have higher similarity scores, and the similarity scores decrease dramatically with the decrease of overlap areas. However, image pairs without overlap areas have very small similarity scores, and at the same time, no obvious changes are observed in their similarity scores, as illustrated in Figure <ref>. Thus, the distribution of similarity scores is fitted well by using a power function with coefficients a and b, as presented by Equation <ref>
y = a × x^b
where x and y indicate the image ids and similarity scores, respectively. Using the mean μ and standard deviation δ of similarity scores between one query and database images, a horizontal separation line y=μ+kδ can be defined, and database images with similarity scores above the separation line are labeled as the retrieval results. Noticeably, in the HNSW-based image retrieval, the Euclidean distance instead of the similarity score has been returned. In this study, inverse linear normalization is used to calculate similarity scores. Suppose that m items are retrieved with distance D={d_1,d_2,d_3,...,d_m}, the similarity score is calculated based on Equation <ref>
s_i = (d_max - d_i) / (d_max - d_min)
where d_min and d_max indicate the minimal and maximal values in D, respectively. Thus, this equation converts the Euclidean distance to the similarity score that ranges from 0 to 1. Besides, the separation line y=μ+kδ is mainly influenced by the mean μ and standard deviation δ. With the increase of used samples to fit the power function, the separation line y would go down and retain more retrieved results. Thus, according to practical experiences, the number of used samples is set as 300 in this study.
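A compact sketch of this adaptive selection for one query is given below (Python, illustrative only). It combines the inverse linear normalization above with the separation line y = μ + kδ; the value of k is not fixed in the text, so it is left as an argument here:

import numpy as np

def adaptive_select(distances, neighbor_ids, k_sigma=1.0):
    """Normalize retrieval distances to similarity scores and keep neighbors above mu + k*delta."""
    d = np.asarray(distances, dtype=float)
    d_min, d_max = d.min(), d.max()
    scores = (d_max - d) / max(d_max - d_min, 1e-12)    # inverse linear normalization, in [0, 1]
    threshold = scores.mean() + k_sigma * scores.std()  # horizontal separation line y = mu + k*delta
    keep = scores > threshold
    return [nid for nid, flag in zip(neighbor_ids, keep) if flag], scores

# Example use with the HNSW results for query image q (rank 0 is the query itself and is skipped):
# pairs, _ = adaptive_select(distances[q][1:], neighbors[q][1:])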
§.§ View graph construction from retrieved match pairs
False match pairs inevitably exist because of repetitive image patterns and non-optimal parameters in image retrieval. In this study, local feature matching and geometric verification are conducted to filter false matches. Guided by initial match pairs, local feature matching is performed by finding the nearest neighbors from two sets of features based on the Euclidean distance between feature descriptors, in which the cross-checking and ratio test have also been utilized. To further refine the initial matches, the epipolar geometry based on the Fundamental matrix is utilized to remove false matches, which can be robustly estimated in the RANSAC (Random Sampling Consensus) framework <cit.>. Finally, the match pairs with the number of refined matches greater than 15 are retained.
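Geometric verification of an initial match pair can be sketched with OpenCV's RANSAC-based fundamental matrix estimation. This is a generic illustration rather than the paper's C++ code; the 15-match retention rule follows the text, while the reprojection threshold and confidence values are assumptions:

import numpy as np
import cv2

def verify_pair(pts1, pts2, max_dist=1.0, confidence=0.99, min_matches=15):
    """Keep a match pair only if enough correspondences survive the epipolar test."""
    pts1 = np.asarray(pts1, dtype=np.float32)
    pts2 = np.asarray(pts2, dtype=np.float32)
    if len(pts1) < 8:                        # the fundamental matrix needs at least 8 points
        return False, None
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, max_dist, confidence)
    if F is None or mask is None:
        return False, None
    inliers = int(mask.ravel().sum())        # refined matches consistent with the epipolar geometry
    return inliers > min_matches, mask.ravel().astype(bool)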
A view graph can be created using the retained match pairs and their feature matches. In this study, the view graph is represented as an undirected weighted graph G={V, E}, in which V and E indicate the vertex set and edge set, respectively <cit.>. Suppose that I={i_i} and P={p_ij} are respectively n images and m match pairs. The graph G is constructed as follows: a vertex v_i is added for each image i_i, and all vertices form the vertex set V={v_i}; an edge e_ij connecting vertices v_i and v_j is added for each match pair p_ij, and all edges form the edge set E={e_ij}. To quantify the importance of match pairs, an edge weight w_ij is assigned to the edge e_ij. In the context of SfM-based image orientation, the number of feature matches and their distribution over image planes directly influence the overall performance. Thus, w_ij is calculated by Equation <ref>
w_ij=R_ew× w_inlier+(1-R_ew)× w_overlap
where R_ew is the weight ratio between w_inlier and w_overlap, which is set as 0.5 similar to the work in <cit.>. w_inlier is the weight item related to the number of feature matches; w_overlap is the weight item related to the distribution of feature matches. These two items are calculated respectively according to Equations <ref> and <ref>
w_inlier = log(N_inlier)/log(N_max_inlier)
w_overlap = (CH_i + CH_j) / (A_i + A_j)
where N_inlier and N_max_inlier indicate the number of matched correspondences of the match pair and the maximum number of matched correspondences among all match pairs; CH_i and CH_j represent the convex hull areas of feature matches over two images; A_i and A_j represent the areas of two image planes. In our study, the Graham-Andrew algorithm <cit.> is used to detect convex hulls of feature matches.
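The edge weight can be computed directly from the verified matches. The sketch below is illustrative; it uses SciPy's Qhull-based convex hull for brevity, whereas the paper uses the Graham-Andrew algorithm, and R_ew = 0.5 follows the text:

import numpy as np
from scipy.spatial import ConvexHull

def edge_weight(matches_i, matches_j, area_i, area_j, n_inlier, n_max_inlier, r_ew=0.5):
    """Weight of the view-graph edge between images i and j.

    matches_i, matches_j : (N, 2) arrays of matched keypoint coordinates on each image
    area_i, area_j       : image-plane areas (width x height) in pixels
    """
    w_inlier = np.log(n_inlier) / np.log(n_max_inlier)
    # for 2-D point sets, ConvexHull.volume is the enclosed area (.area would be the perimeter)
    ch_i = ConvexHull(np.asarray(matches_i)).volume
    ch_j = ConvexHull(np.asarray(matches_j)).volume
    w_overlap = (ch_i + ch_j) / (area_i + area_j)
    return r_ew * w_inlier + (1.0 - r_ew) * w_overlap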
§.§ Parallel SfM reconstruction guided by view graph
In this study, an incremental SfM is used to estimate camera poses and scene structures. Incremental SfM, however, suffers the problem of low efficiency due to the sequential registering of images and iterative local and global bundle adjustment. For large-scale scenes, this issue becomes very obvious and limits the applications of SfM in recent photogrammetric systems. To overcome the problem, this study adopts the divide-and-conquer strategy to split the large-size reconstruction into small-size sub-reconstructions. Thus, sub-reconstructions can be well addressed, and parallel techniques can also be utilized to improve efficiency. Figure <ref> illustrates the basic principle of the designed parallel SfM solution <cit.>, which includes four major steps described as follows:
* First, after creating the view graph G, the scene is divided into small-size clusters {G_i} with strong inner connections. The scene clustering is implemented through the NC (Normalized Cut) algorithm <cit.>, which removes the edges with smaller weights and ensures the good connection of vertices in each cluster.
* Second, an incremental SfM engine is then executed parallelly for each cluster G_i, which generates an individual model for each cluster. In this study, the well-known incremental SfM engine, COLMAP <cit.>, has been utilized to implement the parallel reconstruction of each cluster.
* Third, cluster merging is performed by iteratively merging two sub-models, which convert individual models to an entire model in the same global coordinate system. In this step, the merging order is critical as it affects the robustness and precision of cluster merging. In this study, the number of common 3D points between models is used to sort the merging order, which has been calculated efficiently through a corresponding graph established between two clusters <cit.>.
* Finally, a final global bundle adjustment is executed for the merged global model. Since the number of optimization parameters would be very large, a tie-point selection strategy is adopted to decrease the number of 3D points in BA optimization. As documented in <cit.>, tie-point selection is achieved based on four metrics, i.e., re-projection error, overlap degree, image coverage, and number limitation.
§.§ Algorithm implementation
This study implements the solution of match pair retrieval and parallel SfM reconstruction using the C++ programming language, as presented in Algorithm <ref>. In detail, for feature extraction, the SIFTGPU <cit.> library is used with default parameter setting; for the generation of the codebook, the Lloyd’s K-means cluster algorithm <cit.> has been used; in addition, we have implemented an algorithm for the aggregation of SIFT features into VLAD descriptors and adopted the HNSW algorithm in the FAISS package <cit.> for graph indexing; based on our previous work <cit.>, we have embedded the match pair retrieval and view graph construction method into the parallel SfM workflow, in which the software package ColMap <cit.> has been selected as the incremental SfM engine.
§ EXPERIMENTS AND RESULTS
In the experiment, three UAV datasets have been collected to evaluate the performance of the proposed solution. First, according to the efficiency and precision of match pair selection, we analyze the influence of key parameters, i.e., the number of cluster centers k for the codebook generation and the maximum number of neighboring vertices M in HNSW. Second, we conduct the match pair selection and SfM-based 3D reconstruction of the three UAV datasets using the selected parameter setting. Third, we compared the proposed SfM solution with four well-known software packages, i.e., two open-source software packages ColMap <cit.> and DboW2 <cit.> and two commercial software packages Agisoft Metashape and Pix4Dmapper, to evaluate the performance of match pair selection and SfM reconstruction. In the study, all experiments are executed on a Windows desktop computer with 64 GB memory, four Intel 2.40 GHz Xeon E5-2680 CPUs, and one 10 GB NVIDIA GeForce RTX 3080 graphics card.
§.§ Test sites and datasets
Three UAV datasets with different sizes are used for the performance evaluation. Figure <ref> shows the sample images in each dataset, and the detailed information is listed in Table <ref>. The description of each dataset is presented as follows:
* The first dataset consists of 3,743 images taken from a university campus covered by dense and low-rise buildings. The dataset is captured by a DJI Phantom 4 RTK UAV equipped with one DJI FC6310R camera. The images with 5,472 by 3,648 pixels are collected under the flight height of 80 m, and the GSD (Ground Sample Distance) is approximately 2.6 cm.
* The second dataset includes 4,030 images taken from a complex university building. It is captured using a DJI M300 RTK UAV equipped with one DJI Zenmuse P1 camera with an image dimension of 8,192 by 5,460 pixels. It is worth mentioning that this dataset has been collected based on optimized views photogrammetry <cit.>, which adjusts camera viewpoints and directions according to the geometry of ground objects. The GSD is approximately 1.2 cm. For absolute orientation, 26 GCPs (Ground Control Points) were collected using a total station, whose nominal accuracy is about 0.8 and 1.5 cm in the horizontal and vertical directions.
* The third dataset is recorded by a penta-view oblique photogrammetric instrument equipped with five SONY ILCE 7R cameras with 6,000 by 4,000 pixels. Low-rise buildings and dense vegetation mainly cover this test site. In addition, a river crosses the test site. Under the flight height of 87.1 m, a total of 21,654 images has been collected with a GSD of 1.21 cm.
§.§ The influence of parameters K and M
For the proposed match pair retrieval solution, two critical parameters directly influence the efficiency and precision of image indexing and retrieval, i.e., the visual word number k in the generation of the trained codebook and the friend number M in the graph-based indexing. The former determines the dimension of the VLAD vectors; the latter determines the maximum number of connections of each vertex to others in the HNSW graph. Thus, this section analyzes their influence on retrieval efficiency and precision.
For the evaluation, dataset 1 has been selected, and two metrics are used for performance evaluation: retrieval efficiency and precision. The retrieval efficiency is the total time costs consumed in match pair selection; the retrieval precision is calculated as the ratio between the number of correct match pairs and the number of all match pairs. In this test, the retrieval time includes time costs in VLAD-based feature aggregation, HNSW-based graph construction, and image retrieval. To avoid the influence of the adaptive selection, the retrieval number is fixed as 30, and match pairs with at least 15 true matches are defined as positive results.
For the analysis of the parameter k, the values of 32, 64, 128, 256, 512, and 1024 are tested. Figure <ref> presents the statistical results of efficiency and precision in the match pair selection, in which Figure <ref> and Figure <ref> respectively indicate the efficiency and precision. It is clearly shown that with the increase of k, the time costs increase exponentially, from 45.7 seconds to 175.5 seconds, with the value ranging from 32 to 1024, respectively. The main reason is that a larger k leads to more time costs in the nearest cluster center searching for VLAD feature aggregation and increases the dimension of generated VLAD descriptors, which further poses a burden in HNSW graph indexing and retrieval. On the contrary, we can observe that the retrieval precision increases linearly with the increase of the parameter k, which increases from 0.81 to 0.94 within the specified span. To balance efficiency and precision, the parameter k is set as 256 in the following tests.
For the analysis of the parameter M, the values of 6, 8, 10, 12, 16, 32, and 64 are used, and the statistical results are presented in Figure <ref>. We can see that: (1) the changing trend of retrieval efficiency in Figure <ref> can be divided into two parts. In the first part, the retrieval efficiency is almost constant with the value M increasing from 6 to 16; in the second part, the retrieval efficiency decreases dramatically with the value M increasing from 16 to 64; (2) the changing trend of retrieval precision in Figure <ref> can be separated into three stages. In the first stage, the retrieval precision increases obviously with the value M increasing from 6 to 8; in the second stage, the retrieval precision keeps constant within the value range from 8 to 16; in the third stage, the retrieval precision decreases gradually within the value range from 16 to 64. It is worth mentioning that k has a greater impact on retrieval efficiency than M because most time costs are spent in VLAD aggregation. Besides, M affects the number of valid NN neighbors that can be retrieved. Considering that at least 300 valid NN neighbors should be retrieved in the adaptive selection, the parameter M is set as 32 in the following tests.
§.§ Match pairs selection and 3D reconstruction
§.§.§ Match pairs selection by the proposed retrieval method
By using the selected parameters k and M, the performance of match pair selection is first evaluated. Similarly, retrieval efficiency and precision are used as the metrics for performance evaluation. Table <ref> lists the statistical results of match pair selection. It is clearly shown that high retrieval precision has been achieved for the three datasets, which are 90.1%, 89.9%, and 94.4% for the three datasets, respectively. It ensures that a very large proportion of selected match pairs are overlapped images. Figure <ref> shows the results of our method to retrieve similar images for two sample images from datasets 1 and 3. It can be seen that all the retrieved images are true positive results. In addition, the time costs of match pair selection are 2.5 mins, 2.6 mins, and 12.4 mins for the three datasets, respectively, which achieves the average time costs of approximately 0.040 secs, 0.039 secs, and 0.034 secs for match pair selection. Thus, we can conclude that the proposed solution can achieve linear time complexity in image indexing and retrieval and process large-scale UAV datasets for efficient match pair selection.
§.§.§ Parallel 3D reconstruction guided by the weighted view graph
The selected match pairs are then used to guide feature matching. In this study, feature matching is achieved by searching approximate nearest neighbors, refined based on the widely used ratio test and cross-checking. The initial matches are then verified by the epipolar constraint implemented by the estimation of the fundamental matrix within the framework of RANSAC. In this study, the threshold of ratio-test is set as 0.8 as the default value in the SIFTGPU library, and the maximum distance threshold is configured as 1.0 pixels to ensure the high inlier ratio of feature matching. Using feature matching results, a view graph represented as an undirected weighted graph can be constructed for each dataset, whose vertices and edges represent images and their connection relationships, respectively. As presented in Figure <ref>, three view graphs are created for the three UAV datasets, in which vertices and edges are rendered by red dots and gray lines, respectively. It is shown that there are 59,014, 65,743, and 353,005 match pairs selected from the three datasets, respectively. The dense edges between vertices indicate a strong connection between images, which ensures the success of SfM-based image orientation.
To achieve the parallel SfM reconstruction, the entire view graph is then divided into small sub-clusters with strong inner-edge connections. In the proposed parallel SfM workflow, the normalized cut algorithm is utilized for scene clustering, and the largest size of each sub-cluster is set as 500. The scene partition results are illustrated in Figure <ref>, Figure <ref>, and Figure <ref>. We can see 8, 9, and 44 sub-clusters generated for the three datasets. Each cluster is represented by an identical color, which verifies the compact connections within each cluster. Based on the sub-clusters, parallel SfM is executed to create the sub-reconstructions that are finally merged into the entire reconstruction. Table <ref> shows the statistical results of 3D reconstruction, in which the metrics precision and completeness refer to the re-projection error of BA optimization and the numbers of oriented images and reconstructed 3D points. We can see that the precision of the three datasets are 0.542 pixels, 0.668 pixels, and 0.752 pixels, respectively, and almost all images are oriented successfully, whose numbers are 3,724, 4,029, and 21,568, respectively. For the visualization, Figure <ref>, Figure <ref>, and Figure <ref> shows the reconstructed 3D points from the three datasets. It is shown that the reconstructed 3D points can cover the whole test site. Thus, the proposed solution can create stable view graphs to achieve parallel SfM.
§.§ Performance comparison with the other software packages
§.§.§ Match pair selection
The proposed solution is compared with the BoW retrieval method in ColMap and the Dbow2 retrieval method to evaluate the performance in match pair selection. The statistical results are presented in Figure <ref>. It is clearly shown that compared with BoW and Dbow2, the proposed solution achieves the highest efficiency, whose time costs are 2.5 min, 2.6 min, and 12.4 min for the three datasets. Especially for dataset 3, the time costs of BoW and Dbow2 reach 1335.5 mins and 2848.3 mins, respectively, which is unacceptable in practice. By observing the results presented in Figure <ref>, we can see that BoW almost achieves the highest precision, which is 90.3%, 92.1%, and 97.6% for the three datasets, respectively. The proposed solution ranks second with a precision of 90.1%, 89.9%, and 94.4% for the three datasets, which is higher than Dbow2. In conclusion, compared with BoW, the proposed solution can achieve comparable precision with speedup ratios ranging from 36 to 108 for the three UAV datasets.
§.§.§ SfM-based reconstruction
To evaluate the performance in the workflow of SfM-based reconstruction, the proposed solution is further compared with two commercial software packages Agisoft Metashape and Pix4Dmapper. Agisoft Metashape uses multi-scale matching and GNSS data for match pair selection; Pix4Dmapper provides a vocabulary tree-based image retrieval. In this test, camera intrinsic parameters are calibrated and fixed in SfM, and the match pairs selected from Bow and Dbow2 are fed into the proposed parallel SfM for reconstruction. Besides, 26 GCPs in the second dataset are used to evaluate geo-referencing accuracy. In the following tests, the metric efficiency indicates the time costs in SfM reconstruction without feature matching.
Table <ref> presents the statistical results of SfM reconstruction without GCPs. It is shown that BoW, Dbow2, and the proposed solution have almost the same efficiency because of using the same SfM engine. Although Metashape and Pix4Dmapper can achieve the reconstruction of datasets 1 and 2, their efficiency is lower, which further verifies the advantage of the parallel SfM workflow. Noticeably, Metashape and Pix4Dmapper fail to reconstruct dataset 3 since the large data volume causes the out-of-memory error in reconstruction. Considering the metric precision, it is shown that Pix4Dmapper achieves the highest performance, which BoW, Dbow2, and the proposed solution follow. For metric completeness, we can see that comparable performance can be observed from the evaluated software packages except for Pix4Dmapper. This is mainly caused by the relatively low precision of image retrieval.
Absolute bundle adjustment with GCPs is further executed to evaluate the geo-referencing accuracy of reconstructed models. In this test, three GCPs that are evenly distributed over test site 2 are utilized for the geo-referencing of SfM reconstructed models, and the others are used as check points (CPs). For the performance evaluation, two metrics, i.e., mean and std.dev. of CPs residuals are used in this test. In addition, Pix4dMapper has been selected as a baseline for commercial software packages.
Table <ref> presents the statistical results of absolute BA. It is shown that among all evaluated software packages, Pix4dMapper achieves the highest accuracy with the std.dev. of 0.013 cm, 0.016 cm, and 0.019 cm in the X, Y, and Z directions, respectively. Although BoW ranks second in the vertical direction with the std.dev. of 0.036 cm, its horizontal accuracy is lower than the proposed solution with the std.dev. of 0.029 cm and 0.026 cm in the X and Y directions, respectively, which can also be verified by the residual plot presented in Figure <ref> and Figure <ref>. Due to the low precision of match pair selection, the geo-referencing accuracy of Dbow2 is the lowest in the X and Z directions, as shown in Figure <ref> and Figure <ref>. Thus, we can conclude that the proposed solution can provide necessary and accurate match pairs to achieve reliable SfM reconstruction with obviously high efficiency.
§ CONCLUSIONS
In this paper, we proposed a workflow that integrates match pair retrieval and parallel SfM reconstruction to achieve the efficient and accurate 3D reconstruction of large-scale UAV images. The core idea of match pair selection is to aggregate many local features into high-dimensional global vectors that can then be indexed through a graph-based structure for efficient ANN searching. Guided by the selected match pairs, a weighted view graph is created to achieve the parallel SfM through graph clustering and sub-model merging. The tests demonstrate that the proposed workflow can significantly accelerate match pair selection with a speedup ratio of tens and hundreds of times and increase the efficiency of SfM-based reconstruction with comparative results.
In this study, some observations and possible limitations have also been noted. First, the precision of match pair selection is dramatically influenced by the number of words in the codebook generated through K-means clustering, as shown in Section <ref>. At the same time, a large K would also decrease the image retrieval efficiency. Thus, it is non-trivial to trade off precision and efficiency, especially for large-scale datasets. Second, the hand-crafted local features, i.e., SIFT, are adopted for image retrieval because of their high tolerance to scale and viewpoint changes. However, deep learning-based feature detectors have attracted considerable attention in the fields of image retrieval <cit.> and feature matching <cit.> due to their excellent ability of representation learning. Therefore, it is rational to use learned descriptors to enhance the image retrieval and feature matching algorithms in the proposed workflow. Third, only the CPU is used in the implemented algorithm, which can be further accelerated using the GPU parallel computing technique. In future research, we will conduct more tests on selecting high-quality match pairs with high efficiency by exploiting learned feature descriptors and the GPU acceleration technique.
§ ACKNOWLEDGMENTS
This research was funded by the National Natural Science Foundation of China (Grant No. 42001413), the Open Research Fund from the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) (Grant No. GML-KF-22-08), the Open Research Project of The Hubei Key Laboratory of Intelligent Geo-Information Processing (Grant No. KLIGIP-2021B11), and the Provincial Natural Science Foundation of Hunan (Grant No. 2023JJ30232).
|
http://arxiv.org/abs/2307.05368v2 | 20230711154734 | TESS discovery of a super-Earth orbiting the M dwarf star TOI-1680 | [
"M. Ghachoui",
"A. Soubkiou",
"R. D. Wells",
"B. V. Rackham",
"A. H. M. J. Triaud",
"D. Sebastian",
"S. Giacalone",
"K. G. Stassun",
"D. R. Ciardi",
"K. A. Collins",
"A. Liu",
"Y. Gómez Maqueo Chew",
"M. Gillon",
"Z. Benkhaldoun",
"L. Delrez",
"J. D. Eastman",
"O. Demangeon",
"K. Barkaoui",
"A. Burdanov",
"B. -O. Demory",
"J. de Wit",
"G. Dransfield",
"E. Ducrot",
"L. Garcia",
"M. A. Gómez-Muñoz",
"M. J. Hooton",
"E. Jehin",
"C. A. Murray",
"P. P. Pedersen",
"F. J. Pozuelos",
"D. Queloz",
"L. Sabin",
"N. Schanche",
"M. Timmermans",
"E. J. Gonzales",
"C. D. Dressing",
"C. Aganze",
"A. J. Burgasser",
"R. Gerasimov",
"C. Hsu",
"C. A. Theissen",
"D. Charbonneau",
"J. M. Jenkins",
"D. W. Latham",
"G. Ricker",
"S. Seager",
"A. Shporer",
"J. D. Twicken",
"R. Vanderspek",
"J. N. Winn",
"K. I. Collins",
"A. Fukui",
"T. Gan",
"N. Narita",
"R. P. Schwarz"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Astrobiology Research Unit, Université de Liège, Allée du 6 Août 19C, B-4000 Liège, Belgium
Oukaimeden Observatory, High Energy Physics and Astrophysics Laboratory, Cadi Ayyad University, Marrakech, Morocco
Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal
Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 150-762 Porto, Portugal
Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, CH-3012, Bern, Switzerland
Department of Earth, Atmospheric and Planetary Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
51 Pegasi b Fellow
School of Physics & Astronomy, University of Birmingham, Edgbaston, Birmimgham B15 2TT, UK
Department of Astronomy, University of California Berkeley, Berkeley, CA 94720, USA
Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235, USA
NASA Exoplanet Science Institute-Caltech/IPAC, 1200 E. California Blvd, Pasadena, CA 91125 USA
Center for Astrophysics Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Universidad Nacional Autónoma de México, Instituto de Astronomía, AP 70-264, Ciudad de México, 04510, México
Space Sciences, Technologies and Astrophysics Research (STAR) Institute, Université de Liège, Allé du 6 Août 19C, B-4000 Liège, Belgium
Instituto de Astrofísica de Canarias (IAC), Calle Vía Láctea s/n, 38200, La Laguna, Tenerife, Spain
AIM, CEA, CNRS, Université Paris-Saclay, Université de Paris, F91191 Gif-sur-Yvette, France
Paris Region Fellow, Marie Sklodowska-Curie Action
Universidad Nacional Autónoma de México, Instituto de Astronomía, AP 106, Ensenada 22800, BC, México
Cavendish Laboratory, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, 18008 Granada, Spain
Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High St. Santa Cruz, CA 95064, USA
Center for Astrophysics and Space Science, University of California San Diego, La Jolla, CA 92093, USA
Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Northwestern University, 1800 Sherman, Evanston, IL 60201, USA
NASA Ames Research Center, Moffett Field, CA 94035, USA
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Aeronautics and Astronautics, MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
SETI Institute, Mountain View, CA 94043, USA
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
George Mason University, 4400 University Drive, Fairfax, VA 22030, USA
Komaba Institute for Science, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan
Instituto de Astrofisica de Canarias (IAC), 38205 La Laguna, Tenerife, Spain
Department of Astronomy and Tsinghua Centre for Astrophysics, Tsinghua University, Beijing 100084, China
Astrobiology Center, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
We report the discovery by the TESS mission of a super-Earth on a 4.8-d orbit around an inactive M4.5 dwarf (TOI-1680), validated by ground-based facilities. The host star is located 37.14 pc away, with a radius of 0.2100±0.0064 R_⊙, a mass of 0.1800±0.0044 M_⊙, and an effective temperature of 3211±100 K. We validated and characterized the planet using TESS data, ground-based multi-wavelength photometry from TRAPPIST, SPECULOOS, and LCO, as well as high-resolution AO observations from Keck/NIRC2 and Shane. Our analyses have determined the following parameters for the planet: a radius of 1.466^+0.063_-0.049 R_⊕ and an equilibrium temperature of 404±14 K, assuming no albedo and perfect heat redistribution. Assuming a mass based on mass-radius relations, this planet is a promising target for atmospheric characterization with the James Webb Space Telescope (JWST).
TESS discovery of a super-Earth orbiting the M-dwarf star TOI-1680
M. Ghachoui<ref>,<ref>
A. Soubkiou <ref>,<ref>,<ref>
R.D. Wells <ref>
B.V. Rackham <ref>, <ref>, 51 Pegasi b Fellow
A. H.M.J. Triaud <ref>
D. Sebastian <ref>
S. Giacalone <ref>
K.G. Stassun <ref>
D.R. Ciardi <ref>
K.A. Collins <ref>
A. Liu <ref>
Y. Gómez Maqueo Chew <ref>
M. Gillon <ref>
Z. Benkhaldoun <ref>
L. Delrez <ref>,<ref>
J.D. Eastman <ref>
O. Demangeon <ref>,<ref>
K. Barkaoui <ref>,<ref>,<ref>
A. Burdanov <ref>
B.-O. Demory <ref>
J. de Wit <ref>
G. Dransfield <ref>
E. Ducrot <ref>,<ref>
L. Garcia <ref>
M.A. Gómez-Muñoz <ref>
M.J. Hooton <ref>
E. Jehin <ref>
C.A. Murray <ref>
P.P. Pedersen <ref>
F.J. Pozuelos <ref>
D. Queloz <ref>
L. Sabin <ref>
N. Schanche <ref>
M. Timmermans <ref>
E.J. Gonzales <ref>
C.D. Dressing <ref>
C. Aganze <ref>
A.J. Burgasser <ref>
R. Gerasimov <ref>
C. Hsu <ref>,<ref>
C.A. Theissen <ref>
D. Charbonneau <ref>
J.M. Jenkins <ref>
D.W. Latham <ref>
G. Ricker <ref>,<ref>
S. Seager <ref>,<ref>,<ref>
A. Shporer <ref>
J.D. Twicken <ref>
R. Vanderspek <ref>
J.N. Winn <ref>
K.I. Collins <ref>
A. Fukui <ref>,<ref>
T. Gan <ref>
N. Narita <ref>,<ref>,<ref>
R.P. Schwarz <ref>
Received / Accepted
§ INTRODUCTION
The science of exoplanets has dramatically flourished in the last decade, especially thanks to dedicated space missions. Following the completion of NASA's Kepler mission survey, where it was revealed that small transiting planets with sizes between those of Earth and Neptune (i.e., 1 R_⊕ < R_p < 4 R_⊕) are common in close-in orbits around other stars <cit.>, the Transiting Exoplanet Survey Satellite (TESS) <cit.> mission took over to search for such planets orbiting bright and nearby stars <cit.>. This mission concept was chosen to enable subsequent spectroscopic investigation of the planets' masses and atmospheres, notably with the James Webb Space Telescope (JWST) <cit.>. TESS observed 85% of the sky in its nominal mission and is now in its extended mission <cit.>. Up to now, TESS has detected more than 6000 planet candidates (TESS Objects of Interest, TOIs), including more than 1300 that could be smaller than 4 R_⊕.
The exploration of planets larger than Earth and smaller than Neptune is an area of great interest. Since such planets are not present in our Solar System, our understanding of their origins and formation mechanisms is limited. Interestingly, demographic studies performed by <cit.> on the basis of California–Kepler Survey exoplanets sample –a subset of transiting planets from Kepler with high-resolution spectroscopic follow-up of their host stars <cit.>– uncovered a gap, usually known as the “radius valley,” in the radius distribution of small planets in close orbits (<100 days) around FGK stars. This radius valley separates super-Earths and sub-Neptunes. This finding presents a key phenomenon for understanding planet formation mechanisms.
Two main theories have been proposed to explain the radius valley: thermally driven mass loss <cit.> and gas-poor formation <cit.>. Each of the two predicts a different origin for the radius valley. Moreover, a recent study conducted by <cit.> on a sample of 34 well-characterized exoplanets around M dwarfs has instead indicated the presence of a density gap separating rocky and water-rich exoplanets. However, the small size of the exoplanet samples used in these studies precludes definitive constraints. Thus, a significantly larger sample of exoplanets with accurate density estimates is strongly needed.
In this paper, we present the discovery and characterization of a super-Earth planet (1.466^+0.063_-0.049 R_⊕) that was first detected by TESS to orbit an M-dwarf star located near the continuous viewing zone (CVZ) of TESS. We validate its planetary nature using ground-based observations, including time-series photometry, high-angular-resolution imaging, and spectroscopy. Although we do not present a mass measurement in this paper, this could be done with high-precision radial velocity observations, as discussed in Sect. <ref>. This measurement would allow for detailed studies of planet formation in the future.
The paper is structured as follows. Section <ref> presents the data from TESS and all ground-based observations. Stellar characterization, validation of the transit signals, and transit analyses are presented in Sect. <ref>. We discuss our findings in Sect. <ref> and give our conclusions in Sect. <ref>.
§ OBSERVATIONS
In this section, we present all the observations of TOI-1680 obtained with TESS and ground-based facilities. Table <ref> summarizes all the ground-based, time-series photometric observations.
§.§ TESS photometry
Over its two-year primary mission, TESS <cit.> performed an all-sky survey in a series of contiguous overlapping 96 × 24 deg sectors, each observed for 27 days. Depending on the ecliptic latitude, the overlapping regions of the sectors were observed for up to ∼351 days. Given its high ecliptic latitude (β = +81.05 deg), TOI-1680 (TIC 259168516) is well placed in the CVZ. It was then observed by TESS in all the northern sectors (from 14 to 26) in the second year of the primary mission, from 18 July 2019 to 4 July 2020. It was also observed in the extended mission in sectors 40-41 from 25 June to 20 August 2021. Most recently, it was observed in sectors 47–59 from 31 December 2021 to 23 December 2022. The target pixel files (TPFs) and simple aperture photometry (SAP) apertures used in each sector are shown in Fig. <ref>, along with the overplotted locations of nearby Gaia DR2 <cit.> sources. The astrometric and photometric properties of TOI-1680 from the literature are reported in Table <ref>. The time series observations were processed in the Science Processing Operations Center (SPOC) pipeline, originally developed for the Kepler mission at NASA Ames Research Center <cit.>. The SPOC pipeline conducted a transit search of the combined light curve from sectors 14-16 on 26 October 2019 with an adaptive, noise-compensating matched filter <cit.>, producing a threshold crossing event (TCE) with a 4.8 day period for which an initial limb-darkened transit model was fitted <cit.> and a suite of diagnostic tests was conducted to help make or break the planetary nature of the signal <cit.>. The 5.1 ppt transit signature passed all diagnostic tests presented in the SPOC data validation reports, and the source of the transit signal was localized within 4.03±4.58 arcsec. The TESS Science Office (TSO) reviewed the vetting information and issued an alert for TOI-1680 b on 30 January 2020 <cit.>.
For subsequent analysis, we retrieved the 2-minute Presearch Data Conditioning light curves (PDC-SAP) from the Mikulski Archive for Space Telescopes (MAST). We were limited to sectors 14–26 of the primary mission and sectors 47–50 of the extended mission. We removed all the data points flagged as "bad quality." We then detrended the light curves to remove stellar variability using a biweight time-windowed slider via <cit.>. We excluded the transit signal by applying a filter window that is three times longer than the transit duration of 71.150_-0.936^+0.993 minutes.
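A minimal sketch of this detrending step is given below. It assumes that the biweight time-windowed slider referred to above is the one implemented in the wotan package and that the PDC-SAP light curve is loaded via lightkurve; both package choices are our assumptions for illustration.

```python
import lightkurve as lk
from wotan import flatten

# Download one sector of the SPOC 2-min PDC-SAP light curve and drop bad points.
lc = lk.search_lightcurve("TIC 259168516", mission="TESS",
                          author="SPOC", sector=14).download()
lc = lc.remove_nans()

t_dur = 71.15 / (60.0 * 24.0)             # transit duration in days (~0.049 d)
flat_flux, trend = flatten(
    lc.time.value, lc.flux.value,
    method="biweight",                    # robust time-windowed slider
    window_length=3.0 * t_dur,            # window three times the transit duration
    return_trend=True,
)
```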
§.§ Ground-based photometry
The pixel scale of the TESS spacecraft detectors is 21 arcsec per pixel <cit.>. A targeted star might therefore not be alone in a single pixel, and other stars in the same pixel could be the true source of the detection. Even if the transit signal is on target, the depth might appear shallower because of contaminating nearby stars. To confirm the signal on target and validate its planetary nature, a series of precise ground-based observations were collected using five observatories as part of the TESS Follow-up Observing Program (TFOP[<https://tess.mit.edu/followup>]). We made use of the TESS Transit Finder tool, which is a customized version of the software package <cit.>, to schedule the observations described hereafter.
§.§.§ LCOGT 1m
The first two full transits of TOI-1680 b were observed with the Las Cumbres Observatory Global Telescope (LCOGT) 1.0-m network node at McDonald Observatory. The 1-m telescopes are equipped with 4096×4096 pixel SINISTRO cameras with a pixel scale of 0.389 arcsec per pixel, offering a field of view of 26'×26'. The first transit was observed on 13 June 2020 in the Ic band for 210 min, over which we gathered 64 images with an exposure time of 150 seconds. The second transit was observed on 07 July 2020 in the Sloan i' band during an observational window of 198 min, where we collected 63 images with an exposure time of 150 seconds. The data reduction and photometric data extraction were performed using the <cit.> software package with an uncontaminated aperture of 8.0 pixels (3.11 arcsec) for both observations.
§.§.§ TRAPPIST-North photometry
We observed a partial and a full transit of TOI-1680 b with the 0.6-m TRAPPIST-North telescope located at Oukaimeden Observatory in Morocco on 22 June and 16 July 2021, respectively. TRAPPIST-North is equipped with a thermoelectrically cooled 2K×2K Andor iKon-L BEX2-DD CCD camera with a pixel scale of 0.6 arcsec per pixel, offering a field of view of 20'×20'. Both observations were performed in the Sloan z' band with an exposure time of 80 s. The first observation consisted of 113 images over 182 min and the second of 99 images over 166 min. For both datasets, we performed the data reduction and differential aperture photometry using prose[<https://github.com/lgrcia/prose>] <cit.>, which selected the optimum apertures for the photometric data extraction to be 6.94 pixels (4.16 arcsec) for the first observation and 8.32 pixels (5 arcsec) for the second.
§.§.§ SPECULOOS-North Artemis photometry
Two full transits of TOI-1680 b were observed by the Artemis telescope of the SPECULOOS Northern Observatory (SNO), located at the Teide Observatory (Canary Islands, Spain). Artemis is operated in a fully automated manner and is equipped with an Andor iKon-L camera with a 2K×2K deep-depletion CCD, which has a pixel scale of 0.35 arcsec per pixel. The first transit was observed on 09 August 2021 in the Sloan r' filter with an exposure time of 100 s. We gathered 156 images over 319 minutes. The second transit was observed on 02 September 2021 in an I+z filter (Johnson/Cousins I + Sloan z') with an exposure time of 20 s. We gathered 641 images during an observational window of 325 minutes. Both datasets were calibrated, and the differential aperture photometry was performed using the pipeline <cit.>. The aperture radii used were 5.0 pixels (1.75 arcsec) for the first observation and 8.5 pixels (2.97 arcsec) for the second.
§.§.§ SAINT-EX photometry
More ground-based photometric time-series observations of TOI-1680 b were obtained from the SAINT-EX observatory (Search And characterIsatioN of Transiting EXoplanets). SAINT-EX is a 1-m telescope in the F/8 Ritchey-Chrétien configuration operated in a fully robotic manner. It is equipped with a 2k×2k deep-depletion CCD camera with a pixel scale of 0.34 arcsec per pixel, offering a field of view of 12'×12'. The telescope is paired with an ASTELCO equatorial NTM-1000 German mount with direct-drive motors that permits observation without a meridian flip. It is in fact a twin of the SPECULOOS-South and SPECULOOS-North telescopes, and it operates as part of the SPECULOOS survey <cit.>.
Two full transits were observed by SAINT-EX. The first on 21 September 2021 in an I+z filter for 314 min, in which we gathered 520 raw images with an exposure time of 19 s. And the second on 08 November 2021 in Sloan r' filter for 206 min, with an exposure time of 105 seconds where we gathered 104 images. The data reduction and differential aperture photometry were performed automatically using the pipeline. For more information on the SAINT-EX telescope and pipeline, we refer to <cit.>.
The aperture radii used were 6.5 pixels (2.28) for the first observation and 11.0 pixels (3.85) for the second.
§.§.§ LCOGT MUSCAT3 photometry
A full transit of TOI-1680 b was observed simultaneously in the Sloan g', r', i', and Pan-STARRS z-short bands on UTC April 21, 2022 using the LCOGT 2-m Faulkes Telescope North at Haleakala Observatory on Maui, Hawaii. The telescope is equipped with the MuSCAT3 multi-band imager <cit.>. The raw images were calibrated using the standard LCOGT BANZAI pipeline <cit.>, and photometric measurements were extracted using AstroImageJ <cit.>. The light curve in the Sloan g' filter was not included in the analyses because of its low signal-to-noise ratio (S/N), which is due to the faintness of the star in this band.
§.§ Spectroscopic observations
With the aim of better constraining the stellar properties, we also performed the spectroscopic observations detailed hereafter. The analyses are presented in Sect. <ref>.
§.§.§ IRTF/SpeX
We gathered a near-infrared spectrum of TOI-1680 with the SpeX spectrograph <cit.> on the 3.2-m NASA Infrared Telescope Facility (IRTF) on 19 Oct 2021 (UT). The conditions were clear with a seeing of 1.0–1.2 arcsec.
We followed the same observational design as other recent IRTF/SpeX observations of M-dwarf TOIs <cit.>. We used the short-wavelength cross-dispersed (SXD) mode with the 0.3× 15 slit aligned to the parallactic angle, which gives a set of spectra covering 0.75–2.42 μm with a resolving power of R∼2000. Nodding in an ABBA pattern, we collected 18 exposures of 64.9 s each, totaling 19.5 min on source. We collected a set of standard SXD flat-field and arc-lamp exposures immediately after the science frames, followed by a set of six, 2.8-s exposures of the A0 V star HD 172728 (V=5.7). We reduced the data using Spextool v4.1 <cit.>, following the instructions for standard usage in the Spextool User's Manual[Available at <http://irtfweb.ifa.hawaii.edu/ spex/observer/>]. The final spectrum has a median S/N per pixel of 68 with peaks in the J, H, and K bands of 98, 101, and 91, respectively, along with an average of 2.5 pixels per resolution element.
§.§.§ Shane/Kast
We obtained a low-resolution optical spectrum of TOI-1680 on 27 Nov 2021 (UT) using the Kast double spectrograph <cit.> on the 3-m Shane Telescope at Lick Observatory. Conditions were partly cloudy with a seeing of 1 arcsec. We obtained two sequential exposures of 1200 s (40 minutes total) through the red channel of Kast using the 600/7500 grism and the 2 arcsec-wide slit, providing spectra covering 5900–9200 Å at an average resolving power of R ≈ 1900. We observed the spectrophotometric calibrator Feige 110 <cit.> earlier that night for flux calibration, and the G2 V star HD 205113 (V = 6.87) immediately after for telluric absorption calibration. Flat-field and arc line lamps were obtained at the start of the night for flux and wavelength calibration. Data were reduced and analyzed using the kastredux package[<https://github.com/aburgasser/kastredux>], with standard settings for image reduction and calibration, boxcar extraction of the spectrum, wavelength calibration, flux calibration, and telluric absorption calibration. The final spectrum has an S/N = 150 at 7500 Å and a wavelength accuracy of 0.51 Å (22 km/s).
§.§ High-Resolution Imaging
As part of our standard process for validating transiting exoplanets, and to assess the possible contamination of bound or unbound companions on the derived planetary radii <cit.>, we observed TOI-1680 with near-infrared adaptive optics (AO) imaging at the Keck and Shane Observatories. Gaia DR3 is also used to provide additional constraints on the presence of undetected stellar companions as well as wide companions.
§.§.§ Keck-II near-infrared adaptive optics imaging
The Keck Observatory observations were made with the NIRC2 instrument on Keck-II behind the natural guide star AO system <cit.> on 28 Aug 2021 UT in the standard three-point dither pattern that is used with NIRC2 to avoid the left lower quadrant of the detector, which is typically noisier than the other three quadrants. The dither pattern step size was 3 and was repeated twice, with each dither offset from the previous dither by 0.5. NIRC2 was used in the narrow-angle mode with a full field of view of ∼10 and a pixel scale of approximately 0.0099442 per pixel. The Keck observations were made in the K filter (λ_o = 2.196; Δλ = 0.336 μm) with an integration time of 1 second for a total of 9 seconds on target.
The AO data were processed and analyzed with a custom set of IDL tools. The science frames were flat-fielded and sky-subtracted. The flat fields were generated from a median average of dark-subtracted flats taken on-sky. The flats were normalized such that the median value of the flats is unity. The sky frames were generated from the median average of the dithered science frames; each science image was then sky-subtracted and flat-fielded. The reduced science frames were combined into a single combined image using an intra-pixel interpolation that conserves flux, shifts the individual dithered frames by the appropriate fractional pixels, and median-coadds the frames. The final resolutions of the combined dithers were determined from the full-width half-maximum of the point spread functions (0.056 for the Keck observations).
The sensitivities of the final combined AO image were determined by injecting simulated sources azimuthally around the primary target every 20^∘ at separations of integer multiples of the central source's FWHM <cit.>. The brightness of each injected source was scaled until standard aperture photometry detected it with
5σ significance. The resulting brightness of the injected sources relative to TOI-1680 set the contrast limits at that injection location. The final 5σ limit at each separation was determined from the average of all of the determined limits at that separation, and the uncertainty on the limit was set by the rms dispersion of the azimuthal slices at a given radial distance. The Keck data have a close-in sensitivity of δmag = 2.9 mag at 0.06 arcsec, and deeper sensitivity at wider separations (δmag = 6.5 mag at ≳0.4 arcsec). The final sensitivity curve for the Keck observations is shown in Fig. <ref>. No close-in (≲ 1 arcsec) stellar companions were detected by Keck.
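The injection-recovery logic can be illustrated with a short sketch. The published analysis used custom IDL tools; the Gaussian PSF stand-in, the aperture sizes, and the amplitude grid below are illustrative assumptions only.

```python
import numpy as np

def gaussian_psf(shape, x0, y0, fwhm, amp):
    """Simple circular Gaussian stand-in for the instrumental PSF."""
    sigma = fwhm / 2.3548
    y, x = np.indices(shape)
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

def aperture_snr(image, x0, y0, r_ap, r_in, r_out):
    """Aperture S/N using a sky annulus for the background level and noise."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    in_ap = r <= r_ap
    sky = image[(r >= r_in) & (r <= r_out)]
    signal = image[in_ap].sum() - sky.mean() * in_ap.sum()
    noise = sky.std() * np.sqrt(in_ap.sum())
    return signal / noise

def contrast_at(image, xc, yc, sep_pix, fwhm, star_peak, n_angles=18):
    """5-sigma contrast (mean, rms over azimuth) at one radial separation."""
    amps = star_peak * 10 ** (-0.4 * np.arange(0.0, 9.0, 0.25))  # 0-9 mag fainter
    dmags = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        xi = xc + sep_pix * np.cos(theta)
        yi = yc + sep_pix * np.sin(theta)
        detected = [a for a in amps if aperture_snr(
            image + gaussian_psf(image.shape, xi, yi, fwhm, a),
            xi, yi, fwhm, 2.0 * fwhm, 4.0 * fwhm) >= 5.0]
        faintest = min(detected) if detected else amps[0]
        dmags.append(-2.5 * np.log10(faintest / star_peak))
    return np.mean(dmags), np.std(dmags)

# e.g. contrast_at(reduced_image, 128, 128, sep_pix=40, fwhm=5.6,
#                  star_peak=reduced_image.max())
```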
§.§.§ Shane near-infrared adaptive optics imaging
We observed TIC 259168516 on UT 2021 June 2 using the ShARCS camera on the Shane 3-meter telescope at Lick Observatory <cit.>. The observation was taken with the Shane adaptive optics system in natural guide star mode. The final images were constructed using sequences of images taken in a four-point dither pattern with a separation of 4 between each dither position. Two image sequences were taken of this star: one with a Ks filter (λ_0 = 2.150 μm, Δλ = 0.320 μm) and one with a J filter (λ_0 = 1.238 μm, Δλ = 0.271 μm). A more detailed description of the observing strategy and reduction procedure can be found in <cit.>. The contrast curves extracted from these observations are shown in Fig <ref>. With the Ks filter, we achieve contrasts of 2.5 at 1 and 4.4 at 2. With the J filter, we achieve contrasts of 2.8 at 1 and 4.0 at 2. We detect one companion about 58 west of TIC 259168516 that is 5.0 magnitudes fainter in Ks and 5.7 magnitudes fainter in J. Based on this, the star is likely the known neighbor TIC 1884271108. Gaia EDR3 parallax and proper motion indicate that it is another line-of-sight star.
§.§ Gaia assessment
In addition to the high-resolution imaging, we have used Gaia to identify any wide stellar companions that may be bound members of the system. Typically, these stars are already in the TESS Input Catalog, and their flux dilution to the transit has already been accounted for in the transit fits and associated derived parameters from the TESS PDC-SAP photometry. There are no additional widely separated companions identified by Gaia that have the same distance and proper motion as TOI-1680 <cit.>.
Additionally, the Gaia DR3 astrometry provides additional information on the possibility of inner companions that may have gone undetected by either Gaia or the high-resolution imaging. The renormalised unit weight error (RUWE) is a metric, similar to a reduced chi-square, where values ≲ 1.4 indicate that the astrometric solution is consistent with the star being single, whereas RUWE values ≳ 1.4 may indicate astrometric excess noise, possibly caused by the presence of an unseen massive (stellar) companion <cit.>. TOI-1680 has a Gaia DR3 RUWE value of 1.05, indicating that the astrometric fits are consistent with the single-star model.
§ ANALYSES
§.§ Stellar characterization
§.§.§ Spectroscopic analysis
The Shane/Kast optical and IRTF/SpeX near-infrared spectra allow us to assess the fundamental stellar properties of TOI-1680.
Using tools in the package, we compared the optical spectrum to the SDSS M dwarf templates of <cit.>, and found a best overall match to the M5 template (Fig. <ref>). Spectral indices from <cit.>, <cit.>, <cit.>, and <cit.> are more consistent with an M4 classification, suggesting an intermediate type of M4.5±0.5.
The ζ metallicity index of <cit.>, based on relative strengths of TiO and CaH features, is measured to be 1.025±0.002, consistent with a metallicity of [Fe/H] = +0.04±0.20 based on the calibration of <cit.>.
We see no evidence of Hα emission in the Balmer line at 6563 Å (equivalent width limit of <0.3 Å), suggesting an age ≳4–7 Gyr <cit.>.
The SpeX SXD spectrum of TOI-1680 is shown in Fig. <ref>.
We used the SpeX Prism Library Analysis Toolkit <cit.> to compare the spectrum to that of single-star spectral standards in the IRTF Spectral Library <cit.>, finding the best single match to the M3.5 standard Gl 273. We note that the shape of the spectrum of suggests it is cooler than the M3.5 standard, though the M4.0 standards in the library give poorer fits.
We adopt an infrared spectral type of M3.5 ± 0.5, earlier than but consistent with the optical classification. After adjusting for a barycentric velocity of -1.64 km/s, we cross-correlated the SpeX spectrum of TOI-1680 with the rest-frame velocity of the M3.5 standard to determine the radial velocity. Determining the uncertainty of the cross-correlation with a Monte Carlo approach, we estimate a radial velocity of -34.3 ± 3.3 km/s. After applying a radial-velocity correction, we confirmed that the best-fit spectral standard did not change.
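For illustration, a numpy-only sketch of such a cross-correlation RV estimate with a Monte Carlo error bar is shown below. The published value was derived with the SpeX/SPLAT tooling; the common log-wavelength grid, the simple interpolation, and the function name here are our assumptions.

```python
import numpy as np

C_KMS = 299792.458

def ccf_rv(wave, flux, flux_err, template, v_grid_kms, n_mc=200, seed=0):
    """Radial velocity from a template cross-correlation, with MC uncertainty."""
    logw = np.log(wave)

    def best_velocity(f):
        f = f - np.nanmean(f)
        cc = []
        for v in v_grid_kms:
            # Template red-shifted by v, evaluated on the observed wavelength grid.
            shifted = np.interp(logw - np.log(1.0 + v / C_KMS), logw, template)
            cc.append(np.nansum(f * (shifted - np.nanmean(shifted))))
        return v_grid_kms[int(np.argmax(cc))]

    rng = np.random.default_rng(seed)
    rv = best_velocity(flux)
    draws = [best_velocity(flux + rng.normal(0.0, flux_err)) for _ in range(n_mc)]
    return rv, float(np.std(draws))

# e.g. rv, rv_err = ccf_rv(wave, flux, flux_err, standard_flux,
#                          np.arange(-150.0, 150.0, 0.5))
```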
The SpeX spectrum also provides an estimate of stellar metallicity.
Using SPLAT, we measured the equivalent widths of the K-band Na i and Ca i doublets and the H2O–K2 index <cit.>.
We then used the <cit.> relation between these observables and [Fe/H] to estimate the stellar metallicity, propagating uncertainties using a Monte Carlo approach (see ).
We determined a metallicity of [Fe/H] = -0.32 ± 0.13, which is lower than but formally consistent with the optical measurement and more in line with the apparent old magnetic activity age of the star.
§.§.§ Empirical relations
We used available empirical relationships appropriate for M dwarfs to determine the stellar parameters of TOI-1680. We first used the Gaia EDR3 parallax and the 2MASS m_K apparent magnitude to calculate the M_K absolute magnitude and found M_K = 7.9720 ± 0.0203. We then used the empirical relationship between the mass and the M_K absolute magnitude of <cit.> to estimate the mass of TOI-1680, which we found to be M_⋆ = 0.1800 ± 0.0043 M_⊙. This is in good agreement with the mass value of M_⋆ = 0.1765 ± 0.02 M_⊙, estimated using the mass–luminosity relation in the K-band from <cit.>, where the uncertainty is dominated by the scatter in the mass-Ks relation.
Using the empirical polynomial relation between the stellar radius R_* and M_K derived by <cit.>, we estimated R_* to be 0.2130 ± 0.0064 , with a typical uncertainty of 3% as reported in Table 1 of <cit.>. As an independent check, we used the mass–radius relationship of <cit.> to determine the radius from the masses we previously estimated. We found
R_*=0.2075 ± 0.0039 , which is consistent with the aforementioned radius determination. This leads to a stellar density of 27.4±2.6 g cm^-3.
As for the effective temperature determination, we first estimated the bolometric correction in the K-band to be BC_K = 2.7414 ± 0.0822 mag, by making use of the empirical polynomial relation between BC_K and V-J of <cit.>. Then, we determined a bolometric magnitude of M_bol = 10.72 ± 0.085 mag, which gives a bolometric luminosity of L_* = 0.004353 ± 0.000340. The Stefan-Boltzmann Law, along with the aforementioned stellar radius and bolometric luminosity, gives an effective temperature T_ eff = 3210 ± 62 K. Independently, we also determined the effective temperature based on the empirical relation of <cit.> using the color indexes V-J and J-H, and found T_ eff = 3224 ± 100 K. The two values are consistent within 1σ.
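The chain of estimates above can be checked with a few lines. This is a sketch only: the bolometric zero point and the rounding of the inputs are our assumptions, and the exact published numbers come from the cited calibrations.

```python
import numpy as np

M_K   = 7.9720        # absolute Ks magnitude from the Gaia parallax and 2MASS m_K
BC_K  = 2.7414        # K-band bolometric correction from the empirical relation
M_bol = M_K + BC_K    # ~10.71 mag, matching the value quoted above

L_Lsun = 0.004353     # bolometric luminosity adopted in the text [L_sun]
R_Rsun = 0.2130       # radius from the M_K-radius relation [R_sun]

# Stefan-Boltzmann: T_eff = T_sun * (L/L_sun)^(1/4) * (R_sun/R)^(1/2)
T_eff = 5772.0 * L_Lsun ** 0.25 / np.sqrt(R_Rsun)
print(f"M_bol = {M_bol:.2f} mag, T_eff = {T_eff:.0f} K")   # ~3210 K, as derived above
```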
§.§.§ SED fitting
As an independent check, we used the analysis package <cit.> to perform a fit of the broadband spectral energy distribution (SED) of TOI-1680 using MIST stellar models <cit.> to determine the stellar parameters. We pulled the JHK_s magnitudes from the 2MASS catalog <cit.>, the WISE1-WISE4 magnitudes from the AllWISE catalog <cit.>, and the G, G_BP, and G_RP magnitudes from Gaia EDR3 (see Table <ref>). We performed the fit with T_ eff, R_*, and M_* as free parameters, with a Gaussian prior on the Gaia EDR3 parallax, which we corrected for systematics by subtracting -0.041867248 mas from the nominal value according to the <cit.> prescription.
We set an upper limit on the extinction of A_V = 0.29233 from the dust maps of <cit.> and a Gaussian prior on the stellar metallicity from IRTF/SpeX (see Table <ref>). The SED fit results, reported in Table <ref>, are in excellent agreement with our previous determinations.
§.§ Statistical validation
To statistically validate TOI-1680 b, we used triceratops[<https://github.com/stevengiacalone/triceratops>] <cit.>, which validates planets by simulating astrophysical false positives arising from gravitationally bound stellar companions, chance-aligned foreground or background stars, and known nearby stars that are blended with the target in the TESS data. It uses a Bayesian framework that incorporates prior knowledge of the target star, planet occurrence rates, and stellar multiplicity to calculate the false positive probability (FPP) and nearby false positive probability (NFPP). The FPP quantity represents the probability that the observed transit is due to something other than a transiting planet around the target star, and the NFPP quantity represents the probability that the observed transit originates from a resolved nearby star rather than the target star. <cit.> state that for a planet to be statistically validated it must have FPP < 0.015 and NFPP < 0.001.
We first applied triceratops to the TESS 2-min-cadence light curve, supplied with the contrast curve obtained from the NIRC2 AO imaging in Sect. <ref>. The resulting FPP and NFPP values are 0.0018 ± 0.0001 and 0.0017 ± 0.0001, respectively. The FPP is below its threshold, but the NFPP is above the threshold required to classify the candidate as a validated planet <cit.>. Only three nearby stars were bright enough and close enough to the target star to cause nearby false positives: TIC 1884271108 (Δ T = 6.4, sep = 6), TIC 259168518 (Δ T = 1.3, sep = 30), and TIC 259168513 (Δ T = 2.6, sep = 37). However, because the event observed by TESS was confirmed to be on-target by our ground-based observations, we were able to rule out these stars as sources of false positives and set NFPP = 0 from the outset. The FPP is then reduced to 0.0001 ± 0.0001, which is low enough for validating the planet. Independently, we also used the light curves obtained by the Artemis 1-m and LCO-Hal 2-m telescopes, as they present tighter photometric constraints than the TESS data. These were supplied with the same contrast curve mentioned above and without removing any nearby star. This yields FPP and NFPP values generally lower than 0.01 and 0.001, respectively. Therefore, we consider this candidate to be a validated planet.
§.§ Stellar activity
With an ecliptic latitude of β = +81.05 deg, TOI-1680 is located near the northern ecliptic pole in the TESS CVZ. Targets in the CVZ are highly valuable for extracting long-period rotation rates. We first visually inspected the PDC-SAP light curves and found no hints of rotational modulation nor evidence of flares. We then used the Systematics-Insensitive Periodogram (TESS-SIP[<https://github.com/christinahedges/TESS-SIP>]) to build a periodogram for the photometric data from all 19 sectors. This tool creates a Lomb-Scargle periodogram, while simultaneously detrending systematics using a similar method to that described in <cit.> for detrending systematics in the NASA Kepler/K2 dataset. Since the rotational period of the star might be removed by the PDC pipeline, TESS-SIP uses the target pixel file (TPF) data and the apertures assigned by the SPOC pipeline to reproduce simple aperture photometry (SAP) light curves of the target. We applied this operation to all 19 observing sectors. Searching for periods between 10 and 365 days, we applied TESS-SIP to the target and to all the background pixels outside of the pipeline aperture in the TPFs for TOI-1680. The SIP powers, presented in Fig. <ref>, show a marked similarity between the periodograms of the target and of the background. For comparison, the lower panel shows the ratio of the target to the background powers. We do not see any significant peaks where the target would display power substantially greater than that of the background.
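The target-versus-background comparison can be illustrated with a plain Lomb-Scargle periodogram. This is a simplified sketch: TESS-SIP additionally detrends spacecraft systematics, which this snippet does not attempt, and the time, targ_flux, and bkg_flux arrays are assumed to hold the in-aperture SAP and summed background-pixel light curves.

```python
import numpy as np
from astropy.timeseries import LombScargle

periods = np.linspace(10.0, 365.0, 2000)      # days, matching the search range above
freq = 1.0 / periods

power_target = LombScargle(time, targ_flux).power(freq)   # in-aperture SAP flux
power_bkg = LombScargle(time, bkg_flux).power(freq)       # background pixels
ratio = power_target / power_bkg   # rotation would appear as peaks with ratio >> 1
```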
We also applied TESS-SIP to each sector individually. We did not find any clear stellar rotation signal nor consistency between the periodograms of the different sectors. In short, there is no hint of rotational variability detected for TOI-1680 in the TESS dataset, which is consistent with the lack of detectable Hα emission in its optical spectrum.
Although the TESS CVZ light curves would encompass typical rotation periods for mid-M-dwarfs <cit.>, we searched for completeness other ground-based photometric archives. Our target is not part of the MEarth sample <cit.>. We analyzed the ASAS-SN light curves <cit.> for our object that had observations in V and g, spanning from June 2012 to June 2023. We do not find rotational modulation. We note that given the faintness of our target in bluer bands (e.g., Gaia_BP = 16.27 mag), only ∼65% of the ASAS-SN observations were above the observational limit of each individual observation (with a median limiting magnitude of 16.832 mag in the full light curve). Furthermore, flares cannot be robustly detected in the ASAS-SN data because of its cadence <cit.>.
§.§ Transit modeling
We jointly analyzed the light curves from TESS and the ground-based instrumentation described in Sect. <ref> using the EXOFASTv2 <cit.> software package. We included the TESS photometric data described in Sect. <ref> from sectors 14 to 50. We detrended the ground-based light curves for the airmass, as well as for either the background or the half width at half maximum (HWHM) of the PSF. The choice of these parameters was based on the likelihood maximization. Some light curves were detrended only for airmass, especially the partial ones. Table <ref> shows the detrending parameters of each light curve. The detrending was done simultaneously with the transit fitting to ensure a good propagation of the uncertainties on the derived parameters. We fixed the eccentricity to zero assuming the orbit to be circular (see justification below). We set the flag to disable the MIST stellar track that constrains the star and, instead, we imposed Gaussian priors on the stellar mass (0.1800±0.0044 M_⊙), radius (0.2100±0.0064 R_⊙) and temperature (3224±100 K) from our determinations reported in Table <ref>, as well as uniform priors on the period (P = 4.8026 ± 0.1P d) and transit epoch (T_c± P/3) from the values reported on ExoFOP. We set the flag to disable the Claret tables <cit.> that are used to fit the quadratic limb-darkening parameters u_1 and u_2, and we applied our own Gaussian priors computed using the code <cit.> for each passband (see Table <ref>). TESS light curves are corrected for contamination, but we still fit for dilution of the transit signal in the TESS band due to the neighboring stars, using 0±10% of the contamination ratio reported for TIC 259168516 on ExoFOP as a Gaussian prior, as recommended by <cit.>, to account for any uncertainty in the correction. We ran the EXOFASTv2 analysis until convergence, when the Gelman-Rubin statistic (GR) and the number of independent chain draws (Tz) were less than 1.01 and greater than 1000, respectively.
To test the impact of the detrending on our derived parameters, we performed independent analyses of the light curves with another code. We used the package <cit.> and linear models of detrending vectors (FWHM, airmass, and background), as done in <cit.> and <cit.>. We found good agreement between the fits and therefore continued with EXOFASTv2 for the full analysis. The detrended and modeled light curves are presented in Fig. <ref> and the phase-folded light curve is presented in Fig. <ref>. The results are reported in Table <ref>. We also fitted for an eccentric orbit to assess the evidence for orbital eccentricity using photometry-only data. A comparison of the log-likelihoods of the two fits favors a circular orbit.
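As a cross-check of the adopted geometry, the phase-folded transit shape can be reproduced with a simple forward model. This is a sketch only: the joint fit above was performed with EXOFASTv2, the inclination and limb-darkening values below are placeholders, and the scaled semi-major axis follows from Kepler's third law with the stellar mass quoted earlier.

```python
import numpy as np
import batman

P = 4.8026343                                          # orbital period [d]
rp_rs = 1.466 * 6371.0 / (0.2100 * 695700.0)           # planet-to-star radius ratio ~0.064
a_au = (0.1800 * (P / 365.25) ** 2) ** (1.0 / 3.0)     # Kepler's third law [AU]
a_rs = a_au * 215.032 / 0.2100                         # scaled semi-major axis ~32

params = batman.TransitParams()
params.t0 = 0.0
params.per = P
params.rp = rp_rs
params.a = a_rs
params.inc = 89.4                  # placeholder near-central transit
params.ecc = 0.0                   # circular orbit, as adopted
params.w = 90.0
params.u = [0.3, 0.3]              # placeholder quadratic limb darkening
params.limb_dark = "quadratic"

t = np.linspace(-0.1, 0.1, 2000)                       # days from mid-transit
flux = batman.TransitModel(params, t).light_curve(params)
print(f"model depth ~ {(1.0 - flux.min()) * 1e3:.1f} ppt")  # a few ppt, ~70 min duration
```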
We were concerned that the background level was overestimated and overcorrected in the SPOC pipeline in the northern year 2 sectors (14-26). Fitting only the TESS data from sectors 14 to 50 would lead to an overestimation of the planet radius by roughly 2%. However, this bias is comparable to the error bars on the planet radius, and the inclusion of the dilution term and additional ground-based data significantly mitigates the problem.
§ DISCUSSION
§.§ b and the radius valley
Studies performed by <cit.> on the small Kepler exoplanets have identified a radius valley roughly from 1.5 to 2 R_⊕, separating rocky super-Earths and gaseous sub-Neptunes around Sun-like stars. A low-radius peak at 1.3 R_⊕ corresponds to high-density super-Earths and a high-radius peak at 2.4 R_⊕ corresponds to low-density sub-Neptunes with significant primordial H/He atmospheres. This gap is considered a possible transition region between rocky "super-Earths" and icy "mini-Neptunes." <cit.> showed that the radius valley persists for low-mass stars (i.e., M ≲ 0.65 M_⊙).
Two contrasting theories have been presented to explain the radius valley. The first is gas-poor formation model which proposes that the radius valley is a feature intrinsic to the exoplanet population from formation onward <cit.>. Specifically, some planets are formed with extended H/He envelopes, whereas the population of rocky planets is formed later in a gas-poor environment after the gas is dissipated from the protoplanetary disk. The second is thermally driven atmospheric mass loss <cit.>, which proposes that the radius gap is formed through evolution after the gas accretion phase. That is, planets are formed with gaseous envelopes and some of them experience atmospheric escape through two scenarios: 1) photoevaporation <cit.> triggered by energetic EUV and X-ray flux from the host star in the first ∼100 Myrs <cit.> of the system and 2) core-powered mass-loss <cit.> triggered by the energy emergent from the cooling planetary core in a Gyr timescale <cit.>.
On the contrary, a recent study performed by <cit.> further suggests that there is a density gap, but not a radius gap, separating rocky planets and water-rich worlds with no planets with intermediate composition. Using a sample of 34 exoplanets with well-characterized densities around M-dwarf stars, they identified three populations: rocky planets (ρ=0.94±0.13 ρ_⊕), water-rich planets (ρ=0.47±0.05 ρ_⊕), and gas-rich planets (ρ=0.24±0.04 ρ_⊕). These study findings favor the pebble accretion model <cit.> as the main mechanism for forming small planets around M dwarfs, where rocky planets are formed within the ice line while water-rich planets are formed beyond the ice line and then migrated inwards. However, as for previous studies, the sample of exoplanets used in this study is not large enough to draw firm conclusions.
Figure <ref> shows the current period–radius diagram of all known exoplanets with precise radius measurements orbiting M dwarfs. The empirical locations of the radius valley for FGK stars as predicted by thermally driven photoevaporation (dashed line) given by <cit.> and for low-mass stars as predicted by gas-poor formation (solid line) given by <cit.> are also displayed. With a radius of 1.466^+0.063_-0.049 R_⊕ and an orbital period of 4.8026343±0.0000030 days, TOI-1680 b is located close to the center of the super-Earth population, where the two models predict the location of small rocky planets. With a future planetary mass determination, TOI-1680 b can join the growing sample of small planets with precise bulk densities around M dwarfs.
§.§ Prospects for a radial velocity follow-up
The precise mass determination of TOI-1680 b would allow us to better constrain the detectability of a possible atmosphere and to better locate the planet in the radius-density gap. High-precision radial velocity (RV) measurements will not only allow us to constrain the planetary mass, but also its orbital parameters, such as the eccentricity, which may shed some light on the dynamical history of the system.
TOI-1680 b has a radius of 1.466^+0.063_-0.049 R_⊕. Thus, we expect an RV semi-amplitude of 3.78^+1.3_-0.82 m s^-1, assuming a circular orbit and a mass of 3.18^+1.1_-0.69 M_⊕, as predicted from the mass–radius relation of <cit.>.
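This expectation follows directly from the standard semi-amplitude formula; a quick check, assuming sin i ≈ 1 and the predicted mass, is:

```python
import numpy as np

G = 6.674e-11                       # m^3 kg^-1 s^-2
M_SUN, M_EARTH = 1.989e30, 5.972e24

P = 4.8026343 * 86400.0             # orbital period [s]
M_star = 0.1800 * M_SUN
M_p = 3.18 * M_EARTH                # mass predicted by the mass-radius relation

K = (2.0 * np.pi * G / P) ** (1.0 / 3.0) * M_p / (M_star + M_p) ** (2.0 / 3.0)
print(f"K ~ {K:.2f} m/s")           # ~3.8 m/s, consistent with the value quoted above
```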
Not many high-precision spectrographs in the northern hemisphere are capable of detecting such a small signal from a faint target (V = 15.87 mag and J = 11.63 mag). Many typical planet finders mounted on 2–4 m class telescopes, such as CARMENES <cit.>, HARPS-N <cit.>, NEID <cit.>, or EXPRES <cit.>, have limiting magnitudes that are brighter than V=15.87. MAROON-X at the 8.1 m Gemini North telescope <cit.> has been shown to reach the necessary precision for faint M-dwarf host stars <cit.>. Despite the high declination, it would be possible to reach an S/N of about 40 in the red arm after a 15 min exposure. Assuming a stellar activity level of < 1.5 m s^-1, this would allow for an overall precision of 0.7 m s^-1 with about 70 spectra. Thus, it will be possible to measure the planet mass with state-of-the-art instrumentation at 5 σ precision by investing about 29 h of telescope time on an 8-m-class telescope.
§.§ Potential for atmospheric characterization
We assess the potential for atmospheric characterization of TOI-1680 b with JWST using the transmission spectroscopy metric (TSM) <cit.>. The TSM quantifies the expected S/N in the transmission spectrum of a given planet with a cloud-free atmosphere. Analytically, it is expressed as:
TSM = S×R_p^3T_eq/M_pR_*^2× 10^-m_j/5,
where R_p and M_p are the planetary radius and mass in Earth units, R_* is the stellar radius in Solar radii, T_eq is the equilibrium temperature of the planet in K and m_J is the apparent magnitude of the star in the J band. Also, S is a scale factor whose value depends on the planetary radius range.
TOI-1680 b is a cool (T_eq < 500 K) super-Earth (R_p < 1.5 R_⊕). With J=11.6, the host star is within reach of the JWST NIRSpec/PRISM (0.6–5.3 μm) instrument <cit.>, which cannot observe stars brighter than J=10.5 without saturation. We used the TSM to assess the suitability of all cool (T_eq < 500 K) planets with radii smaller than 1.5 R_⊕ orbiting stars fainter than J=10.5 for atmospheric studies with this instrument. This sample of exoplanets contains 63 targets. We used the empirical mass–radius relation of <cit.> to estimate the mass of planets that do not have mass measurements, as is the case for TOI-1680 b. TSM values of these planets are shown in Fig. <ref>. We found that TOI-1680 b has a TSM = 7.82, which indicates that it could be a suitable target for transmission studies with the NIRSpec/PRISM instrument. Specifically, amongst the 63 targets, TOI-1680 b ranks as the thirteenth most amenable target for these studies. It follows all the TRAPPIST-1 planets <cit.>, Kepler-42 d <cit.>, K2-415 b <cit.>, LP 791-18 d <cit.>, LP 890-9 b <cit.> and TOI-237 b <cit.>. Moreover, the TSM is based on ten hours of observing time. With an ecliptic latitude of β = +81.05 deg, TOI-1680 b is located near the CVZ and has the advantage of being observable for about 250 days per year, which encourages its atmospheric characterization.
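Evaluating the metric with the values adopted in this work roughly reproduces the quoted number. In this sketch, S = 0.190 is the scale factor for planets with R_p < 1.5 R_⊕ and the mass is the mass-radius prediction used above; small differences come from rounding of the inputs.

```python
Rp, Mp, Teq = 1.466, 3.18, 404.0     # R_Earth, M_Earth, K
Rs, mJ, S = 0.2100, 11.63, 0.190     # R_sun, J magnitude, scale factor

TSM = S * Rp ** 3 * Teq / (Mp * Rs ** 2) * 10 ** (-mJ / 5.0)
print(f"TSM ~ {TSM:.1f}")            # ~8, within rounding of the quoted 7.82
```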
§ CONCLUSION
We have reported the discovery and initial characterization of TOI-1680 b, a super-Earth orbiting a faint mid-M dwarf (V=15.87). We used the combination of 2-min cadence TESS observations from 19 sectors, ground-based photometry, high-angular-resolution imaging, and spectroscopic observations to validate its planetary nature. Joint analyses of the TESS and ground-based data yielded a planetary radius of 1.466^+0.063_-0.049 R_⊕, an orbital period of 4.8026345^+0.0000040_-0.0000039 days, and an equilibrium temperature of 404±14 K. According to the transmission spectroscopy metric (TSM) of <cit.>, TOI-1680 b could be a promising candidate for atmospheric characterization with JWST. However, a stronger prediction of the expected S/N awaits a direct mass measurement from radial velocity observations, which could be done with the MAROON-X instrument at the 8.1 m Gemini North telescope.
This research received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n^∘ 803193/BEBOP).
B.V.R. thanks the Heising-Simons Foundation for support.
TRAPPIST is funded by the Belgian Fund for Scientific Research (Fond National de la Recherche Scientifique, FNRS) under the grant FRFC 2.5.594.09.F, with the participation of the Swiss National Science Fundation (SNF).
The ULiege's contribution to SPECULOOS has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) (grant Agreement n^∘ 336480/SPECULOOS), from the Balzan Prize and Francqui Foundations, from the Belgian Scientific Research Foundation (F.R.S.-FNRS; grant n^∘ T.0109.20), from the University of Liege, and from the ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation.
This work is based upon observations carried out at the Observatorio Astronómico Nacional on the Sierra de San Pedro Mártir (OAN-SPM), Baja California, México. SAINT-EX observations and team were supported by the Swiss National Science Foundation (PP00P2-163967 and PP00P2-190080),
the Centre for Space and Habitability (CSH) of the University of Bern, the National Centre for Competence in Research PlanetS, supported by the Swiss National Science Foundation (SNSF), and UNAM PAPIIT-IG101321.
The postdoctoral fellowship of KB is funded by F.R.S.-FNRS grant T.0109.20 and by the Francqui Foundation.
This work makes use of observations from the LCOGT network. Part of the LCOGT telescope time was granted by NOIRLab through the Mid-Scale Innovations Program (MSIP). MSIP is funded by NSF.
This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
Funding for the TESS mission is provided by NASA's Science Mission Directorate. KAC acknowledges support from the TESS mission via subaward s3449 from MIT.
Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products.
This paper is based on observations made with the MuSCAT3 instrument, developed by the Astrobiology Center and under financial supports by JSPS KAKENHI (JP18H05439) and JST PRESTO (JPMJPR1775), at Faulkes Telescope North on Maui, HI, operated by the Las Cumbres Observatory.This work is partly supported by JSPS KAKENHI Grant Number JP18H05439
and JST CREST Grant Number JPMJCR1761.
This work was partially supported by a grant from the Erasmus+ International Credit Mobility program (M. Ghachoui).
MG is F.R.S-FNRS Research Director. LD is an F.R.S.-FNRS Postdoctoral Researcher.
J.d.W. and MIT gratefully acknowledge financial support from the Heising-Simons Foundation, Dr. and Mrs. Colin Masson and Dr. Peter A. Gilman for Artemis, the first telescope of the SPECULOOS network situated in Tenerife, Spain. F.J.P. acknowledges financial support from the grant CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033.
This publication benefits from the support of the French Community of Belgium in the context of the FRIA Doctoral Grant awarded to MT.
aa
§ TARGET PIXEL FILES OF
|
http://arxiv.org/abs/2307.05590v1 | 20230710174756 | Improved Efficiency and Accuracy of the Magnetic Polarizability Tensor Spectral Signature Object Characterisation for Metal Detection | [
"James Elgy",
"Paul D. Ledger"
] | math.NA | [
"math.NA",
"cs.NA",
"65N30, 35R30, 35B30"
] |
Improved Efficiency and Accuracy of the Magnetic Polarizability Tensor Spectral Signature Object Characterisation for Metal Detection
J. Elgy and P.D. Ledger
School of Computer Science and Mathematics, Keele University
Keele, Staffordshire U.K
corresponding author: [email protected]
=============================================================================================================================================================
§ ABSTRACT
Magnetic polarizability tensors (MPTs) provide an economical characterisation of conducting metallic objects and can aid in the solution of metal detection inverse problems, such as scrap metal sorting, searching for unexploded ordnance in areas of former conflict, and security screening at event venues and transport hubs. Previous work has established explicit formulae for their coefficients and a rigorous mathematical theory for the characterisation they provide.
In order to assist with efficient computation of MPT spectral signatures of different objects to enable the construction of large dictionaries of characterisations for classification approaches, this work proposes a new, highly-efficient, strategy for predicting MPT coefficients.
This is achieved by solving an eddy current type problem using hp–finite elements in combination with a proper orthogonal decomposition reduced order modelling (ROM) methodology and offers considerable computational savings over our previous approach.
Furthermore, an adaptive approach is described for generating new frequency snapshots to further improve the accuracy of the ROM. To improve the resolution of highly conducting and magnetic objects, a recipe is proposed to choose the number and thicknesses of prismatic boundary layers for accurate resolution of thin skin depths in such problems. The paper includes a series of challenging examples to demonstrate the success of the proposed methodologies.
keywords: Metal detection, magnetic polarizability tensor, eddy current problems, reduced order models, thin skin depth effects.
Conflict of Interest Statement
The authors have no conflicts of interests to declare.
Data Availability Statement
The authors' software used throughout this paper is publicly available at <https://github.com/MPT-Calculator/MPT-Calculator/> and datasets will be made public upon acceptance of the manuscript.
Funding Information
The authors are grateful for the financial support received from the Engineering and Physical Science Research Council (EPSRC, U.K.) through the research grant EP/V009028/1.
Practitioner Points
The main contributions of this paper are as follows:
* We have developed a new, more efficient framework for computing the coefficients of the Magnetic Polarizability Tensor from reduced order model predictions using Proper Orthogonal Decomposition (POD). We demonstrate, with practical, real-world numerical examples, that this has led to very significant time savings over our previous approach.
* We have developed a simple recipe for designing boundary layer discretisations for problems with thin electromagnetic skin depths.
* We demonstrate a Greedy algorithm for adaptively choosing new POD snapshot parameters and the performance benefit compared to a non–adaptive strategy.
§ INTRODUCTION
Metal detection uses low frequency magnetic induction to locate and identify highly conducting magnetic objects within the imaging region. Traditional metal detectors rely on simple thresholding, and measured field perturbations are mapped to a simple audible tone to aid with detection. For this reason, hobbyist metal detectorists wear headphones, listen for changes in pitch and volume, and have become trained in recognising the different signals that a detector receives for targets made of different materials (such as coins and other buried treasure) buried at different depths.
While the risks are relatively low for the hobbyist metal detectorist, small objects close to the surface may give rise to signals similar to those of larger objects buried at depth, and false positives are common. There are other safety critical applications of metal detection technology, such as in the location and identification of hidden unexploded ordnance (UXO) in areas of former conflict in order to allow the ground to be returned safely to civilian use, where minimising the number of false positives and false negatives is vital. In addition, at transport hubs, public events, and increasingly in some schools, the early identification of potential threat objects may offer significant improvements to ensuring the safety of travellers, participants and attendees, respectively. However, traditional walk-through metal detection methods, where people are expected to remove all metallic objects and are screened individually, can be slow and lead to long queues. In all these applications, improving the accuracy of metal detection technology has the potential to bring significant societal benefits. This paper contributes to this vision by providing an improved computational tool that can aid with accurately and efficiently characterising highly conducting and magnetic objects.
The signals measured by metal detectors contain considerably more information about the size, shape, material and location of hidden targets than the simple audible alarm may suggest. These signals are directly related to the perturbed magnetic field due to the presence of a highly conducting magnetic object.
In the case of a highly conducting spherical magnetic object, an analytical solution is available for the perturbation in magnetic field
(H⃗_α- H⃗_0)(x⃗) at positions x⃗ away from the object for low-frequency time harmonic fields in the eddy current regime. This includes results for the case where H⃗_0 is a uniform background field <cit.> and also where H⃗_0 is the background field due to a dipole source <cit.>.
Related analytical and semi-analytical solutions have been obtained for highly conducting and magnetic spheroids <cit.> and ellipsoids <cit.>. More generally, approximate dipole models suggest that the field perturbation due to the presence of a highly conducting magnetic object has a magnetic dipole moment that can be expressed in terms of a complex symmetric magnetic polarizability tensor (MPT) and a background magnetic field at the position of the object <cit.>, and MPTs have been approximately computed for objects with homogeneous conductivity and permeability by a variety of different schemes, e.g. <cit.>.
However, the dipole model only provides an approximation to (H⃗_α- H⃗_0)(x⃗). Its accuracy can be assessed by comparing it to a rigorously described asymptotic expansion of the perturbed magnetic field due to the presence of a highly conducting magnetic object B_α <cit.>
(H⃗_α - H⃗_0)(x⃗)_i = (D⃗^2 G(x⃗,z⃗))_ij (ℳ)_jk (H⃗_0 (z⃗))_k + (R⃗(x⃗))_i ,
which holds for x⃗ away from B_α, where | R⃗(x⃗)| ≤ C α^4 ‖H⃗_0 ‖_W^2,∞(B_α) is a residual with C being a constant independent of α, G(x⃗,z⃗):= 1/ ( 4 π | x⃗-z⃗|) is the free-space Laplace Green's function, ℳ = (ℳ)_ije⃗_i ⊗e⃗_j is the complex symmetric MPT, which is shown to be independent of position <cit.>, e⃗_i is the ith orthonormal unit vector, and the Einstein summation convention has been applied. In the above, the object is described by B_α = α B+z⃗, which means it can be thought of as a unit sized object B with the same shape as B_α, but placed at the origin, scaled by α and then translated by z⃗. This comparison was performed in <cit.> for the situation where H⃗_0 is a uniform background field and also when H⃗_0 is the background field due to a dipole source.
The advantages of the asymptotic expansion include that it provides a measure of accuracy of the approximation and has explicit expressions for computing the MPT coefficients <cit.>. Furthermore, these explicit expressions hold for the MPT characterisation of inhomogeneous objects <cit.> and multiple objects <cit.>
where
the electrical conductivity 0 ≪σ_*< ∞ and magnetic permeability 0 < μ_* < ∞ in the object are isotropic and independent of frequency ω, but not necessarily homogeneous. In general, the conductivity and magnetic permeability are described in the object and the surrounding region B_α^c= ℝ^3 ∖B_α by
σ_α = {[ σ_* in B_α; 0 in B_α^c = ℝ^3 ∖B_α ] ., μ_α = {[ μ_* in B_α; μ_0 in B_α^c ] . ,
where μ_0=4π× 10^-7 H/m is the permeability of free space. It is also convenient to define μ_r := μ_* /μ_0 as the (position dependent) relative magnetic permeability.
There are considerable benefits to exploiting the MPT's spectral signature <cit.>
(the variation of the MPT coefficients as a function of exciting frequency) compared to characterising an object by an MPT at a fixed frequency, which only characterises the object's shape and materials up to the best fitting ellipsoid. Improved object characterisations can also be obtained by using high order generalised magnetic polarizability tensors (GMPT) <cit.>, which provide additional information about the object. Both the MPT <cit.> and the GMPT <cit.> spectral signatures of different objects (including MPT characterisations of objects with inhomogeneous materials) have been experimentally verified using laboratory measurements and exhibit excellent agreement with theory.
A reduced order methodology using proper orthogonal decomposition (POD) has been developed for computing MPT spectral signatures <cit.> and this has been applied to computing dictionaries of objects <cit.> that have in turn been used for training machine learning classifiers to identify hidden objects <cit.>.
Through the addition of prismatic boundary layers, the MPT characterisation has been enhanced to consider highly magnetic objects <cit.>.
Practical applications of MPTs and related technology include security screening at transport hubs <cit.>, location and identification of landmines and UXOs <cit.>, food safety <cit.> and scrap sorting <cit.>.
While a reduced order modelling approach has been developed for computing MPT spectral signatures, our previous approach still places large computational demands for complex realistic geometries, which impedes its application to the characterisation of complex inhomogeneous realistic targets. Furthermore, it was not clear how best to choose the number or thickness of prismatic boundary layers in order to achieve accurate results, nor was it clear how best to choose the number of snapshots in order to achieve a reduced order model that is in close agreement with the underlying full order model. This work addresses these important shortcomings through the following novelties:
* A new computational formulation for efficiently computing MPT coefficients from POD predictions leading to very significant computational savings compared to our previous approach.
* An adaptive algorithm for choosing new snapshot solutions for improving the POD reduced order model.
* A new efficient strategy for designing boundary layer discretisations for capturing problems with thin skin depths.
* Application of our latest developments to a range of challenging realistic examples.
The paper is organised as follows: We begin with some brief comments on notation in Section <ref>. In Section <ref> we review the explicit formulae for computing the MPT coefficients, which includes, in Section <ref>, new discrete formulae for computing the MPT coefficients in terms of finite element matrices. Then, in Section <ref>, we briefly review the off-line and on-line stages of the POD projected (PODP) approach and explain how the calculation of the MPT coefficients can be considerably accelerated by combining the approach described in Section <ref> with a PODP reduced order description. In Section <ref>, we also outline an adaptive algorithm for computation of new snapshot frequencies to improve the accuracy of the POD approach. In Section <ref> we describe the computational resources used for the computational experiments conducted in this work and provide details of the specific versions of libraries used for simulations and where the open-access software, which accompanies this work, can be accessed from. Then, in Section <ref>, we describe a recipe for choosing the number and thicknesses of boundary layer discretisations in order to resolve thin skin depths associated with highly conducting and highly magnetic objects. Section <ref> presents a series of challenging examples to demonstrate the success of the proposed methodologies and the paper closes with some concluding remarks in Section <ref>.
§ NOTATION
We use calligraphic symbols e.g. ℳ for rank 2 tensors and denote their coefficients by (ℳ)_ij. By e⃗_i we denote the ith orthonormal basis vector and use bold face italics e.g. ξ for vector fields. We denote the components of vector fields by (ξ)_i = e_i ·ξ, which should be distinguished from e.g. θ_i^(0), which refers to the ith θ^(0) vector field. We use bold face upper case Roman font for linear algebra matrices e.g. 𝐀 and bold face lower case for linear algebra vectors 𝐛 and denote their coefficients by (𝐀)_ij and (𝐛)_i, respectively.
§ MAGNETIC POLARIZABILITY TENSOR OBJECT CHARACTERISATION
Recall that the complex symmetric MPT ℳ =(ℳ)_ije⃗_i ⊗e⃗_j has 6 complex coefficients (ℳ)_ij and admits the additive decomposition (ℳ)_ij:=(ℛ̃)_ij+( ℐ)_ij=(𝒩^0)_ij+(ℛ)_ij+( ℐ)_ij <cit.> where
(𝒩^0[ α B,μ_r] )_ij :=α^3δ_ij∫_B(1-μ̃_r^-1)ξ+α^3/4∫_B∪ B^cμ̃_r^-1∇×θ̃_i^(0)·∇×θ̃_j^(0)ξ,
(ℛ[α B, ω,σ_*,μ_r])_ij :=-α^3/4∫_B∪ B^cμ̃_r^-1∇×θ_i^(1)·∇×θ_j^(1)ξ,
(ℐ[α B, ω,σ_*,μ_r])_ij :=α^3/4∫_Bν(θ_i^(1)+(θ̃_i^(0)+e_i×ξ))·(θ_j^(1)+(θ̃_j^(0)+e_j×ξ))ξ,
are each the coefficients of real symmetric rank 2 tensors and where μ̃_r (ξ⃗) = μ_r(ξ⃗) for ξ⃗∈ B and μ̃_r (ξ⃗)=1 for ξ⃗∈ B^c:= ℝ^3 ∖B has been introduced.
In the above, δ_ij is the Kronecker delta, ν (ξ⃗) := α^2 ωμ_0 σ_*(ξ⃗), the overbar denotes the complex conjugate and θ⃗_i^(1)∈ℂ^3 is the solution of the vectorial transmission problem
∇×μ_r^-1∇×θ⃗_i^(1) -νθ⃗^(1) = νθ⃗_i^(0) in B,
∇×∇×θ⃗_i^(1) = 0⃗ in B^c ,
∇·θ⃗_i^(1) = 0 in B^c,
[n⃗×θ⃗_i^(1)]_Γ =0⃗, [n⃗×μ̃_r^-1∇×θ⃗_i^(1)]_Γ =0⃗ on Γ:=∂ B,
θ⃗_i^(1) = O ( |ξ⃗|^-1 ) as | ξ⃗ | →∞,
where ξ⃗ is measured from the origin, which lies inside B, [· ]_Γ denotes the jump over Γ and n⃗ is the unit outward normal. Note that θ⃗_i^(0) = θ̃⃗̃_i^(0) + e⃗_i ×ξ⃗∈ℝ^3 satisfies a simpler vectorial transmission problem that is independent of ν, but still dependent on μ_r <cit.>. The problem (<ref>) is set on an unbounded domain and, for the purposes of approximate computation, it is replaced by a problem on a bounded domain Ω with truncation in the form of a convex outer boundary ∂Ω placed sufficiently far from the object of interest B and n⃗×θ⃗_i=0⃗ applied on ∂Ω as an approximation to (<ref>e) leading to
∇×μ_r^-1∇×θ⃗_i^(1) -νθ⃗^(1) = νθ⃗_i^(0) in B,
∇×∇×θ⃗_i^(1) = 0⃗ in Ω∖B ,
∇·θ⃗_i^(1) = 0 in Ω∖B,
[n⃗×θ⃗_i^(1)]_Γ =0⃗, [n⃗×μ̃_r^-1∇×θ⃗_i^(1)]_Γ =0⃗ on Γ,
n⃗×θ⃗_i^(1) = 0⃗ on ∂Ω.
§.§ Finite Element Discretisation
As discussed in <cit.>, we employ a high order H⃗(curl) conforming finite element method (FEM) approximation on unstructured tetrahedral meshes of variable size h with elements of uniform order p of the form
θ_i^(1,hp)(ξ,ω) =∑_k=1^N_dN^(k)(ξ)(𝐪_i(ω))_k,
where N^(k) is a typical H(curl) conforming basis function and N_d are the number of degrees of freedom. The FEM approximation of (<ref>) for the ith direction then corresponds to the solution of the linear system of equations
A(ω)q_i(ω)=r(θ_i^(0,hp) , ω),
for the solution coefficients q_i(ω)∈ℂ^N_d in which the Coulomb gauge (<ref>c) is replaced by regularisation <cit.> and ω denotes the parameters of interest. In the above, A(ω)= K(μ_r) - ω C(α^2 μ_0 σ_*) +ε M∈ℂ^N_d× N_d is a large parameter dependent complex symmetric sparse matrix, where K is a curl-curl stiffness contribution, C a damping contribution associated with the conducting region B and M a mass contribution with small regularisation parameter ε to circumvent the Coulomb gauge in Ω∖B. In addition, r(θ_i^(0,hp), ω)∈ℂ^N_d is a known source term <cit.>[Equation (17)]. Compared to <cit.>, we additionally allow for the possibility of including prismatic layers to model thin skin depths, which we discuss further in Section <ref>. Following <cit.>, and noting the convention that p=0 elements have constant tangential components on edges, but consist of vector valued linear basis functions, we ensure that integration of element integrals is approximated by Gaussian quadrature of sufficient order so that it can integrate degree 2(p+1) polynomials exactly, independently of the geometry order c used to represent curved boundary and transmission faces. The linear system (<ref>) is solved to a relative tolerance TOL using a conjugate gradient solver and a balancing domain decomposition by constraints (BDDC) preconditioner <cit.>. The MPT coefficients follow by post-processing where attention must also be paid to using a Gaussian quadrature scheme of sufficient order (especially on elements where the curved boundary differs significantly from a linear approximation in addition to taking into account the order of elements p).
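As a schematic of this step only, the sketch below assembles the parameter-dependent system matrix from precomputed, frequency-independent contributions and solves for one direction; it assumes K, C, the mass matrix and the source vector have already been assembled by the finite element library, and it substitutes a sparse direct solve for the preconditioned conjugate gradient solver used in the paper, purely to keep the example short.

```python
import scipy.sparse.linalg as spla

def solve_theta1(K, C, Mass, r, omega, eps=1e-10):
    """Solve A(omega) q = r with A(omega) = K - omega*C + eps*Mass for one
    direction e_i. K, C and Mass are frequency-independent sparse FEM matrices
    and r is the corresponding source vector (all assumed precomputed).
    The paper uses a conjugate gradient solver with a BDDC preconditioner;
    a sparse direct solve stands in for it here."""
    A = (K - omega * C + eps * Mass).tocsc()
    return spla.spsolve(A, r)
```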
§.§ Improved Efficiency for the Calculation of the MPT Coefficients
We focus on the improved efficiency for computing (ℛ[α B, ω,σ_*,μ_r])_ij and (ℐ[α B, ω,σ_*,μ_r])_ij, which are functions of the problem parameters. In the following, we focus on the case where the only parameter of interest is the frequency ω, although similar efficiencies could also be applied to other parameters of interest. First, for (ℛ[α B, ω,σ_*,μ_r])_ij, using (<ref>) and (<ref>), we have
(ℛ[α B, ω,σ_*,μ_r])_ij = - α^3/4∑_k=1^N_d∑_ℓ=1^N_d(𝐪_i)_k ∫_Ωμ̃_r^-1∇×N⃗^(k)·∇×N⃗^(ℓ)ξ⃗(𝐪_j)_ℓ
= - α^3/4𝐪_i^T 𝐊𝐪_j ,
where 𝐊∈ℝ^N_d × N_d and
(𝐊)_kℓ:=∫_Ωμ̃_r^-1∇×N⃗^(k)·∇×N⃗^(ℓ)ξ⃗,
is independent of ω.
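In code, this reduces the evaluation of each real tensor coefficient to a single matrix–vector product with the precomputed matrix 𝐊. A hedged Python sketch is given below; the real part is taken explicitly since the coefficient is real by definition, and the exact conjugation convention should follow the definitions above.

```python
import numpy as np

def real_tensor_coeff(q_i, q_j, K, alpha):
    """(R)_ij = -alpha^3/4 * q_i^T K q_j, with K the precomputed,
    frequency-independent curl-curl matrix. q_i, q_j are the complex
    FEM solution coefficient vectors for directions e_i and e_j."""
    return float(np.real(-(alpha**3) / 4.0 * (q_i @ (K @ np.conj(q_j)))))
```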
Next, for (ℐ[α B, ω,σ_*,μ_r])_ij, we have
(ℐ[α B, ω,σ_*,μ_r])_ij = α^3/4 ( ∫_B νθ⃗_j^(1)·θ⃗_i^(1)ξ⃗ + ∫_B νθ̃⃗̃_j^(0)·θ̃⃗̃_i^(0)ξ⃗ + ∫_B νe⃗_i ×ξ⃗·e⃗_j ×ξ⃗ξ⃗ + ∫_B νθ⃗_j^(1)·θ̃⃗̃_i^(0)ξ⃗ + ∫_B νθ⃗_j^(1)·e⃗_i ×ξ⃗ξ⃗ + ∫_B νθ̃⃗̃_j^(0)·θ⃗_i^(1)ξ⃗ + ∫_B νθ̃⃗̃_j^(0)·e⃗_i ×ξ⃗ξ⃗ + ∫_B νe⃗_j ×ξ⃗·θ⃗_i^(1)ξ⃗ + ∫_B νe⃗_j ×ξ⃗·θ̃⃗̃_i^(0)ξ⃗ ),
and, since we know (ℐ[α B, ω,σ_*,μ_r])_ij = (ℐ[α B, ω,σ_*,μ_r])_ji∈ℝ, then
(ℐ[α B, ω,σ_*,μ_r])_ij = α^3/4 ( ∫_B νθ̃⃗̃_i^(0)·θ̃⃗̃_j^(0)ξ⃗ + ∫_B νe⃗_i ×ξ⃗·e⃗_j ×ξ⃗ξ⃗ + ∫_B νθ̃⃗̃_j^(0)·e⃗_i ×ξ⃗ξ⃗ + ∫_B νθ̃⃗̃_i^(0)·e⃗_j ×ξ⃗ξ⃗ + Re ( ∫_B νθ⃗_j^(1)·θ⃗_i^(1)ξ⃗ + ∫_B νθ⃗_j^(1)·θ̃⃗̃_i^(0)ξ⃗ + ∫_B νθ⃗_i^(1)·θ̃⃗̃_j^(0)ξ⃗ + ∫_B νθ⃗_j^(1)·e⃗_i ×ξ⃗ξ⃗ + ∫_B νθ⃗_i^(1)·e⃗_j ×ξ⃗ξ⃗ ) ).
Writing θ̃_i^(0,hp)(ξ,ω)=∑_k=1^M_dÑ^(k)(ξ)o_k,i, recalling ν (ξ⃗)= ωα^2 μ_0 σ_*(ξ⃗) = ων̃(ξ⃗), and following a similar procedure to above then
(ℐ[α B, ω,σ_*,μ_r])_ij = ωα^3/4 ( 𝐨_i^T 𝐂^(1)𝐨_j + c_ij + 𝐬_i^T 𝐨_j + 𝐬_j^T 𝐨_i + Re ( 𝐪_i^T 𝐂𝐪_j + 𝐨_i^T 𝐂^(2)𝐪_j + 𝐨_j^T 𝐂^(2)𝐪_i + 𝐭_i^T 𝐪_j + 𝐭_j^T 𝐪_i ) ),
where
(𝐂 )_k ℓ := ∫_B ν̃N⃗^(k)·N⃗^(ℓ)ξ⃗,
(𝐂^(1))_k ℓ := ∫_B ν̃Ñ⃗̃^(k)·Ñ⃗̃^(ℓ)ξ⃗, (𝐂^(2))_k ℓ := ∫_B ν̃Ñ⃗̃^(k)·N⃗^(ℓ)ξ⃗,
(𝐬_i)_k := ∫_B ν̃e⃗_i ×ξ⃗·Ñ⃗̃^(k)ξ⃗, (𝐭_i)_k: = ∫_B ν̃e⃗_i ×ξ⃗·N⃗^(k)ξ⃗, c_ij := ∫_B ν̃e⃗_i ×ξ⃗·e⃗_j ×ξ⃗ξ⃗,
and 𝐂∈ℝ^N_d × N_d, 𝐂^(1)∈ℝ^M_d ×M_d, 𝐂^(2)∈ℝ^M_d ×N_d, 𝐬_i ∈ℝ^M_d, 𝐭_i ∈ℝ^N_d, c_ij∈ℝ are independent of frequency. Hence, we have reduced the computation of (ℛ[α B, ω,σ_*,μ_r])_ij and (ℐ[α B, ω,σ_*,μ_r])_ij to matrix vector products where the matrices can be precomputed and stored as they are all independent of ω.
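The corresponding evaluation of the imaginary tensor coefficients is sketched below under the same caveats: the quantities o_i, q_i, C, C^(1), C^(2), s_i, t_i and c_ij are assumed to have been precomputed as defined above, and the conjugation details are indicative only.

```python
import numpy as np

def imag_tensor_coeff(i, j, omega, alpha, o, q, C, C1, C2, s, t, c):
    """(I)_ij evaluated from frequency-independent precomputed matrices.
    o[i], q[i]: coefficient vectors of theta0_i and theta1_i; c is a 3x3
    array of the scalars c_ij; s[i], t[i] are the precomputed vectors."""
    static = o[i] @ (C1 @ o[j]) + c[i, j] + s[i] @ o[j] + s[j] @ o[i]
    dynamic = np.real(q[i] @ (C @ np.conj(q[j]))
                      + o[i] @ (C2 @ q[j]) + o[j] @ (C2 @ q[i])
                      + t[i] @ q[j] + t[j] @ q[i])
    return omega * alpha**3 / 4.0 * (static + dynamic)
```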
The number of matrices that need to be formed when computing (<ref>) can be reduced from 6 to 3 by using the same basis functions for both the θ̃^(0)_i and θ^(1)_i problems. In this case N⃗^(k) = Ñ⃗̃^(k) and 𝐂 = 𝐂^(1) = 𝐂^(2)∈ℝ^N_d × N_d and t⃗_i = s⃗_i ∈ℝ^N_d. Using the same basis functions in both problems requires that basis functions that are gradient terms in the basis <cit.> remain in B_α for the θ̃^(0)_i problem and these must be removed via postprocessing <cit.>.
§ PODP APPROACH
We begin by briefly reviewing the off-line and on-line stages of the PODP approach and show that the calculation of the MPT coefficients can be considerably accelerated by combining the approach of Section <ref> with a PODP reduced order description.
Then, we consider how the error bounds derived in Lemma 1 of <cit.> can be used to drive an adaptive procedure for improving the spectral signature.
§.§ Off-line Stage
Following the solution of (<ref>) for 𝐪_i(ω) for different values of the set of parameters, ω, we construct matrices 𝐃_i∈ℂ^N_d× N, i=1,2,3, each with the vector of solution coefficients as its columns in the form
𝐃_i =[𝐪_i(ω_1),𝐪_i(ω_2),...,𝐪_i(ω_N)],
where N≪ N_d denotes the number of such snapshots. Application of a singular value decomposition (SVD) e.g. <cit.> gives
𝐃_i=𝐔_i Σ_i 𝐕_i^H,
where 𝐔_i∈ℂ^N_d× N_d and 𝐕_i ∈ℂ^N× N are unitary matrices and Σ_i∈ℝ^N_d× N is a diagonal matrix enlarged by zeros so that it becomes rectangular. In the above, 𝐕_i^H denotes the Hermitian (conjugate transpose) of 𝐕_i.
The diagonal entries (Σ_i)_jj=s_j are the singular values of 𝐃_i and they are arranged as s_1>s_2>...>s_N, which decay rapidly towards zero, motivating the introduction of the truncated SVD (TSVD) e.g. <cit.>
𝐃_i≈𝐃_i^M = 𝐔_i^MΣ_i^M(𝐕_i^M)^H,
where 𝐔_i^M∈ℂ^N_d× M are the first M columns of 𝐔_i, Σ_i^M∈ℝ^M× M is a diagonal matrix containing the first M singular values and (𝐕_i^M)^H∈ℂ^M× N are the first M rows of 𝐕_i^H where M follows from retaining singular values s_1,…,s_M where s_M is the largest singular value such that s_M /s_1 < TOL_Σ.
§.§ On-line Stage
In the on-line stage of PODP, 𝐪_i^PODP ( ω) ≈𝐪_i(ω) is obtained as a linear combination of the columns of U_i^M where the coefficients of this projection are contained in the vector p_i^M ( ω) ∈ℂ^M so that
θ_i^(1,hp)(ξ,ω) ≈ (θ_i^(1,hp))^PODP (ξ, ω) := N(ξ) 𝐪_i^PODP ( ω) = N(ξ) 𝐔_i^M p_i^M( ω) ∈ Y^(PODP) ,
where Y^(PODP)⊂ Y ∩ W^(hp) <cit.>. To obtain p_i^M ( ω), we solve the reduced linear system
𝐀_i^M(ω)𝐩_i^M( ω)=𝐫_i^M(θ_i^(0,hp), ω),
which is obtained by a Galerkin projection and is of size M× M where 𝐀_i^M(ω)=(𝐔_i^M)^H𝐀(ω)𝐔_i^M and 𝐫_i^M(θ^(0,hp) , ω)=(𝐔_i^M)^H𝐫 ( θ_i^(0,hp), ω). Note, since M<N ≪ N_d, the size of (<ref>) is significantly smaller than (<ref>) and, therefore, substantially computationally cheaper to solve. After solving this reduced system, and obtaining 𝐩_i^M(ω), we obtain an approximate solution for θ_i^(1,hp)(ξ,ω) using (<ref>). Moreover, the matrix 𝐀_i^M(ω) and right hand side 𝐫_i^M(θ^(0,hp) , ω) can be computed efficiently for new ω <cit.>.
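The on-line stage therefore amounts to projecting the full-order system onto the retained modes and solving a small dense system. The sketch below shows the Galerkin projection explicitly for clarity; in practice the projected matrices are themselves precomputed once, so that only the small M × M solve remains for each new ω.

```python
import numpy as np

def podp_reduced_solve(U_M, K, C, Mass, r, omega, eps=1e-10):
    """Galerkin-project A(omega) = K - omega*C + eps*Mass onto the M retained
    modes in U_M and solve the small M x M system for p_i^M(omega).
    Returns the reduced coefficients and the reconstructed full-order vector."""
    A_M = U_M.conj().T @ ((K - omega * C + eps * Mass) @ U_M)
    r_M = U_M.conj().T @ r
    p_M = np.linalg.solve(A_M, r_M)
    return p_M, U_M @ p_M
```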
§.§ Improved Efficiency for the PODP Prediction of MPT Coefficients
We outline how the efficiency of the PODP prediction of MPT coefficients can be significantly improved for the case where the parameter of interest is the frequency ω, and note that similar efficiencies can be gained when using PODP for other problem parameters. Since 𝐪_i^PODP ( ω) = 𝐔_i^M p_i^M( ω), we can obtain the PODP prediction of the MPT coefficients as
(ℛ^PODP [α B, ω,σ_*,μ_r])_ij = - α^3/4𝐩_i^M^T 𝐊_ij^M 𝐩_j^M ,
where 𝐊_ij^M :=( 𝐔_i^M)^H 𝐊𝐔_j^M ∈ℂ^M × M can be precomputed once. Finally, for each new value of ω, the coefficients (ℛ^PODP [α B, ω,σ_*,μ_r ])_ij can be obtained from vector–matrix–vector products of small dimension M.
Similarly, we obtain
(ℐ^PODP [α B, ω,σ_*,μ_r])_ij = ωα^3/4 ( 𝐨_i^T 𝐂^(1)𝐨_j + c_ij + 𝐬_i^T 𝐨_j + 𝐬_j^T 𝐨_i + Re ( (𝐩_i^M)^T 𝐂^M 𝐩_j^M + 𝐨_i^T 𝐂^(2),M𝐩_j^M + 𝐨_j^T 𝐂^(2),M 𝐩_i^M + (𝐭_i^M)^T 𝐩_j^M + (𝐭_j^M)^T 𝐩_i^M ) ),
where 𝐂^M_ij := ( 𝐔_i^M)^H 𝐂𝐔_j^M ∈ℂ^M × M, 𝐂^(2),M_j := 𝐂^(2)𝐔_j^M ∈ℂ^M_d × M, 𝐭_i^M = 𝐭_i 𝐔_j^M ∈ℂ^M and further efficiencies are made by precomputing 𝐨_i^T 𝐂^(1)𝐨_j + c_ij + 𝐬_i^T 𝐨_j + 𝐬_j^T 𝐨_i and 𝐨_i^T 𝐂^(2),M.
Once 𝐊_ij^M, 𝐂^M_ij, 𝐭_i^M, 𝐨_i^T 𝐂^(1)𝐨_j + c_ij + 𝐬_i^T 𝐨_j + 𝐬_j^T 𝐨_i and 𝐨_i^T 𝐂^(2),M have been precomputed, the cost of computing (ℛ^PODP [α B, ω,σ_*,μ_r])_ij and (ℐ^PODP [α B, ω,σ_*,μ_r])_ij is at most that of computing several matrix vector products with 𝐩_i^M. The sizes of these matrices are independent of the mesh size and polynomial order used and are all either small square M × M matrices or vectors of length M and so the cost of evaluation is O(M^2) and the cost of solving (<ref>) is at most O(M^3). As M is small, this is considerably cheaper than the repeated solution of (<ref>) for new parameters, which is done iteratively and each iteration involves a matrix vector product requiring O(nz) operations where nz is the number of non-zeros of 𝐀. Furthermore, the aforementioned matrices and vectors needed for the PODP prediction can be computed once for all frequencies of interest. Further efficiencies can also be made by choosing N⃗^(k) = Ñ⃗̃^(k) as per Remark <ref>.
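Putting these pieces together, a frequency sweep of the spectral signature then looks schematically as follows; `solve_reduced` and the small precomputed matrices `K_M[i][j]` are assumptions standing in for the quantities defined above, and only the real tensor is shown to keep the sketch short.

```python
import numpy as np

def sweep_real_signature(omegas, solve_reduced, K_M, alpha):
    """For each output frequency: solve the reduced system (O(M^3)) and
    evaluate (R^PODP)_ij by small matrix-vector products (O(M^2)).
    solve_reduced(omega) is assumed to return the three reduced coefficient
    vectors p_M[i] for that frequency; all mesh-sized work is done off-line."""
    R = np.zeros((len(omegas), 3, 3))
    for n, w in enumerate(omegas):
        p = solve_reduced(w)
        for i in range(3):
            for j in range(3):
                R[n, i, j] = np.real(-(alpha**3) / 4.0
                                     * (p[i] @ (K_M[i][j] @ np.conj(p[j]))))
    return R
```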
§.§ Adaptive Selection of Frequency Snapshots
The a–posteriori error estimate derived in <cit.> is restated below
An a-posteriori error estimate for the tensor coefficients computed using PODP is
| (ℛ[α B, ω,σ_*,μ_r ])_ij - (ℛ^PODP[α B, ω,σ_*,μ_r ])_ij |≤ (Δ[ω])_ij ,
| (ℐ[α B, ω,σ_*,μ_r ])_ij - (ℐ^PODP[α B, ω,σ_*,μ_r ])_ij | ≤ (Δ[ω])_ij,
where
(Δ[ω])_ij := α^3/(8α_LB) ( ‖r̂_i (ω)‖_Y^(hp)^2 + ‖r̂_j (ω)‖_Y^(hp)^2 + ‖r̂_i (ω) - r̂_j (ω)‖_Y^(hp)^2 ),
and α_LB is a lower bound on a stability constant.
Note that the above error estimate also applies to both | (ℛ[α B, ω,σ_*,μ_r ])_ij - (ℛ^PODP[α B, ω,σ_*,μ_r ])_ij | and
| (ℛ̃[α B, ω,σ_*,μ_r ])_ij - (ℛ̃^PODP[α B, ω,σ_*,μ_r ])_ij | since (𝒩^0,PODP[α B,μ_r])_ij= (𝒩^0[α B,μ_r])_ij is computed once and independently of the POD.
To further improve the PODP technique, and overcome the potential issues of not choosing enough (or the best) snapshot frequencies, we use the adaptive procedure in Algorithm <ref>, following a Greedy approach where new snapshots are selected corresponding to the maximum error computed for that iteration <cit.>.
Here, 0<ϑ≤ 1 controls how many additional snapshots are generated in each adaption. Typically ϑ is chosen so that at most only 2-3 additional snapshots are included at each iteration.
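A possible realisation of this greedy loop, written against two assumed callbacks (`error_indicator`, which evaluates the certificate (Δ)(ω) from the current reduced order model, and `add_snapshot`, which computes a new full-order solution and rebuilds the ROM), is sketched below; it is an interpretation of the procedure described in the text rather than a transcription of the algorithm itself.

```python
import numpy as np

def adaptive_snapshot_selection(omega_grid, error_indicator, add_snapshot,
                                tol_delta=1e-3, vartheta=1.0, max_iter=10):
    """Greedy selection of new snapshot frequencies. omega_grid is an array of
    candidate output frequencies; at each iteration the frequencies whose error
    certificate is within a factor vartheta of the worst error are promoted to
    new snapshots (at most a few per iteration), until all certificates fall
    below tol_delta."""
    for _ in range(max_iter):
        deltas = np.array([error_indicator(w) for w in omega_grid])
        if deltas.max() < tol_delta:
            break
        worst = omega_grid[deltas >= vartheta * deltas.max()]
        for w in worst[:3]:          # keep to 2-3 additional snapshots
            add_snapshot(w)          # full-order solve + rebuild of the ROM
    return deltas
```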
§ COMPUTATIONAL RESOURCES AND SOFTWARE
The computational resources used to perform the simulations in this paper correspond to workstations with the following specifications:
* Workstation 1: Intel Core i5-10600 CPU with a clock speed of 3.30 GHz and 64GB of DDR4 RAM with a speed of 3200 MT/s
* Workstation 2: Intel Xeon W-2265 CPU with a clock speed of 3.50 GHz and 128GB of DDR4 RAM with a speed of 3200 MT/s
In the case of timings, all timings were performed using wall clock times with the package (version 0.61.0) and version 3.10. Finite element simulations were performed using and <cit.> versions 6.2204 and 6.2203, respectively, using 1.23.3. These libraries are called from the open source MPT-Calculator[MPT-Calculator is publicly available at <https://github.com/MPT-Calculator/MPT-Calculator/>] (April 2023 release) <cit.>.
In the off-line stage, two different forms of parallelism are applied. The assembly of the matrices and the underlying iterative solution of (<ref>), which requires repeated matrix-vector products in the conjugate gradient solver, is accelerated by using the shared memory parallelism across multiple computational threads as these operations are trivially parallelisable within the finite element library. Importantly, this does not lead to further memory usage. Provided sufficient memory resources are available, the computation of the full order solutions for different ω is further accelerated by using multi-processing with different cases being considered simultaneously, which leads to higher memory demands.
In the on-line stage, the computation of the solution 𝐩_i^M to (<ref>) and the computation of
(ℛ^PODP [α B, ω,σ_*,μ_r])_ij and (ℐ^PODP [α B, ω,σ_*,μ_r])_ij using (<ref>) and (<ref>), respectively, which has already been reduced to small matrix vector products, is further accelerated by multi-processing.
§ RECIPE FOR NUMBER AND THICKNESSES OF BOUNDARY LAYERS
The depth at which the amplitude of the electromagnetic field decays to 1/e of its surface value is known as the skin-depth and, for a homogeneous isotropic conductor, is commonly approximated by <cit.>
δ(ω, σ_*, μ_r) ≈√(2/ωσ_* μ_0μ_r),
which, for high σ_*, ω and μ_r, can become very small compared to the object dimensions.
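For orientation, the skin depth for the material parameters used later in the paper can be evaluated directly from this formula; the short sketch below is a plain implementation of the approximation.

```python
import numpy as np

def skin_depth(omega, sigma, mu_r, mu0=4.0e-7 * np.pi):
    """Approximate skin depth delta = sqrt(2 / (omega * sigma * mu0 * mu_r))."""
    return np.sqrt(2.0 / (omega * sigma * mu0 * mu_r))

# e.g. steel-like parameters at the maximum target frequency used later:
print(skin_depth(1e8, 6e6, 200))   # of the order of a few micrometres
```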
Combining prismatic boundary layer elements with unstructured tetrahedral meshes with p–refinement of the elements has previously been shown to offer advantages over purely tetrahedral meshes for capturing the high field gradients associated with thin skin depths for θ⃗_i^(1) in the case of high σ_*, ω and μ_r <cit.>. Similar performance benefits have also been reported in other applications, such as the Maxwell eigenvalue problem <cit.> and singularly perturbed elliptic boundary value problems <cit.>. Here, the prismatic layers allow h–refinement to be achieved in a direction normal to the surface of the conductor, while leaving the tangential element spacing unchanged, which is ideal for addressing the high field gradients in the normal direction, but without resulting in a large increase in the number of degrees of freedom. On the other hand, using p–refinement alone on traditional unstructured meshes is sub–optimal and converges only at an algebraic rate due to the small skin depths, while attempting to do h–refinement of the unstructured tetrahedral mesh leads to an excessive N_d. For the PODP approach, the same FEM discretisation is needed for all ω snapshots and, in order to ensure that this is accurate for the complete signature,
we fix a maximum target ω of interest and, for a given μ_r, we set τ:=√(2/ (μ_rν))= δ/α to be the smallest non-dimensional skin-depth
that is to be resolved.
Our previous work did not have a recipe for choosing the number of layers or indeed their thicknesses, which is essential for their practical application with a new geometry and/or new material parameters. Instead, the thicknesses of the layers were often chosen to model thin coatings (such as in some denominations of British coins). While offering considerable benefits, the inclusion of prismatic layers must nonetheless also be weighed up against the increase in computational resources (including both run time and memory usage). Our goal in this section is to determine a simple recipe for choosing the number and thicknesses of boundary layers that can be used to achieve a high level of accuracy at a reasonable computational cost.
To this end, we begin by considering a conducting homogeneous sphere of radius α=1× 10^-3 m and set σ_* = 1× 10^6 S/m while considering different cases of μ_r=1,16,64. For the approximate solution of θ⃗_i^(1) and, hence the MPT coefficients, we construct a computational domain Ω consisting of a unit radius sphere B placed centrally in a [-1000, 1000]^3 box and generate an unstructured mesh of 21 151 tetrahedra and represent the surface of the sphere by curved elements of degree c=5, which we use throughout.
The mesh is augmented by the addition of prismatic layers placed just inside ∂ B. Three schemes for defining the structure of L prismatic layers are considered:
* “Uniform” refinement, where the total thickness of the layers is equal to τ and each layer of prismatic elements has thickness t_ℓ=τ/L, ℓ=1,…,L with t_1 being the closest to the conductor's surface and the layers numbered consecutively towards the inside of the conductor.
* “Geometric decreasing” refinement, where the total thickness is limited to τ and the thickness of each layer is defined by the geometric series t_ℓ+1 = 2t_ℓ with ∑_l=1^L t_ℓ = t_1 ( 1-2^L)/(1-2)= τ.
* “Geometric increasing” refinement, where the thickness of the layers are defined as t_ℓ+1 = 2t_ℓ with t_1 = τ. Recall that for highly magnetic objects, τ is small so for small L, the prismatic boundary layer elements are still thin compared to the size of the object.
An illustration of each refinement strategy is shown in Figure <ref> showing the thicknesses for L=3 layers of elements in terms of the non-dimensional skin depth τ for the three strategies.
We note that in each of these schemes, the total number of prismatic elements remains constant and, in the case of L=1, the uniform, geometric decreasing and geometric increasing strategies all lead to identical discretisations. This means that for a given mesh, order of elements and number of layers the number of degrees of freedom remains the same for all three strategies.
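The three layer-thickness strategies can be written down compactly; the helper below returns the thicknesses (in the same non-dimensional units as τ) for a chosen strategy and number of layers L, and is only an illustrative restatement of the definitions above.

```python
def layer_thicknesses(tau, L, strategy="geometric_increasing"):
    """Thicknesses t_1, ..., t_L (t_1 nearest the conductor's surface)."""
    if strategy == "uniform":
        return [tau / L] * L                       # total thickness tau
    if strategy == "geometric_decreasing":
        t1 = tau / (2**L - 1)                      # doubling inwards, total tau
        return [t1 * 2**l for l in range(L)]
    if strategy == "geometric_increasing":
        return [tau * 2**l for l in range(L)]      # t_1 = tau, doubling inwards
    raise ValueError("unknown strategy")

# e.g. tau = 0.01 and L = 3 layers:
for s in ("uniform", "geometric_decreasing", "geometric_increasing"):
    print(s, layer_thicknesses(0.01, 3, s))
```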
Figure <ref> shows the relative error between the approximated MPT and the exact solution for the sphere, E = ‖ℳ^hp - ℳ‖_F / ‖ℳ‖_F, under p–refinement for each of the different approaches, with L=1,2,3,4 layers and μ_r= 1, 16, 64, in turn, for the fixed target frequency ω = 1× 10^8 rad/s.
Here, and unless otherwise stated for subsequent simulations, the regularisation was set as ε = 1× 10^-10 and the iterative solver relative tolerance was set at TOL = 1× 10^-8.
For each value of μ_r, the convergence of E with respect to number of degrees of freedom, N_d, and E with respect to computational time (showing overall time using 2 cores on the workstation 1 described in Section <ref>) are shown.
In each case, the points on each curve correspond to different polynomial orders.
In the case of μ_r=1, with the chosen α, σ_* and ω and the chosen initial mesh, the resulting skin depth can be resolved well by all schemes. With the exception of L=3 and the geometric increasing strategy, all schemes lead to similar convergence curves in terms of both N_d and time. A small benefit is observed for L=2 using the geometric increasing scheme over the other schemes. Indeed, for this case, using the initial tetrahedral mesh alone is already able to achieve exponential convergence when p–refinement is applied.
For μ_r>1 the uniform and geometric decreasing strategies are seen to produce similar results for all values of L while there is a considerable benefit in accuracy by using the geometric increasing scheme with L≥ 2 both with respect to N_d and computational time. As μ_r increases, further benefits in accuracy with respect to N_d and computational time are offered by using L≥ 3 and, by changing the scale, exponential convergence with respect to N_d^1/3 is obtained using sufficiently large L, which is the expected behaviour for this smooth problem. However, while combining p–refinement with L≥ 3 achieves very high accuracy, for practical problems, a relative error of E=1× 10^-3 is sufficient given the accuracy to which MPT coefficients can be measured and the extent to which materials and practical geometries are known. This level of accuracy can already be achieved using L=2, the geometric increasing scheme and p–refinement and, therefore, in the later practical computations, this is what we will employ. These findings are also consistent with the theory of <cit.>, which would suggest a first layer of thickness O((p+1)τ) if their findings for their one-dimensional problem are extrapolated to our three-dimensional problem.
We have also tested the same strategy for the same conducting sphere, but instead with μ_r=100,200,400, 800 and also observed that the same strategy of L=2 layers with geometric increasing refinement leads to a relative error of E=1× 10^-3 for the conducting sphere and a target frequency of ω = 1× 10^8 rad/s using order p=3 elements. The strategy has also been applied to objects with larger α, which represent more challenging problems, and the scheme has also performed well.
§ NUMERICAL EXAMPLES
In this section, we consider a range of numerical examples to illustrate the improvements in accuracy and speedup for calculation of the MPT tensor coefficients using PODP, the use of adaption to choose new frequency snapshots and the geometric increasing recipe proposed for the construction of prismatic layers in Section <ref>.
§.§ Conducting Sphere
We consider the conducting sphere described in Section <ref> for the particular case where μ_r=32. The computational domain Ω is discretised by 21 151 unstructured tetrahedra and L=2 layers of prismatic elements following the geometric increasing strategy resulting in 1275 prisms. Using TOL_Σ = 1× 10^-6, a total of N=13 solutions at logarithmically spaced snapshot (SS) frequencies 1× 10^1≤ω≤1× 10^8 rad/s are computed using order p=3 elements (leading to M=11) and we compare the MPT coefficients obtained using our previous approach <cit.> with the results obtained using the new accelerated computation described in Section <ref>. In Figure <ref>, we show a comparison between the MPT coefficients obtained using the previous approach (called the Integral method (IM)) and the new accelerated approach (called the Matrix method (MM), see Section <ref>) where we observe excellent agreement for all frequencies. Similar agreement can be found for spheres using other values of μ_r.
Timings were performed using workstation 1, as described in Section <ref>, for the IM and MM methods for the MPT coefficient computation in the POD scheme for this problem. When accelerated with 2 multiprocessing cores, and the use of multi-threading, as previously described in Section <ref>, we observe that the MPT coefficient computation time is reduced from 12 164 seconds with the IM to 18 seconds with the MM, giving an overall time of just 629 seconds. The breakdown in computational time will be expanded further for a more challenging example in Section <ref>.
The adaptive procedure outlined in Algorithm <ref> is demonstrated for the same discretisation in Figure <ref>, which shows the spectral signature for (ℛ̃[α B, ω,σ_*,μ_r ])_ij, that we subsequently refer to as (ℛ̃)_ij, including the a-posteriori error certificates (ℛ̃±Δ)_ij obtained at different iterations where (Δ)_ij
reduces as new SS are adaptively chosen. In this example TOL_Δ = 1× 10^-3, leading to the 4 graphs shown. Note that, due to the object's symmetries, the MPT is a multiple of the identity for this case.
Importantly, while the effectivity indices of the error certificates are large, they are computed at negligible additional cost during the on–line stage and, as the figures show, provide an effective way to choose new SS to reduce (Δ)_ij.
Only the behaviour for (ℛ̃)_ij is shown here, since the error certificates are the same for both (ℛ̃)_ij and (ℐ)_ij. To illustrate the performance of the adaptive POD, compared to non-adaptive logarithmically spaced SS, Figure <ref> shows the maximum error, Λ, against N. The figure shows the significant improvement associated with the adaptive POD compared to the non-adaptive scheme. Nevertheless, using logarithmically spaced SS with N=13 still provides a very good starting point for an initial choice of frequencies from which adaption can then be performed.
§.§ Conducting and Magnetic Disks
In this section, we consider the MPT characterisation of thin conducting and magnetic disks with their circular face in the x_1-x_3 plane.
We consider a disk with physical dimensions radius r=10 × 10^-3 m and thickness h=1 ×10^-3 m and start with μ_r = 32 and σ_*=1× 10^6 S/m. The computational domain Ω consists of a dimensionless disk B of radius r/α, and thickness h/α with α =1× 10^-3 m placed centrally in the box [-1000,1000]^3. The geometric increasing methodology from Section <ref> is applied to construct boundary layers for different values of μ_r at a target frequency of ω = 1 × 10^8 rad/s. Then, N=13 solutions are computed at logarithmically spaced SS frequencies in the range 1× 10^1≤ω≤1× 10^8 rad/s using TOL_Σ=1 × 10^-6, resulting in M=11. This process results in a discretisation of 24 748 tetrahedra and 2995 prisms with p=3 giving converged solutions at the snapshots.
Due to the symmetries of the disk, which, in addition to mirror symmetries, is rotationally symmetric around e_2, the non-zero independent tensor coefficients associated with the object reduce to (ℳ)_11 = (ℳ)_33 and (ℳ)_22 <cit.>. For this reason, in Figures <ref> and <ref> we show only (ℛ̃)_11 = (ℛ̃)_33, (ℛ̃)_22, (ℛ̃)_12 = (ℛ̃)_23 = (ℛ̃)_13 = 0 and (ℐ)_11 = (ℐ)_33, (ℐ)_22, (ℐ)_12 = (ℐ)_23 = (ℐ)_13 = 0. The figures show excellent agreement between the IM and MM methodologies.
To illustrate the adaptive procedure described by Algorithm <ref>, we show the spectral signature for (ℛ̃)_ij including the a-posteriori error certificates (ℛ̃±Δ)_ij obtained at different iterations for same magnetic disk in Figure <ref>. Starting with the setup used in Figure <ref> and a stopping tolerance of TOL_Δ = 1× 10^-3, we show the first 4 iterations of the adaptive algorithm resulting in N= 15, 17, 19 non-logarithmically spaced snapshots, respectively, in the subsequent 3 iterations in a similar manner to the earlier sphere example. The convergence behaviour for certificates for (ℐ +Δ)_ij is very similar and, therefore, not shown.
To illustrate the performance of the adaptive POD, compared to non-adaptive logarithmically spaced SS, Figure <ref> shows the maximum error, Λ, against N, which, like the earlier sphere example, shows significant benefits of the adaptive scheme over logarithmically spaced SS.
Considering the limiting case of an infinitely thin conducting non-magnetic disk in the x_1-x_3 plane and the corresponding limiting case when the disk is magnetic, based on measurement observations for finitely thick disks <cit.> (which is rotated to a disk in the x_1-x_2 plane for our situation), it has been proposed that the form of the MPT (when its coefficients are displayed as a matrix) undergoes the transition
ℳ (α B, ω, σ_*,1)= [ 0 0 0; 0 c 0; 0 0 0 ]→ℳ (α B, ω, σ_*,μ_r →∞ ) = [ m 0 0; 0 m 0; 0 0 0 ]
as μ_r becomes large.
We would like to investigate the form this transition takes numerically for a finitely thick disk and wish to
understand the impact of changing the μ_r value. Thus, we consider the same conducting permeable disk as in Figure <ref>, but now with μ_r =1, 8,16, 64 in turn. In each case, prismatic layers of different thickness were constructed according to the recipe in Section <ref> for a target value of ω =1 × 10^8 rad/s and then the MPT coefficients were obtained using a PODP method for N=13 snapshot solutions at logarithmically spaced frequencies in the range 1× 10^1≤ω≤1× 10^8 rad/s and TOL_Σ=1× 10^-6. We show the independent non-zero diagonal coefficients of ℛ̃ and ℐ. Interestingly, despite the relative simplicity of the geometry and its homogeneous materials, we observe the presence of multiple local maxima in (ℐ)_ij for the cases of μ_r =16 and μ_r=64. This can be explained by the spectral theory of MPT spectral signatures <cit.> where, in this case, the first term in the expansions in Lemma 8.5 is no longer dominant and multiple terms play an important role.
§.§ Inhomogeneous Bomblet
One potential practical application is the MPT characterisation of unexploded ordnance, which may assist in allowing the land to be safely released for civilian use in areas of former conflict. A common type of ordnance is the near spherical bomblet (e.g. a BLU-26 submunition <cit.>). This commonly consists of a die-cast aluminium shell, an explosive payload and fuze, aerodynamic flutes used to induce a rotation in the bomblet as it falls as part of the arming process, multiple (typically hundreds) small steel fragmentation balls, and a steel clamp ring <cit.>. As an example, Figure <ref> shows photographs of a recovered bomblet shell, showing the metallic ring, remnants of the fragmentation balls that are cast in the shell, and flutes. See also Figure 1 in <cit.>.
In the following we consider idealised models of the bomblet, first without fragmentation balls and then with them included. Throughout the following we make assumptions based on a sample part of a BLU-26 bomblet and our understanding from the limited information openly available.
§.§.§ No Fragmentation Balls
While the flutes (which appear to be made of the same material as the shell <cit.>) are geometrically interesting (having a mirror symmetry and a 4-fold rotation symmetry <cit.>), with the steel clamp ring orientated in the same plane as the mirror symmetry, which we take to be x_1-x_2, the symmetries of this object imply the MPT will be diagonal with (ℳ)_11 = (ℳ)_22 and (ℳ)_33 as independent non-zero coefficients. But, given that the flutes make up only a small fraction of the overall volume, and removing them does not change which coefficients are independent, they are omitted. In this section, we consider two simplified idealised models: The first is a solid spherical aluminium ball of radius 3.14 cm with a steel clamp ring attached around the outside; the second is the same as the first but has a hollow spherical region of radius 2.4 cm.
To compute the MPT spectral signature of the first model, a computational domain Ω consisting of a dimensionless object B made up of a sphere of radius 3.14 cm joined to a clamp ring, which is modelled with radius 3.44 cm and height 0.31 cm, is centrally placed in a [-100, 100]^3 non-conducting region. The physical object B_α is obtained from the non-dimensional object B using a scaling of α =0.01 m. The material parameters are chosen as μ_r = 1 and σ_*=3× 10^7 S/m for the aluminium sphere and μ_r = 200, σ_*=6× 10^6 S/m for the steel ring.
Based on these materials, the resulting τ = δ / α at a maximum target frequency of ω=1× 10^8 rad/s for both materials is much smaller than those previously considered, making this problem considerably more challenging.
For this reason, boundary layer elements are added to both the non-magnetic sphere, and the magnetic ring with thickness chosen according to τ for each material. In particular, the geometric increasing refinement strategy with L=2 layers is used resulting in a mesh with 29 141 unstructured tetrahedra, and 6355 prisms, which was found to be converged at SS frequencies using uniform p=4 elements, resulting in a problem with N_d ≈ 1.23 × 10^6.
Due to the complexity and size of this object, we have increased the regularisation to ε = 1× 10^-8 and the iterative solver tolerance to TOL=1× 10^-7. For the POD, N=13 logarithmically spaced frequency SS were employed using TOL_Σ=1× 10^-6, and the resulting MPT signatures obtained by applying the adaptive Algorithm <ref> after 4 iterations, where M=19 and TOL_Δ=10^-3, are compared in Figure <ref>.
The figure shows that the initial logarithmically spaced snapshots are in good agreement with the additional adaptively introduced snapshots and that the addition of new full order solutions does not significantly change the approximate tensor coefficients obtained by the POD method. The MPT spectral signature of the hollow bomblet, which is also shown, was obtained in a similar way using a discretisation with 30 379 tetrahedra and 8115 prisms and the same settings as before. As shown in the figure, the solid bomblet provides a good approximation for the hollow example. This agreement improves with frequency due to the decreasing skin depth and reduced contribution from the cavity.
We show contours of the computed | Re(θ_2^(1,hp))| (normalised to [0,1] to ease comparison) that are obtained as part of the solution process at the fixed frequencies ω = 1 × 10^2, 1 × 10^4, and 1 × 10^6 rad/s in Figure <ref>. The figure shows the contours on a plane chosen to be perpendicular to e_3, highlighting the decay of the fields inside the object as skin depth decreases. We use the same mesh as in the earlier discussion; however, we have increased the order to p=5 in order to ensure a fine resolution of the field, which was not necessary for the MPT spectral signature due to the averaging through volume integration. The results highlight the extremely small skin depths associated with the higher frequencies, and that the fields concentrate along the magnetic ring, although in the case of ω= 1× 10^2 rad/s the wavelength is too large to induce significant eddy currents in the ring.
§.§.§ With Fragmentation Balls
Next we consider the case where steel fragmentation balls are included in the hollow bomblet. We believe that they are cast into the aluminium shell, although we are not certain of this. We also do not know the number of balls, their size, or their location inside the bomblet. For our model, we have assumed that they are 2 mm in radius and are placed approximately equidistant from each other throughout a layer adjacent to the interior of the shell. The properties of the balls are also unknown, but they are likely to be made of steel. We assume that they have material parameters μ_r =200, σ_*=6 × 10^6 S/m as per the assumed steel for the clamp ring. Importantly, introducing these fragmentation balls breaks the mirror and rotational symmetries of the bomblet, and so the MPT now has 6 independent coefficients as a function of ω.
We consider models with 100, 200, and 400 fragmentation balls which, given their size relative to the bomblet diameter, cannot be easily resolved without introducing an extremely large number of additional elements in the mesh or using special geometrical techniques such as NURBS-enhanced finite elements <cit.>. Given the aforementioned assumptions and approximations, we instead use an L_2 projection to project the expected varying distribution of μ_r and σ_* onto piecewise constant materials for a mesh consisting of 33 410 tetrahedra and 9 063 prisms.
To show the effect of increasing the number of fragmentation balls, we show a comparison between the hollow bomblet used for Figure <ref>, the addition of a steel layer in the interior shell and alternatively the addition of a layer with 100, 200, and 400 fragmentation balls. The effect of increasing the number of fragmentation balls is shown in Figure <ref> where (ℛ̃)_11, (ℐ)_11, (ℛ̃)_12, and (ℐ)_12 are shown as a sample of the 6 independent coefficients. The figure shows an increase in magnitude from the hollow bomblet solution towards the layered bomblet solution for the on-diagonal entries and the generation of off-diagonal terms in the MPT when the fragmentation balls are added, due to the breaking of the symmetries present in the hollow and layered bomblets. The inclusion of the balls has important effects on the MPT signature and these differences may be useful when undertaking object classification.
§.§.§ Timings
As an illustration of timings for the challenging bomblet example, we consider the solid bomblet from Section <ref> and note the computational savings reported here also carry over to the other bomblet models and other challenging examples in general. Timings were performed using workstation 2 described in Section <ref> with comparisons made for IM and MM for the MPT coefficient computation in the POD scheme. In Figure <ref>, we show the wall clock timing breakdown for the aforementioned setup, accelerated with the use of multi-threading as previously described in Section <ref> and 2 multiprocessing cores. The timing is broken down into the off-line stage of generating the mesh, computing the solution coefficients for the θ_i^(0,hp), computing the solution coefficients for the θ_i^(1,hp) snapshots, and computing the ROM (in which the TSVDs are obtained), and an on-line stage consisting of solving the smaller linear systems (<ref>) and computing the tensor coefficients. Comparing the IM and MM approaches, we see a significant saving in the final stage of computing the MPT coefficients, which, for this particular discretisation, required approximately 6× 10^4 seconds using the IM. In comparison, the computation of the MPT coefficients for the MM becomes negligible and reduces the overall computation to around 2× 10^4 seconds. There is an additional memory overhead in the MM approach, which is associated with the building of the larger matrices
𝐊, 𝐂, 𝐂^(1) and 𝐂^(2) of dimension N_d × N_d if M_d =N_d, which can be incorporated into the off-line stage of the ROM if desired. The on-line stage of the ROM only requires the smaller dimension matrices of size at most M× M with M ≪ N_d and the larger matrices can be disposed of once 𝐔_i^M are available. Nevertheless, this additional memory overhead is still less than the peak memory requirements during the computation of the solution coefficients for the θ_i^(1,hp) snapshots and so it is not of concern.
§ CONCLUSIONS
In this paper, we developed a new computational formulation for efficiently computing MPT coefficients from POD predictions. This, in turn,
has led to significant computational savings associated with obtaining MPT spectral signature characterisations of complex and highly magnetic conducting objects including magnetic disks of varying magnetic permeability and an idealised bomblet geometry. We have included timings to demonstrate the improvement in performance.
The paper has proposed a significant enhancement to our previous POD scheme by incorporating adaptivity, where we choose new snapshot frequencies based on an a–posteriori error estimate, which is obtained at negligible computational cost. This adaptive method has been shown to provide an efficient way of choosing new frequency snapshots that leads to smaller a–posteriori error estimates compared to using the same number of logarithmically spaced frequency snapshots. The adaptive scheme is particularly useful for further refining the number and location of frequency snapshots given an initial set of logarithmically spaced snapshots.
In addition, we have significantly extended our earlier work <cit.> and provided a simple recipe to determine the number and thicknesses of prismatic boundary layers
so as to achieve accurate solutions under p–refinement. By considering a magnetic sphere (for which an exact solution is known), we have shown that choosing suitable thicknesses for 2 layers of prismatic elements, combined with p–refinement, was sufficient to achieve a relative error E<1× 10^-3 for the MPT over a wide range of skin depths associated with materials and frequency excitations.
We have also included challenging realistic numerical examples that show the importance of using prismatic layers and the increased efficiency and improved accuracy of our new MPT calculation formulation.
We expect the procedures presented in this work to be invaluable for constructing large dictionaries of MPT characterisations of complex in-homogeneous realistic metallic objects. We also expect that the presented formulations for the adaptive POD, boundary layer construction, and the efficient post-processing to be transferable to the computation of GMPT object characterisations.
§ ACKNOWLEDGEMENTS
The authors would like to thank Prof. Peyton, Prof. Lionheart and Dr Davidson for their helpful discussions and comments on polarizability tensors.
The authors are grateful
for the financial support received from the Engineering and Physical Science Research Council (EPSRC, U.K.) through the research grant EP/V009028/1.
Both authors would also like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for their support and hospitality during the programme Rich and Non-linear Tomography - a multidisciplinary approach where work on this paper was undertaken and related support from EPSRC grant EP/R014604/1.
http://arxiv.org/abs/2307.04011v1 | 2023-07-08 | cs.RO, cs.LG

Robust Learning-Based Incipient Slip Detection using the PapillArray Optical Tactile Sensor for Improved Robotic Gripping

Qiang Wang, Pablo Martinez Ulloa, Robert Burke, David Cordova Bulens, Stephen J. Redmond
The ability to detect slip, particularly incipient slip, enables robotic systems to take corrective measures to prevent a grasped object from being dropped. Therefore, slip detection can enhance the overall security of robotic gripping. However, accurately detecting incipient slip remains a significant challenge. In this paper, we propose a novel learning-based approach to detect incipient slip using the PapillArray (Contactile, Australia) tactile sensor. The resulting model is highly effective in identifying patterns associated with incipient slip, achieving a detection success rate of 95.6% when tested with an offline dataset. Furthermore, we introduce several data augmentation methods to enhance the robustness of our model. When transferring the trained model to a robotic gripping environment distinct from where the training data was collected, our model maintained robust performance, with a success rate of 96.8%, providing timely feedback for stabilizing several practical gripping tasks. Our project website: <https://sites.google.com/view/incipient-slip-detection>.
§ INTRODUCTION
§.§ Background
Autonomous robots have yet to achieve human-like dexterity when performing gripping tasks, mainly due to a lack of satisfactory tactile perception and processing abilities. Studies have shown that even humans struggle with simple gripping tasks in the absence of tactile sensation <cit.>. The palm of the human hand contains ∼17,000 mechanoreceptors, i.e., specialized nerve endings that respond to mechanical stimuli such as deformation, pressure, and displacement <cit.>. These receptors play a crucial role in sensing and relaying tactile information to the nervous system <cit.>, allowing humans to adjust their grip in real-time to account for slipperiness and other factors. Building on these insights, researchers have designed tactile sensors replicating part of human hand sensing capabilities and explored slip detection techniques using these sensors to enhance robotic manipulation performance <cit.>.
§.§.§ Types of slip
The two main types of slip are gross slip and incipient slip. Gross slip refers to the occurrence of slip across the entire contact surface, where the relative motion between the gripper or tactile sensor and the gripped object is typically observable at a macro level <cit.>. On the other hand, incipient slip refers to the initial stage of slip, when parts of the contact surface slip while others remain stuck <cit.>. For example, when an object is held by elastic fingertips, and an external force is applied to the object in a direction tangential to the contact surface, some parts of the fingertips will stretch while others will compress, causing incipient slip at the periphery of the contact surface while the central part remains stuck. As the applied force increases, the slip will finally spread across the entire contact surface, leading to gross slip. Throughout the incipient slip phase, there may not be any observable relative motion between the object and the finger.
§.§.§ Slip detection and challenges
Previous studies have proposed techniques to detect gross slip and apply corrective measures when the slip is detected to prevent objects from dropping out of the grasp <cit.>. Detecting gross slip may not always be a wise strategy, as it occurs when the entire contact has already started slipping. On the other hand, detecting incipient slip can provide an early warning of an impending and more dangerous gross slip, allowing corrective measures to be applied earlier, and increasing the likelihood of maintaining a safe grip. However, detecting incipient slip is not trivial because it requires the contact interface of the sensor to possess adequate elasticity, enabling one part to undergo sufficient and detectable deformation, resulting in slip, while the other part remains stuck. Furthermore, validating incipient slip can be challenging since it is not generally associated with macro-level relative movement between the sensor/finger and the object. To verify the occurrence of incipient slip, researchers commonly utilize a camera to monitor the contact surface; by examining the camera images, they can visually confirm the presence of incipient slip events <cit.>. However, this method of relying on cameras may not be feasible in real-world situations, such as when gripping everyday objects.
§.§ Our contribution
Our study presents a new technique for detecting incipient slip using the PapillArray (Contactile, Australia) tactile sensor. This sensor features a square array of nine elastic silicone pillars with varying unloaded heights, promoting different normal forces on the pillars when pressed against a surface. This design enhances the likelihood of inducing incipient slip on shorter pillars when a tangential force is applied.
We utilized deep neural networks (NN) to develop our incipient slip detection algorithm, where we made novel use of the data gathered in a previous study <cit.> to construct the dataset for training and evaluating the NN. The primary objective of the NN was to classify inputs into two distinct categories: incipient slip and other, functioning as a binary classifier; other refers to all others states that are not incipient slip, such as gross slip or being stationary. Furthermore, the tactile data at hand is presented in the form of a uniformly-sampled time series. Therefore, to effectively capture the serial nature of the data, we utilize a recurrent neural network (RNN) <cit.>. The inclusion of historical data in a NN model has the potential to enhance its performance in real-time prediction tasks, as it enables the capture of temporal patterns and dependencies, leading to more robust and accurate forecasts <cit.>. We also propose several data augmentation methods designed to enhance the performance and robustness of our trained model, making it robust to environmental confounders.
§ RELATED WORK
Similar to the approach we will take in this paper, the approach proposed in <cit.> treats slip detection as a classification task; the authors employed a support vector machine <cit.> to detect slip using the velocity of embedded pins on the inner surface of a TacTip camera-based tactile sensor <cit.>. Labels of the training data are assigned manually based on the alignment of pin velocities. In a more recent study <cit.>, the authors modified the TacTip sensor used in <cit.> by introducing raised fingerprint-like ridges, decreasing skin thickness, and increasing pin spacing to reduce mechanical coupling between ridges and to create the traction differential and facilitating the shear displacement required for the occurrence of incipient slip. This is similar to the behavior seen on the human finger pad when sheared against an object, thus allowing the sensor to experience incipient slip. They used an external camera to monitor the contact in real-time for data labeling, and then employed a convolutional neural network <cit.> as a binary classifier to detect incipient slip.
The GelSight technology is another camera-based tactile sensing system that uses an elastic body to establish a contact with an object, with the built-in camera recording the resulting deformation to obtain tactile data <cit.>. An approach was introduced in <cit.> for detecting incipient slip using the GelSight sensor. This method determines the degree of incipient slip by analyzing the inhomogeneity of the displacement field, which is quantified in terms of entropy. More recently, a more advanced version of the GelSight technology, called GelSlim, was proposed in <cit.>; it employed the deviation of the deformation field from a 2D planar rigid displacement field to determine slip.
Compared to camera-based tactile sensors, the distributed optical sensor used in our work, the PapillArray, is less complex in terms of instrumentation<cit.>. It offers several advantages over other sensor designs, including size, temporal resolution, and compliance. A heuristic algorithm that employs the PapillArray tactile sensor to detect incipient slip is proposed in <cit.>. The approach is based on the observation that incipient slip happens when some sensor pillars stop deflecting at the same rate as the contacted object is moving in the sensor's frame of reference. Precisely, this approach detects slip by evaluating the tangential velocity drop with respect to a reference pillar, which is the pillar under the highest normal force (usually the center). In the case of rotational movements, with the center of rotation at the center pillar, the algorithm cannot detect any slip since no movement can be detected in the center pillar. This heuristic approach is further improved in <cit.> to account for rotational slips, detecting the deceleration of each pillar by comparing it to its own recent maximum velocity, and then it checks if other pillars are still in motion to confirm that the deceleration indicates an incipient slip. However, these methods may not be applicable when dealing with deformable or non-planar surfaces, or when only a subset of the pillars makes contact with the object. In such cases, establishing a dependable reference pillar to represent the object's movement in <cit.> becomes challenging; in <cit.>, it is difficult to determine whether the deceleration of pillars is caused by slip or by the shape of the object's surface.
In our work, we are motivated to take a learning-based approach in developing a dedicated incipient slip detection algorithm, where we propose domain adaptation techniques to enhance the robustness of our trained model, enabling it to effectively detect incipient slip for more realistic objects and contacts, overcoming the challenges outlined above.
§ MATERIALS AND METHODS
§.§ Hardware
§.§.§ Contactile sensor
Our study employed the commercial PapillArray sensor from Contactile[<https://contactile.com/>], depicted in Fig. <ref>, which is based on the concept described in <cit.>. The sensor outputs the real-time x-y-z force data experienced by each pillar at a high sampling rate of 1,000 Hz. Our training data was collected using the Dev Kit v1, while for the online evaluation of our trained model, we used the Dev Kit v2. Dev Kit v2 and Dev Kit v1 differ in size and the pillar Shore hardness.
§.§.§ Robotic gripping rig
Fig. <ref> displays the rig used in our study for the gripping task. The rig features a specialized two-finger gripper (RG2, OnRobot, Germany) with a blue adapter fixed to one of its fingers. This adapter serves to couple the Contactile PapillArray Dev Kit v2 sensor to the gripper finger. A white 3D-printed cuboid is used to extend another finger, matching the length of the finger equipped with the sensor. Moreover, a couple of ArUco markers are attached to this extended cuboid to track the gripper's pose. We replaced the original motor of the RG2 gripper with a stepper motor (MX-28, Dynamixel, US) to achieve high-frequency interruptible control of the gripper. The modified gripper was mounted on a six-axis robot arm (UR5e, Universal Robots, Denmark).
§.§ Data preparation
§.§.§ Collect slip data and annotate slip events for individual pillars
Our training dataset is sourced from <cit.>. In brief, the training data was acquired using a six-degree-of-freedom hexapod robot (H-820, Physik Instrumente, Germany) with the Contactile PapillArray Dev Kit v1 sensor mounted on the top. A transparent acrylic plate is fixed above the sensor on a T-slot frame and a video camera (Logitech Streamcam, Logitech, Switzerland) is positioned above the acrylic plate to capture videos of the contact between the sensor and the plate. During the data collection, the hexapod pushes the sensor vertically against the acrylic plate and then moves it laterally to induce a slip. The horizontal movement could be a translation, a rotation, or a combination of both. A total of 200 data sequences were collected, covering a range of compression levels, hexapod movement velocities, and movement directions. The recorded videos are processed using the Matlab Computer Vision Toolbox (MathWorks, USA) to track the pillar tip position. The tangential pillar tip velocity is then used to label the slip state (gross slip or not gross slip) of individual pillars.
§.§.§ Collect control data
When the sensor is compressed against a flat surface and moved laterally, the tangential velocity measured by each pillar will increase at first, as the sensor starts deforming, before reaching a peak velocity and subsequently decreasing its speed when the pillar stops deforming (Fig. <ref>). If a pillar stops deforming because it is undergoing incipient slip, at least one other pillar will still be deforming laterally; this is observed by an asynchronous decrease of the tangential velocity of the nine pillars (Fig. <ref> - Slip). However, if the object stops moving before any slip occurs, the tangential velocity magnitude of the nine pillars decreases almost simultaneously (Fig. <ref> - Stop).
Since the stop events display similar temporal features to slip events, we collected an additional dataset specifically focusing on stop events, consisting of a total of 28 data sequences. We label the data points in these sequences as other. By incorporating this dataset, the NN is less likely to misclassify between incipient slip and other, thereby improving the accuracy and reliability of the NN. The data collection process was similar to that of the slip events, except that the hexapod's movement was abruptly halted before any slip occurred. Further details on this process can be found in <cit.>.
§.§.§ Annotate the incipient slip
Based on the definition of incipient slip provided in Section <ref>, we annotate the incipient slip in the dataset as follows: we consider that incipient slip has occurred when at least one pillar slips with respect to the contact surface, while at least one other pillar remains stationary with respect to the contact surface. In other words, we start annotating incipient slip from the moment the first slip occurs on any pillar, and this interval continues until the time when all nine pillars have slipped. The slip label of each pillar is obtained as described in Section <ref>. It should be noted that when annotating incipient slip in the rotational data, we only consider the outer eight pillars. This is because the rotational movement is centered around the central pillar, which, by our definition, never slips (it remains in the same location on the contact area) for our data set.
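The annotation rule above can be summarized in a short sketch; the per-pillar boolean slip labels and the option to restrict to the outer eight pillars are assumptions about how the labels from the previous step are stored.

```python
import numpy as np

def annotate_incipient_slip(pillar_slip, use_pillars=None):
    """Label incipient slip from per-pillar slip labels.

    pillar_slip: bool array (T, 9); True once a pillar has slipped with
                 respect to the contact surface.
    use_pillars: pillar indices to consider (e.g. the outer eight for
                 rotational sequences); defaults to all nine.
    Returns a bool array (T,) that is True from the first pillar slip
    until all considered pillars have slipped.
    """
    if use_pillars is not None:
        pillar_slip = pillar_slip[:, use_pillars]
    any_slipped = pillar_slip.any(axis=1)
    all_slipped = pillar_slip.all(axis=1)
    return any_slipped & ~all_slipped
```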
§.§.§ Refine data sequence
The sensor output exhibits variance due to noise and sporadically produces glitches that deviate significantly from the mean value, displaying sudden extreme highs or lows. To address these issues, we apply a median filter with a window size of 21 samples on the raw sensor signal, which is sampled at 1,000 Hz.
We divided the raw data sequence into non-overlapping windows, with each window containing 40 samples. This division reduced the data rate to 25 Hz. This was done because of practical limitations in the hardware and software of our system. More precisely, the maximum refresh rate of our gripper servo is ∼62 Hz, and the computation rate of our classifier is ∼40 Hz. Moreover, it is worth noting that reliable gripping does not necessarily require a high sampling frequency. Indeed, humans have a reaction time of approximately 80-120 ms (equivalent to 8.3-12.5 Hz) <cit.>, enabling us to perform most everyday gripping tasks effectively.
Finally, we only consider the x-y forces on the pillars as input in NN training, while excluding the z force. During the data collection process, when the hexapod moves tangentially to induce slip, it remains stationary in the z direction. As a result, we assume that the z force does not play a significant role in detecting incipient slip in our case. It should be acknowledged that in real-world scenarios, the normal force can provide valuable information for humans to detect slip, and it is likely to vary appreciably for different gripping objectives. Therefore, another reason for excluding the z force is to prevent the NN from incorrectly learning that the z force remains relatively stable during slip events, as occurs in our data set.
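A minimal preprocessing sketch combining the three steps above (median filtering, windowing to 25 Hz, and dropping the z force); the array layout (T, 9, 3) is an assumption about how the raw 1 kHz sensor stream is stored.

```python
import numpy as np
from scipy.signal import medfilt

def preprocess(raw, kernel=21, window=40):
    """raw: (T, 9, 3) x-y-z forces per pillar sampled at 1,000 Hz.
    Returns non-overlapping windows of shape (T // window, window, 9, 2)
    at an effective rate of 25 Hz, keeping only the x-y components."""
    filtered = medfilt(raw, kernel_size=(kernel, 1, 1))  # de-glitch each channel
    xy = filtered[:, :, :2]                              # exclude the z (normal) force
    n = xy.shape[0] // window
    return xy[: n * window].reshape(n, window, 9, 2)
```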
§.§ Training data augmentation
§.§.§ Data augmentation by rotational symmetry
During the data collection process, the sensor is placed at the origin of the world coordinate frame. Its horizontal surface is parallel to the x-y plane of the world frame of reference, and the side edges align with the x-y axis directions. Hence we use a rotation transformation to augment the data; intuitively, it can be understood as rotating the initial position of the sensor around the z axis by a random angle. For each data point in a sequence, we perform the following mathematical calculations:
[ F_x'; F_y' ] = [ cos(θ) -sin(θ); sin(θ) cos(θ) ]·[ F_x; F_y ], θ∈[0,2π),
where F_x and F_y represent the force values along the original x-y axis, and F_x' and F_y' are the augmented force values after virtual rotation of the sensor by a randomly sampled angle, θ, from a uniform distribution of [0, 2π).
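A sketch of this symmetry-based augmentation, applied to a whole sequence at once so that all samples share the same virtual sensor rotation:

```python
import numpy as np

def rotate_augment(forces, rng=np.random.default_rng()):
    """forces: array (..., 2) of x-y pillar forces; returns the same array
    rotated by one randomly sampled angle theta, i.e. the equation above
    applied sample-wise."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return forces @ rot.T  # each (F_x, F_y) pair becomes (F_x', F_y')
```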
§.§.§ Advanced data augmentation for domain adaptation
The data used in our study was collected under idealized conditions, where a hexapod robot was used to compress the sensor against a flat surface and move laterally in a controlled manner. In this setup, the force was nearly perpendicular or parallel to the contact surface and the movement speed was nearly constant. However, in real-world robotic gripping, the conditions are expected to be quite different from this idealized setup, and the performance of the model trained on such data is expected to be poor. We identify several issues that may arise when transferring the model trained on idealized data to real-world gripping scenarios, and we propose a range of advanced data augmentation methods to address these issues in the following paragraphs. These methods are designed to generate synthetic data that mimics the real-world variability of gripping (a code sketch of these augmentations follows the list):
* Issue: The slipping velocity in real-world robotic gripping is not constant, as it is influenced by various factors such as gravity, friction, and the shape of the object being gripped. However, during the data collection process, the hexapod induces slip at a constant velocity. Remedy: We employ random sampling to sample a percentage of data points from the raw data sequence, thereby generating a new data sequence. And we maintain the frequency of the new sequence at the same rate as the raw sequence (1,000 Hz). This approach can simulate velocity variations to mimic real-world gripping scenarios, as it changes the magnitude differences of some temporally adjacent data points while keeping the time interval unchanged.
* Issue: In some gripping scenarios, a portion of the sensor pillars may not be in contact with the object. For instance, this can occur when employing sensors to grip an object with a rounded surface or when gripping an object smaller than the sensor's contact area. Remedy: To simulate an unloaded pillar, we substitute a number of pillar data sequences with zero sequences. Noise is then added to make the generated sequence resemble a realistic sensor signal. The noise is derived from a normal distribution with a mean of 0.0 N and a standard deviation of 0.001 N.
* Issue: Unlike with the hexapod, the force generated by the gripper may not be perfectly perpendicular to the x-y plane of the sensor frame of reference, and the force leading to slip may not be perfectly in this plane. This can occur when the gripped object is not flat or the mechanical linkage of the gripper flexes when applying force to the object. Remedy: First, we sampled nine individual pillar sequences from raw sensor sequences with different sensor compression levels and hexapod movement types, and then combined them to form a new sensor sequence. Secondly, we scaled (scale factor ranging from 0.2 to 2.0) the magnitude of values for a number of pillar sequences. Lastly, we randomly permuted the position (by pillar index) of a nine-pillar sequence. Employing these techniques can encourage the NN to capture a broader and more comprehensive pattern of incipient slip (see Section <ref>), rather than only learning the limited pattern introduced by the hexapod.
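The three remedies above can be sketched as follows; the kept fraction, the number of dropped or rescaled pillars, and the layout of the single-pillar sequence bank are illustrative assumptions rather than the exact settings used.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_velocity_variation(seq, keep=0.8):
    """Randomly subsample a fraction of time steps; replaying the result at
    the original 1,000 Hz rate mimics a non-constant slipping velocity."""
    idx = np.sort(rng.choice(len(seq), size=int(keep * len(seq)), replace=False))
    return seq[idx]

def drop_pillars(seq, n_drop=2, noise_std=0.001):
    """Replace n_drop pillar channels with zeros plus Gaussian noise
    (mean 0.0 N, std 0.001 N) to imitate pillars that are not in contact."""
    out = seq.copy()
    drop = rng.choice(9, size=n_drop, replace=False)
    out[:, drop, :] = rng.normal(0.0, noise_std, size=out[:, drop, :].shape)
    return out

def recombine_scale_permute(pillar_bank, scale_range=(0.2, 2.0), n_scale=3):
    """Assemble a new nine-pillar sequence from independently sampled
    single-pillar sequences (assumed equal length), rescale a few of them,
    and permute the pillar indices."""
    seq = np.stack([pillar_bank[i] for i in rng.choice(len(pillar_bank), 9)], axis=1)
    scaled = rng.choice(9, size=n_scale, replace=False)
    seq[:, scaled, :] *= rng.uniform(*scale_range)
    return seq[:, rng.permutation(9), :]
```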
§.§ Neural networks
The key decision making component of our incipient slip detection approach is a binary classifier. Initially, we trained a NN capable of estimating the probability of incipient slip for each time point in a sequence. Next, we set a threshold to convert the continuous probability into a binary output. To enhance the accuracy of the classifier, we used an ensemble technique that trains multiple independent classifiers concurrently and aggregates their output probabilities to produce the final decision (shown in Fig. <ref>).
§.§.§ Architecture
Fig. <ref> illustrates the process of inputting a data sequence into the NN and obtaining the corresponding slip classification. The modified data sequence, as explained in Section <ref>, is input into an encoder. Subsequently, the encoder output is passed to a specific type of RNN called a gated recurrent unit (GRU) <cit.>. In our approach, we utilize a single layer of GRU for each propagation step, and we refer to it as a GRU cell. The hidden output from the GRU cell is generated as a combination of the current input and historical information. Moreover, an estimator is included that takes the hidden layer output from the GRU cell and converts it into a probability estimation. The ground truth label of each window is determined by the label of the last sample in the window.
§.§.§ Training
The ensemble model consists of Z (Z=5 in our case) independently trained classifier models. During each training iteration of each classifier model, a subset comprising a proportion of λ sequences (λ=40% in our case) is randomly sampled with replacement from the entire training set and used for NN training. The final layer of the estimator utilizes a two-class softmax activation function, with its outputs interpreted as probabilities for the occurrence of incipient slip and other. Our chosen loss function is binary cross-entropy.
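A bagging-style sketch of this ensemble training; build_model() and model.fit() stand in for constructing and training one classifier with the PyTorch loop described later.

```python
import numpy as np

def train_ensemble(sequences, labels, build_model, Z=5, lam=0.4, seed=0):
    """Train Z classifiers, each on a random 40% of the training sequences
    drawn with replacement, and return the list of trained models."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(Z):
        pick = rng.choice(len(sequences), size=int(lam * len(sequences)), replace=True)
        model = build_model()
        model.fit([sequences[i] for i in pick], [labels[i] for i in pick])
        models.append(model)
    return models
```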
§.§.§ Decision making
We aggregate the output probability from each classifier model in the ensemble to convert the continuous probability to binary prediction:
f:=1[∑_z=1^ZM_z(x=[F_(n-1)T+1,···,F_nT])/Z >P_th],
where 1[·] is an indicator function, M_z denotes the z^th classifier model in the ensemble, x denotes the input vector, and P_th denotes the probability threshold, which is 50% in our work. Z denotes the number of classifiers in the ensemble model.
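A direct implementation of this decision rule; predict_proba is a stand-in name for a call that returns one model's estimated probability of incipient slip for the current window.

```python
import numpy as np

def incipient_slip_decision(models, window, p_th=0.5):
    """Average the incipient-slip probabilities of the ensemble members for
    the current 40-sample window and threshold the mean at P_th."""
    probs = [m.predict_proba(window) for m in models]
    return float(np.mean(probs) > p_th)
```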
§ EXPERIMENTS AND RESULTS
We first explicitly display our method's high success rate in detecting incipient slip, including offline and online scenarios. Then, we illustrate the practical benefits of our approach by showcasing its ability to stabilize an insecure robotic grasp in a number of practical gripping tasks.
§.§ Offline evaluation
The entire dataset is randomly split into two subsets: a training set (∼80% of the entire dataset, comprising 160 data sequences of slip event and 23 data sequences of stop event) for model training, and a test set (∼20% of the entire dataset, consisting of 40 data sequences of slip event and 5 data sequences of stop event) for model evaluation. Both subsets are expanded through the symmetry-based augmentation method described in Section <ref>, resulting in a five-fold increase in the size of the training set and test set.
Fig. <ref> displays two examples comparing the incipient slip detection results over slip and stop events. As observed, the algorithm's confidence in labeling incipient slip increases rapidly as incipient slip starts and decreases as it progresses toward gross slip. In comparison, the probability in the stop case fluctuates slightly but remains well below the threshold.
We define an incipient slip detection as successful if it occurs within a 0.3 second window preceding the true labeled time point of incipient slip (to accommodate the error of the ground truth) and prior to the occurrence of the gross slip. For the stop event, a successful estimation is defined as a classification of the entire sequence as other.
Fig. <ref> presents the confusion matrix, displaying the final classification results over the entire test set; our algorithm achieves an overall success rate of ∼95.6%. The results also demonstrate its effectiveness in differentiating between the slip and stop events; this indicates that our algorithm is not simply detecting the changes in the force and yank of the pillars, as mentioned earlier in Section <ref>.
Our algorithm can effectively detect incipient slip in its early stages.
In Fig. <ref>, we present the latency between the moment of incipient slip detected by the algorithm and the ground truth onset of incipient slip. It is evident that, on average, incipient slip can be detected within 10 ms of its initiation.
§.§ Online evaluation
In the online evaluation stage, we utilized the full data set for training the final deployed model. Again, to increase the amount of training data, we applied both symmetry-based (see Section <ref>) and advanced data augmentation (see Section <ref>) techniques, resulting in a five-fold increase in data amount (1140 data sequences).
The online evaluation was performed on six everyday objects, depicted in Fig. <ref>. We include objects of varying surface materials, curvatures, and hardness to ensure a broad range of conditions are represented in our results.
§.§.§ Validating incipient slip detections
We cannot easily validate incipient slip occurrences for everyday objects as we cannot independently monitor individual pillar contacts. Hence, we choose to perform the online evaluation based on following well-founded assumptions. The incipient slip detection is considered successful if it can be detected at any time-point between the time when the robot's movement begins (T_m) and the time when gross slip occurs (T_g); the criterion for determining the occurrence of gross slip has been arbitrarily defined as the occurrence of relative translational movement greater than 2 mm or relative rotational movement exceeding 2^∘ between the object and the robot's frame of reference.
To induce a slip, the gripper first grips the object with a constant force. Then the robot moves the gripper downwards towards a rigid and stationary table surface, eliciting the slip between the sensor attached to the gripper tip and the object. In each trial, the gripping force is selected from a range of 8 N to 30 N. The robot movement can be either translational, rotational or a combination of translational and rotational. The velocity (v) and acceleration (a) of the robot movement have three different levels: low (v = 4 mm.s^-1, a = 10 mm.s^-2), medium (v = 10 mm.s^-1, a = 50 mm.s^-2), and high (v = 40 mm.s^-1, a = 100 mm.s^-2). All robot movements were performed using the built-in movel function of the UR script. The tool center position and orientation are obtained using the built-in getl function of the UR robot. This function employs forward kinematics calculations based on the read joint angles.
In accordance with the offline evaluation, control trials are also conducted here for each v and a combination and movement type. The purpose is to validate that the identified behavior is indeed the incipient slip, rather than an event with a similar pattern, such as the stop event mentioned above. The control data involves lifting the robot arm while maintaining a secure grip using a pre-determined grip force that is sufficient to prevent any slippage. As a result, when lifting an object, the pillars in contact undergo downward deformation due to the force of gravity; subsequently, once the object is securely held by the gripper and remains relatively motionless, these pillars will remain stationary. Here, for the sake of convenient explanation, we will also refer to this event as stop, and we label the sequence as other. To ensure a fair experiment, we add extra weight to lightweight objects to enhance their downward motion when being lifted, aiming to make the pattern of the output data sequence more like a slip event. In total, our experiment consisted of 216 trials, including 162 sequences of slip event (6 objects × 3 movements × 3 forces × 3 velocity/acceleration combinations) and 54 sequences of stop event (6 objects × 3 movements × 1 force × 3 velocity/acceleration combinations).
Fig. <ref> illustrates the final validation results. Fig. <ref> shows a confusion matrix, highlighting the high success rate (∼96.8%) of our method in detecting incipient slip and its ability to differentiate between slip and stop events. Fig. <ref> demonstrates that our algorithm can detect incipient slip almost immediately upon the initiation of the movement that induces slip, with a normalized displacement D_norm range of 0.2 - 0.4, within which the incipient slip can be detected (refer to the caption for the definition of D_norm). These results provide comprehensive validation of the effectiveness of our approach in detecting incipient slipping in real-world gripping tasks.
§.§.§ Ablation study
This study aims to showcase the effectiveness of our advanced augmentation method in bridging the domain gap between the idealized data collected with the hexapod and more realistic data encountered with the robotic gripper. To accomplish this, we employed the model training approach described in Section <ref>. However, instead of splitting the data into separate train and test sets, we trained the model using the entire dataset here, given the different objective. Subsequently, we conducted online gripping experiments, as described in Section <ref>, using this trained model. Our findings, as illustrated in Fig. <ref>, indicate that the model trained without our advanced augmentation method exhibits a notably high false positive rate in the subsequent online gripping task when compared to the results shown in Fig. <ref> where the model was trained using our advanced augmentation method. In other words, the model trained without our advanced augmentation is unable to effectively distinguish patterns between slip and stop events. As a result, it incorrectly detects incipient slip in many stop events.
§.§ Grasp stabilization after incipient slip detection
This experiment aims to show the benefit of using our incipient slip detection method in practical gripping tasks. This involves lifting the robot arm while gripping the object with a pre-determined small force to ensure that slip occurs. We applied our incipient slip detection method and adjusted the grip when incipient slip was first detected to prevent the object from slipping further. In this experiment, we simulate two common scenarios that can trigger slips. The first involves gripping an object at its center of gravity with insufficient force and lifting it, causing a translational slip between the gripper and the object. The second involves gripping an object away from its center of gravity and lifting it, where rotational slip is likely to occur. We implemented a simple grip force adaptation that responds to incipient slip detection as follows: if incipient slip is detected, the robot immediately stops, and the gripper applies a pre-determined secure force to the object. The objects used in the experiment are the same as those shown in Fig. <ref>. The experiment was conducted 36 times (6 objects × 2 scenarios (translation or rotation) × 3 repetitions). We fix ArUco markers on the objects and gripper and use Python OpenCV to track the positions and orientations of both.
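The grip force adaptation can be written as a small reactive loop; the interface names (is_moving, read_window, stop, set_force) are placeholders for the robot and gripper drivers, not actual API calls.

```python
def stabilize_grasp(sensor, robot, gripper, classifier, secure_force):
    """Monitor the slip classifier at 25 Hz while the robot lifts the object;
    on the first incipient-slip detection, stop the robot and apply the
    pre-determined secure grip force."""
    while robot.is_moving():
        window = sensor.read_window()   # latest 40-sample x-y force window
        if classifier(window):          # incipient slip detected
            robot.stop()
            gripper.set_force(secure_force)
            break
```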
We report the results in Table <ref>, which demonstrate the quick and effective detection of incipient slip using our algorithm. On average, our algorithm can detect incipient slip in a timely manner and prevent the object from slipping when the relative translation between the object and the gripper reaches 2.5 mm and the relative rotation reaches 1.9 ^∘. Our algorithm showcases its ability to facilitate timely corrective action, preventing object falls; a demonstration video can be seen at our project website given in the abstract.
§ DISCUSSION
Our developed algorithm enables the NN to effectively learn the incipient slip pattern from offline data and demonstrates high accuracy in both offline and online test sets. Furthermore, our algorithm enhances the security of robotic gripping.
Compared to previous related works <cit.>, our algorithm offers several advantages. Firstly, our incipient slip detection algorithm incorporates a data-driven learning-based approach, minimizing the need for extensive human involvement in investigating the complex patterns of incipient slip. Secondly, the improved robustness of our algorithm enables the NN to effectively adapt to diverse domains with various types of PapillArray sensors and robotic gripping systems, despite being trained solely on data lacking heterogeneity. Therefore, our algorithm is more practical and possesses greater potential for maximizing the utilization of valuable tactile data in real-world scenarios. Thirdly, our algorithm has the ability to distinguish between incipient slip and a closely related tactile pattern that we refer to as a stop event. Notably, previous related work <cit.> has not adequately considered or addressed the stop event; however, our investigation has revealed the importance of including stop events when developing incipient slip detection algorithms due to their similar patterns but entirely different consequences.
There are limitations to our work that need consideration. Firstly, the incipient slip detection could be improved by transitioning from a binary signal to a continuous warning signal. For instance, if incipient slip is detected in a small portion of the contact surface, the remaining area may still possess sufficient friction to prevent significant slippage. In such cases, the warning level of incipient slip is low and corrective actions may not be necessary. Conversely, if a significant portion of the contact surface exhibits incipient slip, the warning level should escalate and it becomes important to take appropriate corrective actions. Moreover, our current choice of force adaptation method for reacting to incipient slip falls short when compared to the state-of-the-art gripping control work <cit.>. However, it is important to note that force adjustment is not the primary focus of our research in this paper, which is focused on improving the incipient slip detection. In future work, we will develop a more sophisticated force adaptation technique that incorporates our incipient slip detection method.
§ CONCLUSION
In conclusion, this paper presents an incipient slip detection method that employs deep learning and several data augmentation techniques to improve the robustness of the trained NN. Our method is highly effective and reaches state-of-the-art performance; it enables a single pre-trained NN model to be applied across various domains and tasks. In addition, our method has the potential to be extended to other approaches that use compliant tactile sensors.
To train the NN parameters, we use stochastic gradient descent with a momentum of 0.95 and a learning rate of 10^-3, with a batch size of 512. We also incorporate a weight decay of 10^-3 using L_2 regularization during training. The encoder NN consists of one hidden layer with 1024 units, and the output dimension is 128. The GRU cell has a hidden layer dimension of 128. The predictor network comprises two hidden layers with 256 and 128 units, respectively. To all hidden layers, we apply rectified non-linearity <cit.> and batch normalization <cit.>.
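A PyTorch sketch of the network and optimizer with the sizes quoted above; the flattened per-window input dimension (40 samples × 9 pillars × 2 force components) is an assumption about how each window is fed to the encoder.

```python
import torch
import torch.nn as nn

class SlipClassifier(nn.Module):
    def __init__(self, in_dim=40 * 9 * 2):
        super().__init__()
        # encoder: one hidden layer of 1024 units, 128-dimensional output
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.BatchNorm1d(1024), nn.ReLU(),
            nn.Linear(1024, 128),
        )
        self.gru = nn.GRUCell(128, 128)   # single GRU cell, hidden size 128
        # estimator: two hidden layers (256 and 128 units), two-class output
        self.estimator = nn.Sequential(
            nn.Linear(128, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, window, hidden):
        hidden = self.gru(self.encoder(window), hidden)
        return torch.softmax(self.estimator(hidden), dim=-1), hidden

model = SlipClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.95, weight_decay=1e-3)
```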
We implement our NN using PyTorch (Version 1.12.1, Meta, USA). All our experiments are conducted on a PC with an Intel i7-10875H CPU and an NVIDIA 2060 GPU. During the online evaluation stage, we utilise ROS <cit.> to facilitate communication between various components in our system.
|
http://arxiv.org/abs/2307.04176v1 | 20230709135730 | Electron-phonon driven charge density wave in CuTe | [
"Marco Campetella",
"Giovanni Marini",
"Jianqiang Sky Zhou",
"Matteo Calandra"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] |
[][email protected]
Consiglio Nazionale Delle Ricerche (CNR) SPIN, 00133 Rome, Italy
Dipartimento di Biotecnologie, Chimica e Farmacia, Università di Siena, Via Aldo Moro 2, Siena, I-53100, Italy
Graphene Labs, Fondazione Istituto Italiano di Tecnologia, Via Morego, I-16163 Genova, Italy
Sorbonne Université, CNRS, Institut des Nanosciences de Paris, UMR7588, F-75252, Paris, France
[][email protected]
Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo, Italy
Graphene Labs, Fondazione Istituto Italiano di Tecnologia, Via Morego, I-16163 Genova, Italy
Sorbonne Université, CNRS, Institut des Nanosciences de Paris, UMR7588, F-75252, Paris, France
The compound CuTe (vulcanite) undergoes a quasi one dimensional charge density wave (CDW) at T< T_CDW=335 K with a 5×1×2 periodicity. The mechanism at its origin is debated. Several theoretical works claimed that semilocal functionals are unable to describe its occurrence and ascribed its formation only to strong electron-electron interaction. Moreover, the possible role of quantum anharmonicity has not been addressed. Here, by performing quantum anharmonic calculations, we show that semilocal functionals correctly describe the occurrence of a CDW in CuTe if ultradense electron momentum grids allowing for small electronic temperatures are used. The distortion is driven by the perfect nesting among 1D Fermi surface sheets extending in the k_y direction. Quantum anharmonic effects are important and tend to suppress both the distortion and T_CDW. The quantum anharmonic structural minimization of the CDW phase in the generalized gradient approximation leads, however, to distorted Te-Te bond lengths in the low temperature phase that are 21% of the experimental ones at T=20 K. This suggests that, even if the electron-electron interaction is not crucial for the mechanism of CDW formation, it is relevant to accurately describe the structural data for the low-T phase. We assess the effect of correlation on the CDW by using the DFT+U+V approximation with parameters calculated from first principles. We find that correlation enhances
the Te-Te distortion, T_CDW
and the total energy gain by the distortion.
Electron-phonon driven charge density wave in CuTe.
Matteo Calandra
Received 27 February 2023; accepted 23 May 2023
===================================================
§ INTRODUCTION
One dimensional (1D) and quasi 1D crystals are prone to charge density wave (CDW) instabilities due to their low dimensional, often point like, Fermi surfaces and the resulting divergence in the charge response. This is what is predicted by the Landau-Peierls theory that is characterized by three main features: (i) the transition is second order and manifests itself via a soft phonon going to zero at the transition temperature (T_CDW), (ii) the occurrence of an order parameter (the phonon displacement induced by the CDW) that is non-zero only for T<T_CDW and decreases with increasing temperature until it becomes zero at the transition and, finally, (iii) the opening of a gap in the electronic excitation spectrum whose magnitude should be of the same order as the total energy gain by the CDW distortion. The common belief is that most one dimensional systems are globally well modeled by the Landau-Peierls model.
The Landau-Peierls model is, however, incomplete as it does not account for quantum anharmonicity (i.e. the quantum nature of the ions and the anharmonicity in the ionic potential) that is crucially important for light atoms and in proximity of a second order structural instability. Moreover, it neglects strong electron-electron interaction. Recently the archetypal case of carbyne in vacuum was studied with a variety of density functional theory (DFT) and manybody approaches accounting for non-perturbative quantum anharmonicity and electron-electron interaction<cit.>. It was shown that the total energy gain by the distortion is two orders of magnitude smaller (25 meV) than the distortion-induced electronic gap (3 eV) and, most surprisingly, the order parameter increases with increasing temperature for T<T_CDW. This pathology of the carbon chain in vacuum is in part related to the light mass of the carbon atoms and their quantum nature and does not invalidate the applicability of Landau-Peierls theory in a broader spectrum of materials. Still, it implies that there could be remarkable exceptions to this theory, either because quantum anharmonic effects and the electron-electron interaction are crucially important or because the Fermi surface deviates from what is expected for a 1D system. The applicability of the Landau-Peierls picture to higher dimensional materials has been questioned in several works<cit.>.
Recently, the layered material CuTe (vulcanite) received considerable interest<cit.>. In this compound
Te chains run above and below a puckered
copper layer so that each copper atom has a distorted
tetrahedral environment (see Fig.<ref> and Ref. stolze2013cute).
At temperatures lower than T_CDW = 335 K,
CuTe undergoes a 5× 1× 2 CDW <cit.>. The distortion involves a Te-Te bond alternation with phonon displacements as shown in Fig. <ref>. The superstructure is
visible in ARPES data as a (partial) gapping of the Fermi surface<cit.>, the maximum size of the gap being approximately 192 meV<cit.>. ARPES data<cit.> in the high-T phase show the occurrence of quasi 1D Fermi surface sheets extending along the k_y direction and perfectly nested along k_x.
Resistivity <cit.> and optical<cit.> data confirm the quasi 1D character of the CDW as the temperature dependence of the resistivity along the b axis shows the classical behaviour of a metallic system and is not affected by the CDW, while the resistivity along the a axis ( i.e. the CDW direction) displays a marked hump at T=T_CDW. Interestingly, the Hall coefficient is enhanced by approximately a factor of two across the CDW transition (larger values of R_H are in the distorted phase) suggesting a carrier reduction but an incomplete gapping of the Fermi surface in the low-T one<cit.>. The constant pressure specific heat displays a marked jump at the transition albeit with no hysteresis, confirming the second order nature of the transition<cit.>.
Thus, this experimental picture seems to point to a second order Landau-Peierls transition mostly due to the perfect nesting of the quasi 1D Fermi surface sheets.
However, three recent theoretical works <cit.> calculated the harmonic phonon dispersion of the high-T phase of CuTe with semilocal functionals and found no tendency toward CDW (i.e. no imaginary phonon frequencies).
The difficulty in reproducing the occurrence of the CDW with semilocal functionals led some authors to speculate that the CDW in this system is exclusively driven by electron-electron correlation<cit.>. Indeed, by performing DFT+U calculations, the authors of Ref. <cit.> showed that very large values of U can induce a structural instability comparable with the experimental one. However the considered value for the Hubbard parameter (U=9 eV) is extremely large and not calculated ab initio. Furthermore, the role of anharmonicity was not discussed. Recently, a careful study of collective excitations in CuTe <cit.> pointed out the possible existence of acoustic plasmons, making the study of this compound even more appealing. Finally, CuTe has been reported to support a superconducting state at high pressures <cit.>
In this work we investigate the electronic, structural and vibrational properties of CuTe within density functional perturbation theory. We include the effect of non-perturbative quantum anharmonicity by using the Stochastic Self-Consistent Harmonic Approximation <cit.>.
We demonstrate that, contrary to what claimed in all published theoretical papers and in agreement with the experimental picture, the CDW is mostly driven by the electron-phonon coupling and Fermi surface nesting with relevant corrections related to quantum anharmonicity. Electron-electron interactions are not negligible but are not the driving force for the CDW transition: they are probably required to accurately describe the structural properties of the low-T phase.
The paper is structured as follows. In Sec. <ref> we give the technical details of the first principles calculations, in Sec. <ref> we address the electronic structure and the mechanism for CDW formation, in Sec. <ref> we describe the structural properties of the CDW phase and in Sec. <ref> we draw the main conclusions.
§ TECHNICAL DETAILS
Density-functional theory (DFT) and density functional perturbation theory (DFPT) calculations are carried out using the Quantum ESPRESSO package<cit.>. We use the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE)<cit.> parametrization.
The experimentally measured lattice parameters for bulk CuTe, a = 3.151 Å, b = 4.089 Å and c = 6.950 Å, are adopted in all calculations, while we perform structural optimization of internal coordinates.
We use ultrasoft pseudopotentials<cit.> and a
50 Ry plane wave energy cutoff for the kinetic energy (500 Ry for the charge density).
As phonon dispersion curves in one dimensional materials are extremely sensitive to the k-point sampling and to the electronic temperature (T_e) used in the calculation, we perform extremely accurate convergence tests of the phonon frequency at the CDW phonon momentum q_CDW=[0.4,0,0.5] (square brackets mean that the components are given with respect to the basis vectors of the reciprocal lattice).
In more details,
the harmonic phonon dispersion is calculated using Γ centered k-points meshes. We considered grids of the kind k_x×16× 4 with k_x values up to 150. We then calculate the phonon frequency for each mesh as a function of the Fermi temperature used in the calculations. The results of these calculations are
explained in more detail in Sec. <ref>. At the end of these tests we adopted an 80×16×4 electron-momentum grid in the 1×1×1 cell and an electronic temperature T_e=200 K (Fermi-Dirac smearing).
When using supercells, the k-points meshes are then rescaled according to the size of the supercells (e.g., we use a 8×16×2 k-points mesh on a 10×1×2 cell and a 16×16×4 k-points mesh on a 5×1×1 cell).
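For reference, the Fermi-Dirac smearing width corresponding to a given electronic temperature (the quantity entering the calculations as a smearing parameter, in Ry) can be tabulated with a few lines; the set of temperatures and k_x values listed below is illustrative.

```python
K_B_RY = 6.33362e-6  # Boltzmann constant in Ry/K

# electronic temperatures scanned (K) -> smearing widths (Ry)
for t_e in (100, 200, 300, 500, 750, 1000):
    print(f"T_e = {t_e:5d} K  ->  smearing = {K_B_RY * t_e:.6f} Ry")

# k-point grids of the form k_x x 16 x 4 considered in the convergence tests
k_grids = [(kx, 16, 4) for kx in (30, 40, 60, 80, 100, 150)]
```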
The quantum anharmonic calculation is performed with the Stochastic Self Consistent Harmonic Approximation (SSCHA)<cit.>.
The SSCHA is a stochastic variational technique that allows to access the non-perturbative quantum anharmonic free energy and its Hessian with respect to the atomic positions <cit.> (i.e., the phonon spectrum). The SSCHA technique requires the evaluation of forces in supercells with atoms displaced from their equilibrium positions following a suitably chosen Gaussian distribution. The forces can be calculated by using any force engine. In this work
we used DFT with the PBE functional for the force calculation.
We calculate the forces using the Quantum ESPRESSO package and supercells ranging from 5× 1×2 to 10× 1×2. In a
10×1×2 supercell (80 atoms) of the high-T phase structure the number of DFT force calculations needed to converge the free energy is
of the order of 800, while approximately 2000 forces are needed to converge the free energy Hessian at T = 0 K. The computational effort is substantial given the dense electron-momentum grids.
We determine the nature and the critical temperature T_CDW of the CDW transition by monitoring the positional free energy Hessian (second derivative of the free energy with respect to the atomic positions)<cit.>, as dictated by Landau theory of phase transitions.
§ HIGH-T PHASE
We first calculate the electronic structure of the high-T phase and compare the Fermi surface with that measured in ARPES (see Fig. <ref>).
Each panel refers to constant energy cuts from E_F to -0.5 eV from E_F (the value of the constant energy with respect to E_F is shown on the top of each panel) in the (k_x,k_y) plane and for k_z=0. Experimental ARPES data from Ref. zhang2018evidence
are also included for reference.
Globally the agreement between the experimental and measured constant energy scans is excellent. We are able to recover both the pockets extending along the k_x direction and the quasi-1D line segments along the k_y direction. These last dispersionless bands extending only along the k_y direction are clear fingerprints of the 1D physics in vulcanite.
The sharpness of these 1D Fermi surface portions suggests that a remarkably dense k-points mesh along the k_x direction may be required in order to correctly sample their contribution to the phonon dispersion at phonon momentum q= q_CDW.
We explicitly verified this point by performing careful convergence of the lowest energy phonon frequency at q= q_CDW as a function of k_x points and Fermi-Dirac electronic temperature T_e. The results are shown in Fig. <ref> (top) and unambiguously show that grids having k_x≈ 80 and electronic temperatures comparable to T_CDW must be used to see the CDW. By adopting an electronic temperature T_e=200 K and a k-point mesh of 80× 16× 4 we find converged results. As can be seen, the lowest phonon frequency at q= q_CDW is imaginary, not positive as has been reported in all published theoretical papers in the field<cit.>.
In these works, the difficulty in performing Brillouin zone sampling for CuTe has been completely overlooked. Much coarser grids, such as 30×16× 4, and, most likely, larger electronic temperatures have been used. We point out that the technical details reported in Refs. <cit.> are incomplete and the calculations are not reproducible (as an example in Refs. <cit.> the value of the electronic temperature is not reported).
From Fig. <ref>, it is also clear that by using the PBE semilocal functional and simply increasing the electronic temperature, i.e. neglecting quantum anharmonicity, the CDW critical temperature is in the range T_CDW=700-750 K. This is only a factor of two higher than the experimental one, suggesting that Fermi surface nesting is an important effect in this system.
The CuTe harmonic phonon dispersion is reported in Fig. <ref> (bottom panel). As can be seen, there are two sharp dynamical instabilities corresponding to the modulations q_CDW=[0.4,0,0.5] and
q_CDW^'=[0.4,0,0.0]. The planar instability at q_CDW^' leads to slightly more unstable phonons. However, small changes in the simulation details (structural parameters, functional used,...) lead to a more unstable mode at q_CDW. These two instabilities are then almost degenerate. The local character in momentum space of the instability points to a crucial role of the Fermi surface.
In order to confirm this point we calculate the electron-phonon contribution to the phonon linewidth (FWHM), namely
γ_ qν=4 πω_ qν/N_k∑_ k,n,m|g_ k n, k+ q m^ν|^2δ(ϵ_ k n-E_F)δ(ϵ_ k+ q m-E_F)
where ω_ qν are the harmonic phonon frequencies, ϵ_ kn are the Kohn-Sham energy bands, E_F is the Fermi level and g_ k n, k+ q m^ν is the electron-phonon matrix element. We calculate γ_ qν for the lowest energy phonon mode along the ZU direction. The results are shown in Fig. <ref> and show a strong enhancement of the phonon linewidth at the CDW wavevector mostly due to Fermi surface nesting. At the harmonic level and by using the PBE functional the instability is then electron-phonon driven.
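A numpy sketch of how the double-delta sum above can be evaluated for a single mode with Gaussian-broadened delta functions; the array shapes and the broadening value are assumptions.

```python
import numpy as np

def phonon_linewidth(eps_k, eps_kq, g2, omega, e_f=0.0, sigma=0.01):
    """eps_k, eps_kq: band energies at k and k+q, shape (N_k, N_bands);
    g2: |g|^2 electron-phonon matrix elements, shape (N_k, N_bands, N_bands);
    omega: harmonic frequency of the mode. All energies in the same units."""
    def delta(x):
        return np.exp(-(x / sigma) ** 2) / (np.sqrt(np.pi) * sigma)
    w_k = delta(eps_k - e_f)       # delta(eps_kn - E_F)
    w_kq = delta(eps_kq - e_f)     # delta(eps_{k+q m} - E_F)
    n_k = eps_k.shape[0]
    return 4.0 * np.pi * omega / n_k * np.einsum('kn,km,knm->', w_k, w_kq, g2)
```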
The phonon patterns connected with these two instabilities are very similar in the CuTe ab-plane.
The only difference is that the distortion of momentum q= q_CDW shifts two parallel CuTe planes in antiphase.
The energy gain obtained by displacing the ions along the directions of the imaginary phonon mode is approximately 1.29 meV per Cu atom in both cases.
The occurrence of imaginary phonon frequencies at the harmonic level is, however, not enough to demonstrate the presence of a CDW as quantum-anharmonic terms in the potential could remove the instability. In order to explore this possibility,
we investigate quantum anharmonic effects within the Stochastic Self-Consistent Harmonic Approximation (SSCHA)<cit.> that has been proven to be very effective in describing anharmonic quantum effects in a plethora of different systems<cit.>.
The quantum anharmonic phonon dispersion is obtained within the SSCHA by calculating the positional free energy (F) Hessian as a function of temperature. We define the temperature dependent dynamical matrix as:
D = M^-1/2∂^2 F/∂R^2| _R_eqM^-1/2
where M is the matrix of the ionic masses M_a with M_ab = δ_ab M_a and R is a cumulative variable for all the ionic positions (see Ref. bianco2017second for a detailed explanation). By Fourier transforming the matrix D and by diagonalizing it, we obtain as eigenvalues the squared quantum anharmonic phonon frequencies.
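In practice this amounts to a mass-weighted diagonalization; a minimal sketch for one Fourier component, where negative eigenvalues of D are reported as imaginary (here negative) frequencies signalling an instability:

```python
import numpy as np

def phonon_frequencies(hessian, masses):
    """hessian: (3N, 3N) free-energy Hessian for one wavevector;
    masses: length-3N array of ionic masses (one entry per Cartesian
    component). Returns signed frequencies in consistent units."""
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    dyn = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)   # M^{-1/2} H M^{-1/2}
    eigvals = np.linalg.eigvalsh(dyn)
    return np.sign(eigvals) * np.sqrt(np.abs(eigvals))
```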
We perform the SSCHA calculation on a 10×1×2 supercell. The results are shown in Fig. <ref> (T=0, bottom panel) and in Fig. <ref> (top panel) as a function of temperature. At T=0 the main effect of quantum anharmonicity is a hardening of the CDW mode. However, the mode still remains imaginary, signalling that at T=0 quantum anharmonicity does not remove the CDW.
The temperature dependence of the quantum anharmonic phonon dispersion is shown in Fig. <ref> (top panel). At T=200 K the phonon dispersion does not display any dynamical instability, meaning that the calculation is already in the undistorted high-T phase.
By plotting the square of the lowest phonon frequency as a function of temperature in Fig. <ref> (bottom panel) we estimate T_CDW≈ 60 K. This critical temperature is approximately 5.6 times smaller than the real one. As the transition occurs only via a change in the quantum free energy Hessian that becomes negative at the transition along the CDW pattern, we find that in our calculation the transition is purely second order, in agreement with experimental data <cit.>.
Two effects may be at the origin of the underestimation of T_CDW. The first one is that the supercell used in the calculation could be too small. However, we have carefully monitored the value of the phonon frequency at q= q_CDW and q= q_CDW^' for supercells of sizes 5× 1× 1, 5× 1× 2, 10× 1× 2
finding that the quantum anharmonic phonon frequency varies by less than 1 cm^-1. This rules out a finite supercell size as the origin of the reduced T_CDW.
The second and most probable reason causing the underestimation of T_CDW is the treatment of the exchange and correlation used. In order to better understand this point we examine more in details the low temperature phase.
§ LOW TEMPERATURE CDW PHASE.
In order to study the structural and electronic properties of the CDW phase, we consider two supercells, the 5× 1× 1 and the 5×1×2, corresponding to instabilities at q= q_CDW^' and
q= q_CDW, respectively.
We first displace the atoms along the unstable phonon patterns and then perform structural optimization (we minimize the classical Born-Oppenheimer forces). The results of the optimization are shown in Tab. <ref>. As can be seen, the structural distortion of the Te atoms is in good agreement with experiments at T=20 K, although the distortion is somewhat underestimated. Both the 5× 1× 1 and the 5×1×2 give comparable 1D distortions.
Since, as we have seen, quantum anharmonic effects are important in this system, reducing T_CDW by more than a factor of 10 with respect to the harmonic calculation, the inclusion of quantum anharmonicity is expected to reduce the distortion. As the quantum anharmonic minimization in this system is very expensive due to the very dense mesh needed, we perform the quantum anharmonic structural optimization with the SSCHA only in the 5× 1× 1 supercell. This is justified as we know that the two supercells lead to practically identical distortion of the Te-Te bond along the CDW direction.
The results of the quantum anharmonic minimization are again shown in Tab. <ref>. As expected, the distortion is substantially reduced and the quantum anharmonic distortion is approximately 41% (21%) of the experimental one at T=295 K (T=20 K).
As it is well known that in low dimensional systems the exchange interaction is not completely screened and semilocal functionals usually underestimate the distortion<cit.>, this is to be expected.
Finally, for completeness, we address the pseudogap feature detected in ARPES<cit.> in the CDW phase. Previous calculations already showed that this feature can be fairly well reproduced if the distortion is large enough <cit.>. As it is typical for a Peierls distortion, the magnitude of the gap opening is linearly related to the CDW distortion.
This means that, as the magnitude of the distortion depends on the exchange and correlation approximation used in the calculation, the size of the pseudogap also will.
We then consider the experimental distorted structure on a 5×1× 2 supercell, calculate the electronic structure and unfold it <cit.> on the CuTe unit cell. A finite Lorentzian linewidth of 20 meV is added to the theoretical unfolded band structure in order to simulate the experimental broadening. The comparison with ARPES data from Ref. <cit.> is also shown. Our calculations reproduce the opening of the CDW with a pseudogap that is of the same magnitude as the experimental one. Small differences occur in the exact value of the gap, probably due in part to the ARPES matrix elements, not explicitly considered in our calculation.
§ ESTIMATION OF CORRELATION EFFECTS VIA DFT+U+V
In order to account for correlation effects
on the electronic structure and the structural properties on equal footing, we model the system in the DFT+U+V formalism within the rotationally invariant scheme first proposed by Dudarev et al. in Ref. PhysRevB.57.1505. Following Ref. Leiria_Campo_2010, the DFT energy functional, E_DFT, is corrected to include on-site and inter-atomic interactions, by adding the term
E_UV = ∑_IU^I/2Tr[𝐧^IIσ (1 - 𝐧^IIσ)]
- ∑^*_I,J,σV^IJ/2Tr[𝐧^IJσ𝐧^JIσ]
where I and J represent atomic sites, the star in the sum operator denotes that for each atom I, J covers all its neighbors up to a given distance, while the on-site parameter U^I, the inter-site V^I,J and the occupation matrix 𝐧^IJσ are defined as in Ref. Leiria_Campo_2010.
The new total energy E_DFT+U+V is written as
E_DFT+U+V = E_DFT + E_UV
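Given the on-site and inter-site parameters and the occupation matrices, the correction term can be evaluated directly; the sketch below assumes the occupation matrices are stored per (I, J, spin) pair and that spin is summed over in both terms.

```python
import numpy as np

def hubbard_correction(U, V, n, spins=('up', 'down')):
    """U: dict {I: U_I}; V: dict {(I, J): V_IJ} over the neighbor pairs kept
    in the starred sum; n: dict {(I, J, s): occupation matrix n^{IJs}}."""
    e_uv = 0.0
    for i, u_i in U.items():                       # on-site term
        for s in spins:
            n_ii = n[(i, i, s)]
            e_uv += 0.5 * u_i * np.trace(n_ii @ (np.eye(len(n_ii)) - n_ii))
    for (i, j), v_ij in V.items():                 # inter-site term
        for s in spins:
            e_uv -= 0.5 * v_ij * np.trace(n[(i, j, s)] @ n[(j, i, s)])
    return e_uv
```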
The on-site and intersite parameter U^I and V^IJ parameters are calculated from first principles, using the linear response method introduced by Timrov et al. in Refs. PhysRevB.98.085127,PhysRevB.103.045141.
We use the atomic wavefunctions (3d for Cu and 5p for Te) read from the pseudopotentials to build the Hubbard projectors. In the calculation, all the neighboring atoms up to the fourth shell were considered. A 8×4×1 momenta grid was necessary to converge the U and V values within 0.1 eV.
The calculated inter and on-site Hubbard values for CuTe in the normal phase are reported in Tab. <ref>.
We find large on-site repulsion parameters of 16.71 eV and 4.32 eV for Cu(3d) and Te(5p) sites, respectively. Furthermore, we observe that interatomic Cu-Te interactions are negligible, while a sizable first-neighbor Te-Te repulsive interaction (0.97 eV) exists. The inclusion of Hubbard parameters importantly modifies the electronic structure, resulting in the Fermi surface shown in Fig.<ref> (top panel, yellow lines). By comparing the ARPES data with the Fermi surface predicted by first principles calculations employing DFT+U+V, we conclude that the first principles on-site and inter-site parameters do not substantially improve the agreement between theory and experiment, especially in regard to the electron pocket around the Γ point.
Finally, we calculate the energy gain in the charge-density wave phase with respect to the normal state with the inclusion of Hubbard parameters, and compare the results to the predictions given by PBE. The results are depicted in Fig. <ref>. We find that the inclusion of correlation effects enhances the CDW energy gain by more than one order of magnitude, i.e. from 1.29 meV/f.u. in PBE to 32 meV/f.u. if both inter- and on-site parameters are included in the calculation, while we obtain an energy gain of 17 meV/f.u. if only on-site terms on Cu and Te are included in the calculation. Correspondingly, the predicted structural distortion due to the charge-density wave is notably enhanced, with a maximum Te-Te dimerization Δ_ Te of the order of 0.9 Å, overestimating the measured values of Ref. stolze2013cute by a factor of ≈ 2.4 at T=20 K. Moreover, the free energy versus Δ profile becomes even more anharmonic, suggesting both an increase of T_CDW at the harmonic level as well as an enhancement of quantum anharmonic effects.
As it was already clear at the PBE level, the charge density wave temperature is the result of a delicate
compensation between the electron-phonon interaction (enhancing the tendency towards CDW) and anharmonicity (suppressing the CDW). Both effects are substantially enhanced by correlation and both are crucial and comparable in magnitude. Within DFT+U+V at the harmonic level, we do indeed estimate T_ CDW as being as large as 6000 K, in stark disagreement with experiments, signalling once more the need of including anharmonicity to obtain results in better agreement with experiments.
§ CONCLUSION
In this work, by performing non perturbative quantum-anharmonic calculations, we studied the CDW formation in CuTe.
Contrary to all existing theoretical calculations in literature <cit.>, we find that semilocal functionals correctly describe the occurrence of CDW in this system.
Previous calculations were unable to describe the CDW instability most likely due to the use of too large an electronic temperature.
We find that the CDW is due to the almost perfect nesting among the quasi 1D Fermi surface sheets extending along the k_y direction resulting in a large electron-phonon interaction and a consequent phonon softening. Quantum anharmonicity reduces this softening but does not suppress the CDW at T=0.
Quantum anharmonic effects reduce the T_CDW by a factor of 10 with respect to the harmonic estimate based on the electronic temperature only.
The calculated T_CDW≈ 60 K, resulting from the combined effect of the electron-phonon interaction and anharmonicity, underestimates the experimental one by a factor ≈ 5.6. Similarly, the quantum anharmonic structural minimization of the CDW phase leads to distorted Te-Te bond lengths in the low temperature phase that are 40% smaller than the experimental ones. These two underestimations are related and suggest that, even if the electron-electron interaction is not crucial for the mechanism of CDW formation, it is relevant to accurately describe the structural data for the low-T phase.
In order to validate this statement we employ the DFT+U+V approximation with on-site and off-site Hubbard parameters calculated ab initio. Within this approximation, the CDW distortion is strongly enhanced and overestimates the experimental one by a factor of 5.6. At the harmonic level, T_CDW≈ 6000 K, approximately 20 times larger than the experimental value. However, anharmonic effects also become substantially larger, underlining once more the need of including quantum anharmonic effects to obtain results in better agreement with experiments.
§ ACKNOWLEDGEMENTS
Co-funded by the European Union-NextGenerationEU, ICSC – Centro Nazionale di Ricerca in HPC, Big Data and Quantum Computing. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
We acknowledge the PRACE and CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support. We acknowledge support from Seal of Excellence (SoE) fellowship promoted by University of Siena
|
http://arxiv.org/abs/2307.03969v2 | 20230708125936 | Impact of noise on inverse design: The case of NMR spectra matching | [
"Dominik Lemm",
"Guido Falk von Rudorff",
"O. Anatole von Lilienfeld"
] | physics.chem-ph | [
"physics.chem-ph"
] |
University of Vienna, Faculty of Physics, Kolingasse 14-16, AT-1090 Vienna, Austria
University of Vienna, Vienna Doctoral School in Physics, Boltzmanngasse 5, AT-1090 Vienna, Austria
University Kassel, Department of Chemistry, Heinrich-Plett-Str.40, 34132 Kassel, Germany
[email protected]
Departments of Chemistry, Materials Science and Engineering, and Physics, University of Toronto, St. George Campus, Toronto, ON, Canada
Vector Institute for Artificial Intelligence, Toronto, ON, M5S 1M1, Canada
Machine Learning Group, Technische Universität Berlin and Institute for the Foundations of Learning and Data, 10587 Berlin, Germany
Despite its fundamental importance and widespread use for assessing reaction success in organic chemistry, deducing chemical structures from nuclear magnetic resonance (NMR) measurements has remained largely manual and time consuming. To keep up with the accelerated pace of automated synthesis in self driving laboratory settings, robust computational algorithms are needed to rapidly perform structure elucidations.
We analyse the effectiveness of solving the NMR spectra matching task encountered in this inverse structure elucidation problem by systematically constraining the chemical search space, and correspondingly reducing the ambiguity of the matching task.
Numerical evidence collected for the twenty most common stoichiometries in the QM9-NMR data base indicates systematic trends of more permissible machine learning prediction errors in constrained search spaces.
Results suggest that compounds with multiple heteroatoms are harder to characterize than others.
Extending QM9 by ∼10 times more constitutional isomers with 3D structures generated by Surge, ETKDG and CREST,
we used ML models of chemical shifts trained on the QM9-NMR data to test the spectra matching algorithms.
Combining both ^13C and ^1H shifts in the matching process suggests twice as permissible machine learning prediction errors than for matching based on ^13C shifts alone.
Performance curves demonstrate that reducing ambiguity and search space can decrease machine learning training data needs by orders of magnitude.
Impact of noise on inverse design: The case of NMR spectra matching
O. Anatole von Lilienfeld
August 12, 2023
===================================================================
§ INTRODUCTION
Current development times of novel molecular materials can span several decades from discovery to commercialization.
In order for humanity to react to global challenges, the digitization<cit.> of molecular and materials discovery aims to accelerate the process to a few years.
Long experiment times severely limit the coverage of the vastness of chemical space, making the development of self-driving laboratories for autonomous robotic experimentation crucial for high-throughput synthesis of novel compounds (Fig.<ref> a))<cit.>.
To keep the pace of automated synthesis, fast and reliable characterization of reaction products through spectroscopic methods is required, an often manual, time-intensive and possibly error-prone task.
One of the most common methods to elucidate the structure of reaction products are nuclear magnetic resonance (NMR) experiments.<cit.>
Through relaxation of nuclear spins after alignment in a magnetic field, an NMR spectrum, characteristic of local atomic environments of a compound, i.e. functional groups, can be recorded.
In particular, ^13C and ^1H NMR experiments are routinely used by experimental chemists to identify the chemical structure or relevant groups just from the spectrum.
For larger compounds, however, the inverse problem of mapping spectrum to structure becomes increasingly difficult, ultimately requiring NMR of additional nuclei, stronger magnets, or more advanced two-dimensional NMR experiments<cit.>.
Computer-assisted structure elucidation algorithms aim to iteratively automatize the structure identification process<cit.>.
Current workflows include repeated predictions of chemical shifts for candidate structure inputs through empirical or ab initio methods<cit.>.
Albeit accurate even in condensed phase through use of plane-waves <cit.> or QM/MM setup <cit.>, the cost of density functional theory (DFT) calculations severely limits the number of candidate structures that can be tested, leaving the identification of unknown reaction products out of reach for all but the smallest search spaces.
Data driven machine learning models leveraging experimental or theoretical NMR databases<cit.> provide orders of magnitude of speedup over ab initio calculations, reaching 1-2 ppm mean-absolute-error (MAE) w.r.t. experiment or theory, respectively<cit.>.
However, while the stoichiometry of the reaction product is usually known, e.g. through prior mass spectrometry experiments, the number of possible constitutional isomers exhibits NP hard scaling in number of atoms, quickly spanning millions of valid molecular graphs already for molecules of modest size (Fig.<ref> b)).
As such, the inverse problem of inferring the molecular structure from an NMR spectrum still poses a major challenge even for rapid solvers.
Recent machine learning approaches tackle the inverse problem using a combination of graph generation and subsequent chemical shift predictions for candidate ranking<cit.>.
First explored by Jonas<cit.>, a Top-1 ranking with 57% reconstruction success-rate was achieved using deep imitation learning to predict bonds of molecular graphs.
Sridharan et al.<cit.> used online Monte Carlo tree search to build molecular graphs resulting in a similar Top-1 ranking of 57.2%.
Huang et al.<cit.> relied on substructure predictions from which complete graphs can be constructed, reaching 67.4% Top-1 accuracy by ranking substructure profiles instead of shifts.
A commonality between all algorithms is the subsequent ranking of candidates using spectra matching or other heuristics.
Consequently, even though the correct query compound could be detected early, similar candidates might be ranked higher, making the ranking process as critical as the candidate search itself.
In this work, we analyse the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem.
As stagnating improvements<cit.> in chemical shift predictions due to limited public NMR data aggravate candidate rankings, results suggest that both the prediction error of machine learning models and the number of possible candidates are crucial factors for elucidation success.
By systematically controlling the size of chemical search space and accuracy of chemical shifts, we find that higher error levels become permissible in constrained search spaces.
Moreover, results indicate that increasing the uniqueness through including both ^13C and ^1H shifts in the matching process, rather than relying on a single type of shift, significantly reduces ambiguity and enhances error tolerance.
To evaluate the spectra matching task throughout chemical compound space, we systematically control the accuracy of 1D ^13C and ^1H chemical shifts of the 20 most common stoichiometries in QM9-NMR<cit.> by applying distinct levels of Gaussian white noise.
Note that while we focus on DFT based 1D NMR in this work, future studies could include experimental data and 2D NMR information.
Comparisons amongst stoichiometries suggest that chemical spaces with increasing amounts of heteroatoms and number of constitutional isomers are harder to characterize than others.
To test the spectra matching method on a large search space, we extended QM9-NMR to 56k C_7O_2H_10 constitutional isomers.
Controlling the chemical shift accuracy through machine learning models trained at increasing training set sizes, performance curves again indicate a trade-off between search space and accuracy.
Hence, as less accurate shift predictions become useful, results show that machine learning training data needs can be reduced by multiple orders of magnitude.
§ THEORY & METHODS
§.§ NMR Spectra Matching
Consider a query ^13C or ^1H spectrum with a set of N possible candidate constitutional isomer spectra.
We chose the squared euclidean distance as a metric to rank candidate spectra against the query spectrum (see SI Fig.3 for comparison against other metrics):
d(δ_q, δ_i) = ∑_j=1^n (δ_q,j - δ_i,j)^2,
with δ being a sorted spectrum of n chemical shifts (^13C or ^1H), q being the query, i being the i-th of N candidates, and j being the j-th chemical shift in a spectrum, respectively.
To use both ^13C and ^1H shifts simultaneously for spectra matching, a total distance can be calculated as follows:
d_combined = d(δ^13C_q, δ^13C_i) + γ· d(δ^1H_q, δ^1H_i),
with γ=64 being a scaling factor determined via cross-validation (see SI Fig.1) to ensure similar weighting.
Final rankings are obtained by sorting all candidates by distance.
The Top-1 accuracy is calculated as the proportion of queries for which the correct candidate is ranked as the closest spectrum.
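As an illustrative Python sketch (function names and the use of NumPy arrays are our assumptions, not the original implementation), this ranking and its Top-1 evaluation could read:

import numpy as np

def spectra_distance(shifts_query, shifts_candidate):
    # Squared Euclidean distance between two sorted shift lists, Eq. (1).
    dq = np.sort(np.asarray(shifts_query))
    di = np.sort(np.asarray(shifts_candidate))
    return float(np.sum((dq - di) ** 2))

def combined_distance(c_q, c_i, h_q, h_i, gamma=64.0):
    # Combined distance of Eq. (2); gamma weights the 1H contribution.
    return spectra_distance(c_q, c_i) + gamma * spectra_distance(h_q, h_i)

def rank_candidates(query, candidates):
    # Indices of candidate spectra sorted by distance to the query, closest first.
    return np.argsort([spectra_distance(query, c) for c in candidates])

def top1_accuracy(queries, candidate_pools, true_indices):
    # Fraction of queries whose correct candidate is ranked closest.
    hits = sum(int(rank_candidates(q, pool)[0] == t)
               for q, pool, t in zip(queries, candidate_pools, true_indices))
    return hits / len(queries)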
§.§ Elucidation performance curves
To analyse the spectra matching elucidation accuracy, we systematically control the number of possible candidates N and the accuracy of chemical shifts, respectively.
For each constitutional isomer set, we choose 10% as queries and 90% as search pool, respectively.
Next, we randomly sample N spectra from the search pool, including the query spectrum.
Each sample size is drawn ten times and the Top-1 accuracy averaged across all runs.
To control the accuracy of chemical shifts, we apply Gaussian white noise (up to 1 or 10 σ for ^1H and ^13C, respectively) or use the machine learning error as a function of training set size (c.f. SI Fig.5 for learning curves).
For each N and chemical shift accuracy, results are presented as elucidation performance curves (c.f. Fig.<ref> a-b)), showing the elucidation success as a function of chemical shift accuracy in terms of mean absolute error (MAE).
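The construction of one point of an elucidation performance curve can be sketched as follows; this is a simplified illustration in which the Gaussian noise emulating the prediction error is applied to the candidate spectra, and all names are ours:

import numpy as np

rng = np.random.default_rng(0)

def performance_point(query_spectra, pool_spectra, n_candidates, sigma, n_repeats=10):
    # Top-1 accuracy for a given candidate pool size N and noise level sigma (ppm).
    # query_spectra and pool_spectra are lists of NumPy arrays of chemical shifts.
    accuracies = []
    for _ in range(n_repeats):
        hits = 0
        for query in query_spectra:
            # Draw N-1 decoy spectra from the search pool and append the true spectrum.
            decoy_ids = rng.choice(len(pool_spectra), size=n_candidates - 1, replace=False)
            candidates = [pool_spectra[i] for i in decoy_ids] + [query]
            # Gaussian white noise emulates the finite accuracy of the candidate shifts.
            noisy = [np.sort(c + rng.normal(0.0, sigma, size=len(c))) for c in candidates]
            dists = [np.sum((np.sort(query) - c) ** 2) for c in noisy]
            hits += int(np.argmin(dists) == len(candidates) - 1)
        accuracies.append(hits / len(query_spectra))
    return float(np.mean(accuracies))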
§.§ Chemical Shift Prediction
We relied on kernel ridge regression (KRR) for machine learning ^13C and ^1H chemical shifts as presented in Ref.<cit.>.
We use a Laplacian kernel and the local atomic Faber-Christensen-Huang-Lilienfeld (FCHL19<cit.>) representation with a radial cutoff<cit.> of 4 Å.
The kernel width and regularization coefficient have been determined through 10-fold cross-validation on a subset of 10'000 chemical shifts of the training set.
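For reference, kernel ridge regression with a Laplacian kernel on precomputed atomic representations can be sketched as below; the generation of the FCHL19 representation itself is assumed to be done elsewhere, and the kernel width and regularization values are placeholders:

import numpy as np

def laplacian_kernel(X, Y, sigma):
    # K_ij = exp(-||x_i - y_j||_1 / sigma) between atomic representation vectors.
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.exp(-d / sigma)

def train_krr(X_train, y_train, sigma=400.0, lam=1e-8):
    # Solve (K + lam * I) alpha = y for the regression coefficients.
    K = laplacian_kernel(X_train, X_train, sigma)
    K[np.diag_indices_from(K)] += lam
    return np.linalg.solve(K, y_train)

def predict_krr(X_train, alpha, X_test, sigma=400.0):
    # Predicted chemical shifts for new atomic environments.
    return laplacian_kernel(X_test, X_train, sigma) @ alpha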
§.§ Data
The QM9-NMR<cit.> dataset was used in this work, containing 130'831 small molecules up to nine heavy atoms (CONF) with chemical shieldings at the mPW1PW91/6-311+G(2d,p)-level of theory.
We used the 20 most common stoichiometries (Fig.<ref> b)), having a minimum of 1.7k constitutional isomers available in the dataset.
To extend the QM9-NMR C_7O_2H_10 constitutional isomers space, we generated 54'641 SMILES using Surge<cit.>.
3D structures have been generated using ETKDG<cit.> and CREST<cit.> using GFN2-xTB/GFN-FF.
Adding the structures to QM9, a total pool size of 56.95k C_7O_2H_10 isomers was obtained.
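As an illustration, the ETKDG embedding step can be performed with RDKit along the following lines; the subsequent conformer refinement with CREST/GFN2-xTB used here is not shown, and the snippet is only a hedged sketch:

from rdkit import Chem
from rdkit.Chem import AllChem

def embed_3d(smiles):
    # Generate an initial 3D conformer for a SMILES string with ETKDG (version 3).
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    params = AllChem.ETKDGv3()
    params.randomSeed = 1
    AllChem.EmbedMolecule(mol, params)
    return mol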
For the training of the chemical shift machine learning models, we selected C_8OH_12, C_8OH_10, C_8OH_14, C_7O_2H_8 and C_7O_2H_12 constitutional isomers, yielding a total of 143k ^13C and 214k ^1H training points, respectively.
§ RESULTS & DISCUSSION
§.§ Spectra matching accuracy with synthetic noise
To analyse the influence of noise and number of candidates on the elucidation success, we applied Gaussian noise to the ^13C and ^1H shifts of C_7O_2H_10, C_5N_3OH_7 and C_8OH_14 constitutional isomers, respectively.
Fig.<ref> a-b) depicts a sigmoidal-shaped trend of Top-1 elucidation accuracies at increasing candidate pool sizes N_QM9 as a function of mean absolute error (MAE).
Note that increasing the maximum candidate pool size leads to an offset of the trend towards less permissible errors.
A possible explanation is the correlation of the density of chemical space with increasing numbers of candidate spectra N<cit.>.
As shift predictions need to become more accurate, limiting N through prior knowledge of the chemical space could be beneficial.
Similar findings have been reported by Sridharan et al.<cit.>, noting that brute force enumerations of chemical space lead to worse rankings than constrained graph generation.
Note that while the trends in ^13C and ^1H elucidation are similar, less error is permissible when using ^1H shifts.
To further reduce the ambiguity, we include both ^13C and ^1H shifts into the matching problem as per Eq.<ref>.
Results suggest 50% and ∼150% more permissible ^13C and ^1H errors when both spectra are considered in the matching process (Fig.<ref> c)).
Similar to how chemists solve the elucidation problem, the inclusion of more distinct properties increases the uniqueness and can improve the elucidation success.
§.§ Extrapolating the search space
Due to the limited amount of constitutional isomers in databases compared to the number of possible graphs faced during inverse design (Fig.<ref> b)), assessing the chemical shift accuracy for successful elucidation is severely limited.
As such, we extrapolate elucidation performance curves to obtain estimates about chemical shift accuracies in candidate pool sizes larger than QM9.
We fit each elucidation performance curve (Fig.<ref> a-b)), respectively, using a smoothly broken power law function:
f(x) = (1+ (x/x_b)^d)^α
with x_b controlling the upper bend and offset, d changing the curvature and α changing the tilt of the function (see SI Fig.2), respectively.
The parameters of Eq.<ref> as a function of N can again be fitted using a power law function (see SI Fig.2) and extrapolated to the total number of graphs N_Surge, respectively.
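Fitting this functional form can be sketched with SciPy as follows; starting values are illustrative assumptions:

import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(x, x_b, d, alpha):
    # Smoothly broken power law of Eq. (4): f(x) = (1 + (x / x_b)**d)**alpha.
    return (1.0 + (x / x_b) ** d) ** alpha

def fit_performance_curve(mae_values, top1_accuracies):
    # Fit Top-1 accuracy as a function of shift MAE for one candidate pool size N.
    p0 = (1.0, 2.0, -1.0)  # initial guess for (x_b, d, alpha)
    popt, _ = curve_fit(broken_power_law, np.asarray(mae_values),
                        np.asarray(top1_accuracies), p0=p0, maxfev=10000)
    return popt  # fitted (x_b, d, alpha), later extrapolated as a function of N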
Results of the extrapolation (Fig.<ref> a-b) dashed) indicate significant differences in elucidation efficiency among stoichiometries.
For instance, C_8OH_14 queries are potentially easier to elucidate than C_5N_3OH_7 structures.
Possible reasons are the limited number of C_8OH_14 graphs compared to millions of C_5N_3OH_7 isomers.
Moreover, the number of heteroatoms of the C_5N_3OH_7 stoichiometry might hamper the characterization when only relying on ^13C or ^1H shifts, respectively.
Hence, to solve the inverse structure elucidation problem using experimental data of compounds larger than QM9, reducing ambiguities through including both ^13C and ^1H shifts as well as reducing the candidate space is critical for elucidation success.
§.§ Trends in chemical space
To analyse the elucidation efficiency throughout chemical space, we applied the Gaussian noise and extrapolation procedure to the 20 most common stoichiometries in QM9 (Fig.<ref> b)).
Fig.<ref> a) shows the MAE required for 95% elucidation success as a function of N_Surge.
Results suggest that less error is permissible for stoichiometries with large N_Surge and fewer carbon atoms.
As such, using only ^13C shifts might not be sufficient to fully characterize the compound.
Again, similar to how chemists use multiple NMR spectra to deduce chemical structures, additional information such as ^1H shifts is beneficial to extend the information content.
In Fig. <ref> b), the error permissiveness of spectra matching using only ^13C shifts (see SI Fig.4 for ^1H) versus combining both ^13C and ^1H is compared, revealing a linear trend between both.
Note that the C_7NOH_7 stoichiometry shows the smallest benefit from adding additional information.
Interestingly, a hierarchy for C_7NOH_X stoichiometries of different degrees of unsaturation is visible, indicating an inverse correlation between number of hydrogens and MAE (Fig. <ref> b) green).
Similar hierarchies are also observed for other stoichiometries such as C_7O_2H_X and C_8OH_X (Fig. <ref> b) blue and orange).
On average, the combination of ^13C and ^1H for spectra matching increases the error permissiveness of ^13C and ^1H by 85% and 261% (see SI Fig.4), respectively.
§.§ Comparison to machine learned shift predictions
To test the elucidation performance using machine learning predictions, we trained ^13C and ^1H KRR models at increasing training set sizes (see SI Fig.5 for learning curves) and predicted chemical shifts of 56k C_7O_2H_10 constitutional isomers.
Results again show similar trends as observed with Gaussian noise (Fig.<ref> a-b)), however, indicate more permissive accuracy thresholds.
For instance, KRR predictions at 2 ppm MAE can identify 64% of queries rather than only 17% suggested by the Gaussian noise experiment.
The difference could be explained by the systematic, non-uniform nature of the QM9<cit.> chemical space, influencing the shape and extrapolation of elucidation performance curves in Fig.<ref>.
Moreover, Gaussian noise is applied to all shifts at random compared to possibly more systematic machine learning predictions.
Note that the trade-off between error and N is consistent and that the exact parameters will depend on the machine learning model and the finite sampling of constitutional isomer space.
To model possible experimental noise on query spectra, we apply Gaussian noise to query spectra and evaluate the elucidation performance of the best performing machine learning model (see insets in Fig.<ref> a-b)).
Results indicate a halving of elucidation accuracy when the query spectrum contains up to 2 ppm MAE_Q in ^13C and 0.15 ppm MAE in ^1H error, respectively.
Thus, in the presence of experimental measurement noise even higher prediction accuracies might be necessary.
Combining both ^13C and ^1H spectra for matching improves the elucidation performance up to 90% (Fig.<ref> e)).
Again, the combination of spectra for elucidation highlights the effectiveness of reducing the ambiguity of the matching problem by including additional properties.
Investigating potential strategies to reduce the constitutional isomer search space, we constrained N based on functional groups (see SI Table 1).
Randomly selecting one or two functional groups present in each query, N can be reduced by 50% and 62% on average (see Fig.<ref> d) inset for distributions), respectively.
Results in Fig.<ref> c-d) indicate an increase of the elucidation accuracy by 5% in ^13C and up to 10% for ^1H, respectively, in agreement with the elucidation performance in Fig.<ref> a-b).
Note that the knowledge of two functional groups only led to marginal improvements.
However, fragmentation could be more beneficial for larger compounds than present in QM9<cit.>, as reported by Yao et al.<cit.>.
Using both ^13C and ^1H shifts on the reduced search space only led to marginal improvements of 0.5% over the results of the full search space.
§.§ Balancing search space and accuracy
We use performance curves to analyse the relationship between the elucidation performance of C_7O_2H_10 queries, machine learning prediction errors and candidate pool sizes N.
The systematic decay of performance curves (Fig.<ref> red and blue) again demonstrates that constraining N with prior knowledge allows for less accurate shift predictions to be applicable.
Extrapolating the performance curves indicates a machine learning MAE of 0.93 ppm to correctly rank 90% of queries out of 56k possible candidates (Fig.<ref> red), 0.02 ppm lower than suggested by Gaussian noise.
To reach an MAE of 0.93 ppm, four million training instances are required (Fig.<ref> orange).
Using both ^13C and ^1H shifts requires two orders of magnitude less training data (Fig.<ref> blue).
As such, facing expensive experimental measurements and ab initio calculations, more effective inverse structure elucidation could be achieved by balancing machine learning data needs through reduced search spaces and incorporation of additional properties.
§ CONCLUSION
We have presented an analysis of the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem.
By systematically controlling the predictive accuracy of ^13C and ^1H chemical shifts, we found consistent trends throughout chemical compound space, suggesting that higher errors become permissible as the number of possible candidates decreases.
Note that while we relied on 1D ab initio NMR data, similar analysis could be performed using 1D or 2D experimental spectra.
Applications to the most common constitutional isomers in QM9 highlight that chemical spaces with many heteroatoms are harder to characterize when only relying on a single type of chemical shift.
Using both ^13C and ^1H chemical shifts increases the error permissiveness of ^13C and ^1H by 85% and 261% on average, respectively.
Machine learning predictions for 56k C_7O_2H_10 compounds showed that using both ^13C and ^1H shifts increased elucidation success to 90%, compared to only 64% and 36% when ^13C or ^1H were used alone, respectively.
The usefulness of the analysis is expressed via performance curves, showing that training demands can be reduced by orders of magnitude compared to relying on specific shifts alone.
We believe that as the accuracy of machine learning models to distinguish spectra is limited, constrained search spaces or inclusion of more distinct properties are necessary to improve candidate rankings.
Rather than solely relying on more accurate models, future approaches could include explicit knowledge of chemical reactions, functional groups or data from mass spectrometry, infrared- or Raman spectroscopy<cit.>, respectively.
Finally, explicitly accounting for atomic similarities and chemical shift uncertainties via the DP5 probability might further increase the confidence in structure assignments<cit.>.
§ ACKNOWLEDGEMENT
O.A.v.L. has received funding from the European Research Council (ERC)
under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772834).
O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair.
Icons in Fig.<ref> from DBCLS, Openclipart and Simon Dürr from bioicons.com under CC-BY 4.0 and CC0, respectively.
§ DATA & CODE AVAILABILITY
The QM9-NMR dataset is openly available at <https://moldis.tifrh.res.in/data/QM9NMR>.
The code and additional data used in this study is available at <https://doi.org/10.5281/zenodo.8126380>.
§ CONFLICT OF INTEREST
The authors have no conflict of interest.
§ REFERENCES
ieeetr
|
http://arxiv.org/abs/2307.07378v1 | 20230714143658 | Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing | [
"Xiao Liu",
"Alessandra Mileo",
"Alan F. Smeaton"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
Dublin City University, Glasnevin, Dublin 9, Ireland
[email protected]
Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing
Xiao Liu, Alessandra Mileo and Alan F. Smeaton
=================================================================================
The development of computer vision and in-situ monitoring using visual sensors allows the collection of large datasets from the additive manufacturing (AM) process. Such datasets could be used with machine learning techniques to improve the quality of AM. This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM and second, applying active learning techniques to the developed classification model. This allows the construction of a human-in-the-loop mechanism that reduces the amount of labelled data required for training and helps generate training data.
Keywords: Convolutional neural networks, additive manufacturing, defect classification, active learning
§ INTRODUCTION
Large and openly available datasets of annotated images containing up to millions of training examples such as Pascal VOC <cit.> are available to machine learning researchers for many applications. This has enabled huge improvements in machine learning over recent years. By contrast, such openly available datasets are not available in the domain of Additive Manufacturing (AM) or 3D printing because labelled samples are difficult, expensive, and time-consuming to obtain as shown in <cit.> and <cit.>. As a result of poor data availability,
researchers in AM often have to use only a limited amount of labelled samples for training tasks
before then leveraging a large amount of unlabelled image data. Some researchers have called this the “small data challenge in the big data era” <cit.>.
To overcome this challenge, we present a method that applies transfer learning and fine-tuning on a CNN-based neural network model to achieve accurate classification of manufacturing defects. This uses a dataset of images of the melt pool, created
from the interaction between a laser and the materials used in manufacturing, taken during the AM process. Structural defects in the resulting output can sometimes be detected during manufacture from observations of the melt pool.
Our technique involves using active learning algorithms to reduce the number of labelled samples required in the training process. We perform automatic labelling using the model to generate larger datasets of labelled images from unlabelled samples, for use in training.
§ METHODS
Transfer learning is a method in which a neural network model is trained using data from a source domain and the trained model is later applied to a target domain that is different from the source. This allows rapid progress in re-training and significantly reduces the required number of training samples in the target domain. It is commonly used in computer vision tasks such as classification to support improved performance in domains which are data-poor.
In recent years, transfer learning has proved to be effective in the task of defect classification in AM, such as the work presented in <cit.> and <cit.> where transfer learning and fine-tuning were applied to the training of CNN based neural network architectures.
Active learning <cit.> is a technique for labelling data that selects and prioritises the most informative data points to submit to an annotator for labelling. Such prioritised data points have the highest potential impact on the supervised training of a machine learning model, thus accelerating the training process. The combination of transfer learning and active learning allows leveraging small amounts of labelled data to improve the performance of the training process of a deep learning model.
§ CLASSIFICATION EXPERIMENTS IN ADDITIVE MANUFACTURING
To investigate the potential for transfer learning and active learning in the task of defect detection in AM, a case study was carried out using the open image dataset from <cit.>. This contains 4,000 images, manually divided into 2 different defect detection classes in AM. The images in this dataset are clearly separated into 3 balanced subsets for training (2,000), testing (1,000) and validation (1,000).
To conduct experiments, we employed a VGG16-based classifier from previous work which proved to be accurate in the task of defect classification on images generated from emission monitoring during additive manufacturing <cit.>. This classifier relies on transfer learning in which 13 convolutional layers from a pre-trained VGG16 model are used for feature extraction; the weights in these layers had been trained using ImageNet data. After the convolutional layers, 2 dense layers with ReLU activation function are added, followed by 1 dense layer as the output layer using Sigmoid as the activation function, since the targeted dataset is divided into 2 classes for binary classification. In the original paper <cit.>, the best classification performance is obtained using a VGG16-based CNN model, which is the reason we do not use a more recent model such as ResNet. We consider that as a baseline for further investigation in this study.
The tuning of hyperparameters involves adjusting the optimiser, learning rate, batch size and training epochs. We test 3 optimisers, Adam, SGD and RMSprop, in combination with learning rates in a range from 10^-2 to 10^-5. We have also conducted training using different batch sizes (4, 32, 64) and training epochs (30, 60, 120). The cost function used in all tests is binary cross entropy. To reduce overfitting, weight regularisers are added to the 2 dense layers with the ReLU activation function mentioned above. The weight decay regulariser, also known as the L2 regulariser, which calculates the sum of the squared weights, is applied when initialising the Keras model. This hyperparameter is tuned in a range from 10^-1 to 10^-4 and tested multiple times until no obvious overfitting issue appears in training and validation.
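A minimal Keras sketch of the transfer-learning classifier described above is given below; the widths of the two dense layers, the input resolution and the exact regularization coefficient are assumptions, since they are not stated here:

import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_classifier(l2_coeff=1e-3, learning_rate=1e-4):
    # VGG16 convolutional base pre-trained on ImageNet, used as a frozen feature extractor.
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
    base.trainable = False
    model = models.Sequential([
        base,
        layers.Flatten(),
        # Two ReLU dense layers with L2 weight decay, as described above.
        layers.Dense(256, activation="relu", kernel_regularizer=regularizers.l2(l2_coeff)),
        layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l2(l2_coeff)),
        # Single Sigmoid output for the binary defect classification task.
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model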
After tuning the hyperparameters for multiple combinations, the best-performing combinations for the 3 optimisers are shown in Table <ref> together with classification results on the validation dataset in comparison with the baseline from <cit.>. These initial tests were performed to check how adaptive our approach is on this dataset. The results show that all 3 optimisers can reach a value around 98% of the validation accuracy and our classification model is well-adapted to this dataset. The results also show that for this dataset a smaller batch size used in the training process, such as 4, gives better performance; this can be explained as smaller batch sizes require more frequent weight updates during training. In turn this can help the model adjust its parameters more quickly and respond to changes in the data distribution, which increases the model's ability to adapt to a new dataset. Finally, although not shown here, accuracy is stable throughout the training showing no overfitting.
§ ACTIVE LEARNING EXPERIMENTS IN ADDITIVE MANUFACTURING
Having developed a classifier which uses domain transfer across AM image datasets, we extended training to include active learning applied to further investigate classification performance during the progression of AL iterations.
The second experiment was performed in a series of steps of (1) active sample selection, (2) query for label, (3) train with queried sample, and (4) validate for current query iteration. The cycle iterates until a human supervisor decides to complete the training phase when validation accuracy achieves a target level.
Here we apply a pool-based sampling scenario and an uncertainty sampling query strategy <cit.>. This is the most commonly used query strategy to start generalised sampling on this particular AM dataset.
The implementation of active learning uses Python 3 and Google Colab. During the experiment, a classifier model is initialised and the optimiser chosen is SGD, as we found this gives more stable performance in the validation test and shows minimal overfitting even when training is continued long after convergence. While Adam and RMSprop converge faster, there are larger fluctuations in the validation and minor overfitting after training reaches convergence. In addition, though SGD yields a result lower than the other 2 optimisers, it has slightly better potential for improvement by applying active learning.
During this experiment, 2,000 training samples were fed to the classifier with a total of 40 queries and for each query 50 samples were actively selected by the uncertainty sampling query strategy.
The selected and queried samples were assigned a label by an annotator after which the newly labelled samples were used to fine-tune the classifier to improve performance. This was evaluated using classification accuracy on the validation dataset at the end of each query iteration and later we show results on the test set.
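The query loop can be sketched as follows; this is an illustrative pool-based uncertainty-sampling implementation with our own function names, in which the annotator is emulated by looking up the known labels:

import numpy as np

def uncertainty_sampling_loop(model, x_pool, y_pool, x_val, y_val,
                              n_queries=40, query_size=50, epochs=5):
    # Pool-based active learning with least-confidence sampling for a sigmoid classifier.
    labelled_x, labelled_y = [], []
    unlabelled = np.arange(len(x_pool))
    val_history = []
    for _ in range(n_queries):
        # The distance of the predicted probability from 0.5 measures confidence.
        probs = model.predict(x_pool[unlabelled], verbose=0).ravel()
        uncertainty = -np.abs(probs - 0.5)
        picked = unlabelled[np.argsort(uncertainty)[-query_size:]]
        # Query the annotator for labels (here taken from the known pool labels).
        labelled_x.extend(x_pool[picked])
        labelled_y.extend(y_pool[picked])
        unlabelled = np.setdiff1d(unlabelled, picked)
        # Fine-tune the classifier with all labelled samples collected so far.
        model.fit(np.array(labelled_x), np.array(labelled_y), epochs=epochs, verbose=0)
        val_history.append(model.evaluate(x_val, y_val, verbose=0)[1])
    return val_history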
Following the inclusion of active learning, validation accuracy in each query iteration is shown in Figure <ref>, where results show that, with the aid of active learning, the model reaches convergence after the 13th query and the validation accuracy is around 98%. More specifically, the calculated mean value from the 13th to 40th queries is 0.981 with a standard deviation of 0.0246 and a peak of 0.990. This is slightly higher than the result of the SGD optimiser based model shown in Table <ref> and 1% higher than the baseline. Overall performance after convergence is also relatively stable. Results also show that the model only needs the first 650 most informative samples to achieve the best performance, which is only 32.5% of the total 2,000 labelled training data.
This trained model was used to classify the labels on the testing dataset mentioned in Section <ref>, which is a balanced dataset consisting of 1,000 samples and the results are shown in Table <ref>.
§ CONCLUSIONS
This paper presents an investigation into performance of a computer vision based classification task on a dataset from the additive manufacturing process. We use a CNN based classifier in combination with transfer learning and active learning strategies. We improved the overall validation accuracy to about 98%. We also conducted experiments to investigate the approximate minimum number of labelled samples needed to reach convergence in training. In future work we plan to further investigate the sampling strategies for active learning especially regarding class imbalance problems. We will involve approaches from semi-supervised learning to reinforce the labelling and self training as an extension to the current active learning mechanism.
apalike
|
http://arxiv.org/abs/2307.06254v1 | 20230712154919 | Dynamics around the binary system (65803) Didymos | [
"R. Machado Oliveira",
"O. C. Winter",
"R. Sfair",
"G. Valvano",
"T. S. Moura",
"G. Borderes-Motta"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Dynamics around the binary system (65803) Didymos
August 12, 2023
==================================================
R. Machado Oliveira^1, O. C. Winter^1, R. Sfair^1, G. Valvano^1, T. S. Moura^1, G. Borderes-Motta^2.
^1Grupo de Dinâmica Oribtal e Planetologia, São Paulo State University, UNESP, Guaratinguetá, CEP 12516-410, São Paulo, Brazil
^2Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Leganés, 28911, Madrid, Spain
First Author e-mail: [email protected]
ABSTRACT: Didymos and Dimorphos are, respectively, the primary and secondary asteroids of a binary system that belongs to the Near-Earth Asteroid (NEA) population. They are the targets of the Double Asteroid Redirection Test (DART), the first test mission dedicated to the study of planetary defense, whose main goal is to measure the changes caused after the secondary body is hit by a kinetic impactor. The present work conducts a study, through numerical integrations, on the dynamics of massless particles distributed in the vicinity of the two bodies. An approximate shape for the primary body was considered as a model of mass concentrations (mascons) and the secondary was considered as a massive point.
Our results show the location and size of stable regions, and also their lifetime.
KEYWORDS: Asteroids; Binary System; Computational Simulations; Mascons.
§ INTRODUCTION
The sub-kilometer asteroid Didymos and its moon Dimorphos form a binary system
classified as a Near-Earth Asteroid and member of both the Apollo and Amor
group[https://ssd.jpl.nasa.gov/sbdb.cgi?sstr=2065803].
The system was chosen as the target for the Double Asteroid Redirection Test (DART) mission,
the first one dedicated to study of planetary defense <cit.>. The main DART objective is to test
the asteroid deflection technique by intentionally impacting Dimorphos and measuring the changes in its orbit.
Ground-based observatories will also monitor these changes, and more precise
information about the orbital evolution of Dimorphos after the impact will be assessed by the Hera
mission[https://www.esa.int/Safety_Security/Hera/Hera].
Here we investigate the dynamics around the binary system, taking into account the
irregular shape of the primary and also the gravitational disturbance of Dimorphos.
Our goal is to analyze the orbital evolution of test particles
in the vicinity of the system, and verify the existence of stable regions where particles can
remain for an extended period and eventually pose a threat to the mission.
<cit.> made a similar study, but considering spacecraft (CubeSats) instead of natural particles. They adopted a gravitational model of Didymos, including spherical harmonics up to order and degree two, and integrated the system for short timescales (a few months).
Here we consider a more complex gravitational model based on a polyhedron shape model and look for orbital stability on much longer timescales (several years).
This paper is organized as follows: in Section <ref> we present the shape model adopted for Didymos,
while the numerical setup and the main results are described in Section <ref>.
The last section presents our final remarks.
§ SHAPE MODEL
Given the irregular shape of Didymos, particles in the vicinity of the body are subject to a gravitational potential
that cannot be reasonably modeled as originating from a point mass or even an ellipsoid.
However, the proper shape model for Didymos is not publicly available, and given that the object resembles the asteroid Ryugu,
we assumed the shape as a scaled version of the latter.
From ground-based radar observations, <cit.> determined an equivalent diameter of 780 m
and <cit.> reported a bulk density as 2.17 gcm^-3 for the primary.
Starting from a polyhedral representation of Ryugu composed of 574 vertices and 1144 triangular faces <cit.>,
we determined the scaling factor that must be applied to Ryugu's shape to match the expected mass and volume for Didymos.
A comparison between the two models is shown in Fig. <ref>.
Instead of computing the gravitational potential directly from the polyhedron <cit.>,
our simulations were carried out with the more efficient, and yet precise,
approach of mass concentrations – mascons <cit.>, implemented in the N-BOM package <cit.>.
From a regularly spaced tridimensional grid that encompasses the object, the mascons are those grid points
that lie within the shape model. Assuming that Didymos is homogeneous,
the total mass of the asteroid is evenly distributed among the 26070 mascons.
In our model, Didymos rotates with a period of 2.26 hours <cit.>. Dimorphos is represented as a point mass of 3.45×10^9 kg orbiting the primary with a semimajor axis of 1178 m and an eccentricity of 0.05 <cit.>.
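The acceleration entering the equations of motion is then simply a sum over point-mass contributions; a minimal sketch (names are illustrative, and the fictitious-force terms needed when integrating in the rotating body-fixed frame are omitted) is:

import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mascon_acceleration(r, mascon_positions, mascon_mass):
    # Acceleration at position r [m] due to N equal-mass mascons of a homogeneous body.
    dr = mascon_positions - r                      # (N, 3) vectors towards each mascon
    dist3 = np.linalg.norm(dr, axis=1) ** 3
    return G * mascon_mass * np.sum(dr / dist3[:, None], axis=0)

def total_acceleration(r, mascon_positions, mascon_mass, r_dimorphos, m_dimorphos):
    # Add the perturbation of Dimorphos, treated as a single point mass.
    dr = r_dimorphos - r
    return (mascon_acceleration(r, mascon_positions, mascon_mass)
            + G * m_dimorphos * dr / np.linalg.norm(dr) ** 3)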
§ STABLE REGIONS
In order to explore the stability in the neighbourhood of the two bodies, we randomly distributed 20 thousand massless particles in a radial range between 0.45 and 1.45 km.
The initial conditions of the particles were determined by Keplerian orbits with eccentricity and inclination equal to zero.
The system was integrated for five years, which is the approximated interval between the DART[https://dart.jhuapl.edu/Mission/index.php] and HERA[https://www.heramission.space/] missions.
The criterion for removal from the system was a collision or an ejection. For collisions we took into account the approximate dimension of the primary's mascon model and the estimated mean radius of the secondary given by <cit.>.
In the case of ejection, a maximum radial distance of 6 km from the primary was adopted, the same as considered in <cit.>. This distance is about five times the value of the semi-major axis of Dimorphos.
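A simplified sketch of the particle initialization and removal criteria is given below; the collision test against Didymos is approximated here by a single effective surface radius, whereas the simulations use the dimension of the mascon model:

import numpy as np

rng = np.random.default_rng(42)

def initial_conditions(n_particles, m_primary, r_min=450.0, r_max=1450.0):
    # Planar, circular (e = 0, i = 0) Keplerian orbits with random radius and phase.
    G = 6.674e-11
    radius = rng.uniform(r_min, r_max, n_particles)            # [m]
    phase = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    v_circ = np.sqrt(G * m_primary / radius)                   # circular orbital velocity
    pos = np.stack([radius * np.cos(phase), radius * np.sin(phase),
                    np.zeros(n_particles)], axis=1)
    vel = np.stack([-v_circ * np.sin(phase), v_circ * np.cos(phase),
                    np.zeros(n_particles)], axis=1)
    return pos, vel

def removal_flag(r_particle, r_dimorphos, r_surf_primary, r_mean_secondary, r_eject=6000.0):
    # 0: active, 1: collision with Didymos, 2: collision with Dimorphos, 3: ejection.
    if np.linalg.norm(r_particle) <= r_surf_primary:
        return 1
    if np.linalg.norm(r_particle - r_dimorphos) <= r_mean_secondary:
        return 2
    if np.linalg.norm(r_particle) >= r_eject:
        return 3
    return 0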
The results are summarized in Fig. <ref>. It shows the particles at their initial positions and the colors indicate the fate of the particles at the end of the simulations.
After five years of simulations, 8.74% were ejected, 30.3% collided with Didymos, 56.94% collided with Dimorphos and just 4.02% survived.
Some considerations can be made from the features of this plot.
The particles that collided are initially spread over several regions (left plot).
However, there is a pattern separating the two groups of collisions.
As can be seen in the right plot, the particles that collided with Dimorphos are mainly located in two spiral arms that depart from the satellite. In the case of particles that collided with Didymos, they can be separated into two groups: one in a disk inside the orbit of Dimorphos and the other involving the stable regions around the triangular Lagrangian points.
On the other hand, the particles that were ejected and those that survived are found initially in specific regions. The ejected particles were preferentially in places close to the orbit of Dimorphos. Their fate was determined by close encounters with the secondary.
In the case of the survivors, Fig. <ref> (left plot) shows that they are initially located in the coorbital region, around the lagrangian points L_4 and L_5, and also in arcs of a ring formed between the two bodies.
The particles around one of the two triangular lagrangian points present behaviours expected in binary systems. Since these are stable equilibrium points for the given mass ratio of the binary,
it is expected to have survivors around them. These particles show a tadpole-like trajectory, as in the example given in Fig. <ref> (left plot).
The trajectories of the particles in the stable regions close to Didymos show a very narrow amplitude of radial variation. An example is given in Fig. <ref> (right plot).
An idea of the time evolution of the system as a whole can be seen in Fig. <ref>.
This plot shows the lifetime of the particles according to their initial location. The survivors are indicated in green.
The system as seen in Fig. <ref> is quickly defined, since 79% of the particles are removed within 10 days.
Those that were removed in just a few hours are the ones that were close to the surface of Didymos and those that were in spiral arms associated with Dimorphos. The satellite is responsible for the large majority of this fast removal within the first orbital periods of the particles, by collision with Dimorphos or ejection from the region.
Note in Fig. <ref> that the particles which live longer (very dark color) are in the neighbourhood of the survivors (in green). Some of these particles can stay alive for more than one year of integration.
The orbital period of the particles in the dark ring shown in Fig. <ref> is between five and six hours.
A comparison of these results with those of <cit.> shows that the general structure is similar. However, in their work a large stable ring of initial conditions close to Didymos was found, which is not the case in our results. That is certainly due to the large difference in timescales considered (a few months for them, while some years in ours), but the effects due to different gravitational potential models also contributed to the evolution of the trajectories with initial conditions closer to Didymos.
§ FINAL COMMENTS
In this work we explored the long term stability of massless particles in the Didymos-Dimorphos system.
In order to take into account the gravitational contribution of Didymos, we developed an approximate shape model in terms of a polyhedron composed of triangular faces and then transformed it into a mascon model.
The results clearly show the location and size of the stable regions.
An analysis of the lifetime of the particles indicated that the vast majority of the particles with unstable trajectories are removed in just ten days.
Most of the particles in the stable regions have tadpole-like trajectories around the triangular Lagrangian equilibrium points.
§ ACKNOWLEDGEMENTS
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) - Proc. 2016/24561-0 and Proc. 2020/14307-4, Conselho Nacional de Desenvolvimento Cientifico e Tecnológico (CNPq) - Proc. 120338/2020-3 and Proc. 305210/2018-1
apalike
|
http://arxiv.org/abs/2307.04320v1 | 20230710032804 | Collimated hot electron generation from sub-wavelength grating target irradiated by a femtosecond laser pulse of relativistic intensity | [
"Kamalesh Jana",
"Amit D. Lad",
"Guo-Bo Zhang",
"Bo-Yuan Li",
"V. Rakesh Kumar",
"Moniruzzaman Shaikh",
"Yash M. Ved",
"Min Chen",
"G. Ravindra Kumar"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
|
http://arxiv.org/abs/2307.04495v1 | 20230710113346 | Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML | [
"Simon Raedler",
"Juergen Mangler",
"Stefanie Rinderle-Ma"
] | cs.SE | [
"cs.SE",
"cs.AI",
"H.1.0; I.2.4"
] |
Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML
[1,2]Simon [email protected]
1]Juergen [email protected]
1]Stefanie [email protected]
[1]TUM School of Computation, Information and Technology; Department of Computer Science, Technical University of Munich, Boltzmannstraße 3, Garching b. München, 85748, Germany
[2]Business Informatics Group, Technical University of Vienna, Favoritenstraße 9-11/194-3, Vienna, 1040, Austria
Motivation: Systems Engineering is a transdisciplinary and integrative approach, that enables the design, integration, and management of complex systems in systems engineering life cycles. In order to use data generated by cyber-physical systems (CPS), systems engineers cooperate with data scientists, to develop customized mechanisms for data extraction, data preparation, and/or data transformation. While interfaces in CPS systems may be generic, data generated for custom applications must be transformed and merged in specific ways so that insights into the data can be interpreted by system engineers or dedicated applications to gain additional insights. To foster efficient cooperation between systems engineers and data scientists, the systems engineers have to provide a fine-grained specification that describes (a) all parts of the CPS, (b) how the CPS might interact, (c) what data is exchanged between them, (d) how the data interrelates, and (e) what are the requirements and goals of the data extraction. A data scientist can then iteratively (including further refinements of the specification) prepare the necessary custom machine-learning models and components.
Methods: This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering in the formalization of the systems modeling language SysML. The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of data processing steps within the machine learning support.
Results: By consolidating the knowledge of domain and machine learning experts, a powerful tool to describe machine learning tasks by formalizing knowledge using the systems modeling language SysML is introduced. The method is evaluated based on two use cases, i.e., a smart weather system that allows to predict weather forecasts based on sensor data, and a waste prevention case for 3D printer filament that cancels the printing if the intended result cannot be achieved (image processing). Further, a user study is conducted to gather insights of potential users regarding perceived workload and usability of the elaborated method.
Conclusion: Integrating machine learning-specific properties in systems engineering techniques allows non-data scientists to understand formalized knowledge and define specific aspects of a machine learning problem, document knowledge on the data, and to further support data scientists to use the formalized knowledge as input for an implementation using (semi-) automatic code generation. In this respect, this work contributes by consolidating knowledge from various domains and therefore, fosters the integration of machine learning in industry by involving several stakeholders.
Acknowledgments
This project has been partially supported and funded by the Austrian Research Promotion Agency (FFG) via the Austrian Competence Center for Digital Production (CDP) under the contract number 881843.
§ INTRODUCTION
Leveraging data to allow experts to make informed decisions during the product lifecycle of a product has recently been defined as data-driven engineering <cit.>.
The knowledge required for implementing data-driven engineering can be characterized in a two-fold way <cit.>, i.e., by i) profound machine learning skills with respect to processing and analytics of data and implementation of algorithms, and ii) by domain knowledge regarding the product of interest, relevant product lifecycle data, and related business processes with the entangled IT infrastructures to identify data provenance and information flows.
Regarding i) profound machine learning skills, a recent industrial survey revealed that companies have few machine learning experts and too little knowledge to implement solutions themselves. Further, few experts are available on the market <cit.>.
To still connect the domain and machine learning knowledge, various methods have been recently proposed in literature <cit.>.
However, these methods lack support for defining machine learning tasks and do not sufficiently represent the perspective of engineers.
Additionally, the methods mainly integrate engineering methods into data science methodologies supporting data scientists rather than allowing engineers to apply the methods to support the elaboration of machine learning support.
Therefore, this work aims to integrate machine learning knowledge into systems engineering to support engineers in the definition of machine learning tasks, to consequently enable data-driven engineering and, ultimately, to support the product development for the definition of prerequisites for the machine learning integration. Particularly, means of Model-Based Engineering (MBE) are adapted to define tasks for data-driven engineering by leveraging data from the product lifecycle of a system.
The method of this work builds upon the systems modeling language SysML <cit.>, a general-purpose modeling language allowing to formalize a system from various viewpoints and disciplines. The interdisciplinary formalization of systems knowledge refers to the term Model-Based Systems Engineering (MBSE) <cit.>. Additionally, the CRISP-DM <cit.> methodology is used as a basis for the organization of the machine learning task definition. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is a methodology consisting of common approaches used by data mining professionals to work out a data mining project from inception (requirements and business understanding) through processing (data understanding, data preparation and modeling) to evaluation and deployment.
Ultimately, the method proposed in this work aims to formalize machine learning tasks during product development and to use the formalized knowledge to derive parts of the machine learning and to guide the implementation, respectively. The method is evaluated using a case study representing a weather station with multiple subsystems to predict weather forecasts and a second study to prevent wasting of 3D printer filament by canceling the printing if the intended result cannot be achieved.
The contribution of this work is manifold:
* The proposal of a SysML metamodel extension to include stereotypes that are used to describe machine learning functions for domain-specific data objects
* A method that fits to the latest research areas of the modeling community and is called MDE4AI <cit.>
* A means of structuring the models based on the CRISP-DM methodology.
* Two case studies using the proposed concepts for modeling machine learning support based on simple input data, followed by a discussion of the strengths and weaknesses of the method.
* A user study showing the workload and usability of the method as rated by experts and computer scientists.
This work lays a foundation for allowing non-programmers to define machine learning tasks by formalizing knowledge from the problem domain into a high-level model and to communicate formalized knowledge.
Additionally, the semantic connection of data from various Product-Lifecycle Management (PLM) <cit.> sources allows describing the origin and composition of data relations.
With the availability of such models, the goal is to support the automatic decomposition of SysML models and the (semi-)automatic generation of executable machine learning modules.
This work constitutes an extension of our previous work presented in
<cit.> and expands <cit.> in several ways by
* providing more extensive background information to foster understanding.
* extending the presented method with a generic and fine-grained sample of the modeling method.
* applying the method in two case studies from industry.
* conducting a user study on the perceived workload and usability of mechanical engineers and computer scientists
* discussing advantages and disadvantages of the method in a more thorough way.
The remainder of this paper is structured as follows: Section <ref> presents the background regarding MBSE, data science methodologies and related work of data-driven engineering. In Section <ref>, the elaborated method is introduced in detail and evaluated based on two case studies in Section <ref>. Further, a user study is presented in Section <ref> that evaluates the perceived workload and the usability of the method with mechanical engineers and computer scientists.
Based on the findings of the evaluation and the user study, an extensive discussion on advantages and disadvantages is presented in Section <ref>. Finally, the study is summarized in conclusion with future remarks in Section <ref>.
§ BACKGROUND
First, the concepts of model-based systems engineering (MBSE) and the systems modeling language SysML are explained. Second, machine learning and the CRISP-DM <cit.> methodology are introduced, acting as a basis for the method presented in Section <ref>. Next, related methods are depicted with special focus on data-driven engineering. Finally, Section <ref> presents a summary of the background.
§.§ Model-Based Systems Engineering and SysML
Systems engineering, particularly MBSE, aims to integrate various engineering disciplines in product development to establish a single source of truth by formalizing system requirements, behavior, structure and parametric relations of a system. Conventional systems engineering focuses on storing artifacts in several (text) documents that are maintained in case of changes. In a model-based method, the relevant information to describe an abstract system is stored in a model <cit.>. The literature concerning graphical MBSE methods promises to increase design performance while supporting the communication of relevant stakeholders of a system <cit.>.
MBSE is a term explicitly considering aspects of a system. Nevertheless, other terms can be considered interchangeable depending on the level of automation and the focus of the application[See <https://modelling-languages.com/clarifying-concepts-mbe-vs-mde-vs-mdd-vs-mda/> for a discussion.]. Independent of the level of automation and the focus of the modeling language, a metamodel defines the modeling concept, relations and all possible instances of a specific set of models. Models are instances of metamodels describing a specific system. The model characteristics must match all aspects of the associated metamodel. However, extensions such as additional attributes can be added directly on a model without changing the metamodel. If a metamodel does not represent an aspect, an extension for a specific group of use cases can be defined using so-called stereotypes <cit.>. A stereotype is a means of modeling to extend metaclasses by defining additional semantics for a specific class concept. A metaclass is a class describing a set of classes, e.g. the metaclass block is a general purpose structuring mechanism that describes a system, subsystem, logical or physical component without the software-specific details implicitly given in UML structured classes <cit.>.
The use of stereotypes in modeling methods have been proven to support the understanding and standardization of a model <cit.>.
In MBSE, the Systems Modeling Language SysML is the most prominent modeling language <cit.>. SysML is based on the UML standard with a special focus on the formalization of systems instead of modeling classes and objects for software engineering. The language supports the formalization of structural, behavioral and functional specifications <cit.>.
Structural diagrams describe the composition of systems and subsystems with their attributes and relations <cit.>. Figure <ref> depicts core elements of a block definition diagram modeled in the Eclipse-based open-source software Papyrus[<https://www.eclipse.org/papyrus/index.php>]. On top of <ref>, a Block with the name Human is defined, consisting of one attribute of type String with the attribute name Name and the visibility public indicated by the plus (+). A block can also have operations, ports etc. which are not relevant for this work and, therefore not introduced here. Underneath the Human-Block, two inheriting elements are defined by the white arrows between the blocks. The attribute Name is inherited from the parent block marked by the tailing dash. One child has an additional property Age, which only affects the block (as long as no deeper inheritance is available). The second block consists of a subsystem, indicated by the black diamond being a part association (a.k.a. composition). A part association determines that a block describes a whole element and a part of the whole element is additionally described in another element[See <https://sysmlforum.com/sysml-faq/what-are-diff-among-part-shared-referenced-associations.html> for a discussion]. The 1 and the 0..2 indicate the multiplicity, allowing to define the cardinality, e.g. number of elements. In this sample, it means one element Child2 can have zero, one or two legs. The white diamond between Leg and Shoe indicates a shared association, which is a weaker form of the part association. It refers to a relationship where the part element is still valid if the whole element is deleted, e.g. if the element Leg is not valid anymore, the Shoe is still valid. The multiplicity * indicates that one can have any number of shoes.
Since different software tools represent these elements in slightly different ways, the depiction of the block definition diagram can vary.
In SysML, the execution of single activities can be modeled using activity diagrams. A state diagram has an entry-point and an exit-point. The arrow between the states indicates a transition and describes that one state has been completed and another is active. Behind a state, the execution of one or multiple activities can be triggered, whereas an activity is a sequential execution of single actions <cit.>, see <ref>.
§.§ Data Science and Methodologies
Data Science and Business Intelligence refer to the extraction of information and knowledge from data through analysis to assist people with various types of insights, such as analysis or prediction, among many others <cit.>.
The extraction of such information to derive knowledge is called data mining (DM) <cit.>.
Machine learning (ML) is one subfield of DM, which automatically allows computer programs to improve through experience <cit.>.
Machine learning algorithms aim to solve a (specific) problem to eliminate the need for being explicitly programmed <cit.>.
To support the implementation of machine learning applications, methodologies have been proposed in a general manner <cit.>. Additionally, extensions of such methods with particular support for data science in the engineering domain are introduced <cit.>.
In the literature, the methods CRISP-DM <cit.> and KDD <cit.> are assessed in a comparative study <cit.>. According to <cit.>, CRISP-DM is a kind of implementation of the KDD process. In the following, CRISP-DM is described and used as the basis for the structure of the proposed method described in Section <ref>.
In CRISP-DM, six core steps are defined supporting the implementation of a DM application:
* Business Understanding: Project objectives, requirements and an understanding at the business level are established. Based thereon, a DM problem is defined and a rough roadmap is elaborated.
* Data Understanding: Data is collected to understand the situation from a data point of view.
* Data Preparation: The construction of the final dataset for the learning algorithm based on raw data and data transformations.
* Modeling: One or several algorithms are selected and applied to the dataset elaborated in the previous step. In this step, so-called hyperparameter tuning is applied to vary parameter values and achieve the most valuable result.
* Evaluation: The result of the algorithm is evaluated against metrics and the objectives from the first step.
* Deployment: The achievements are presented in a way that a customer or an implementation team can use it for further integration.
§.§ Related Work
In the literature, various methods supporting the formalization of data-driven engineering or machine learning using modeling languages are given.
The method of <cit.> is based on the Kevoree Modeling Framework KMF <cit.>, which is similar to the Eclipse Modeling Framework (EMF) that is the basis for the open-source modeling framework Papyrus[<https://www.eclipse.org/papyrus/>]. <cit.> proposes to model the domain knowledge and small learning units in a single domain modeling method since both are highly entangled. The method is based on a textual modeling syntax and describes what should be learned, how, and from which attributes and relations. Additionally, templates are given to render code based on the model. However, the open-source framework appears to be unmaintained, as the repository has not been updated since 2017[<https://github.com/dukeboard/kevoree-modeling-framework>].
An actively maintained framework family with means to model machine learning is shown in <cit.>. The method is based on the MontiAnna framework <cit.> and focuses on modeling artificial neural networks. The MontiAnna framework is part of the MontiCore Workbench Family <cit.>.
Similar to <cit.>, textual modeling is used to formalize the learning units and related input and output. The formalization is used as input for template-based code generation. However, the method does not reflect domain-specific (business) knowledge from an engineering perspective.
In <cit.>, focus is put on the integration of executable machine learning units modeled on a cloud platform, enabling the fast deployment of distributed systems. However, as of the current development state, the method is inflexible regarding extendability and advanced data preparation. Additionally, the integration of domain knowledge is hardly given, and there is no focus on the formalization of data-driven algorithms.
The integration of ML in CPS modeling is supported by the textual modeling framework ThingML+ <cit.>. The method extends the ThingML <cit.> modeling method, intended to support the development of IoT devices. As with the other methods, focus is put on machine learning modeling without considering domain knowledge. The method allows executable code to be derived based on model transformation using Xtext.
§.§ Summary
MBSE has been proven beneficial in increasing the design performance of systems <cit.>. According to <cit.>, the number of components and functions will increase in the future, leading to more complex systems that require advanced support in development and analysis using means of data science.
Development support for data science is given in methodologies such as CRISP-DM. However, guidance specific to the engineering domain is limited <cit.> and, to the authors' knowledge, an integration into a model-based method is unavailable.
In the literature, various methods introduce specific metamodels and languages to describe a data science task and eventually enable executable code to be derived. However, these methods are not based on an MBSE-compatible modeling language such as SysML; instead, they introduce isolated domain-specific modeling environments.
Therefore, little support for interdisciplinary communication is given, and the methods are more applicable to computer scientists than to domain outsiders such as mechanical engineers with little programming knowledge. Moreover, the domain-specific modeling methods are not aligned with the CRISP-DM methodology, leading to little support from a methodological perspective. Last but not least, the proposed methods use model transformation to reduce the implementation effort, but are seldom built in a generic way that allows the modeling or the derivation of code to be extended without extensive changes in the generation. Therefore, maintenance and applicability in practice are rather limited.
§ METHOD
This section describes a method to formalize machine learning tasks based on SysML and the application of an extended metamodel.
In the following, first, the extension of the SysML metamodel using stereotypes is described.
Special attention is given to the package structure for organizing the stereotypes, extensibility for different purposes, and generalization so that stereotypes can be used for multiple use cases.
Second, a package structure aligned with the CRISP-DM methodology is presented, enabling to guide the application of the newly defined stereotypes.
Next, a syntax and semantic is introduced, allowing to interpret the formalized machine learning model enriched with the introduced stereotypes.
Finally, means of SysML state diagram is used to define the tasks' execution order.
§.§ Metamodel Extension using Stereotypes
In the following subsections, six packages are introduced, which allow to group stereotypes that semantically describe required functionalities.
Subsequently, an exemplary stereotype hierarchy for defining higher-order functions for domain-specific data transformation purposes is described in detail.
§.§.§ Stereotype Package Structure
SysML packages are used to group and organize a model and to reduce the complexity of system parts.
Similarly, it can be applied for the organization of stereotypes, as depicted in Figure <ref>.
The organization of the stereotypes is as follows: in Common, general stereotypes are defined that are used as a basis in other packages. For example, a stereotype ML is defined in Common, and each stereotype related to machine learning inherits from it to indicate that it is a machine learning stereotype. Additionally, stereotypes can be defined that categorize other stereotypes, e.g. an abstract Pre-Processing stereotype identifies that all inheriting stereotypes are introduced for the data preparation step of the CRISP-DM methodology.
In Attributes, stereotypes for a more detailed definition of attributes are defined. These attribute stereotypes cannot be applied to blocks, only to attributes of a block. Consequently, the stereotypes extend primitive data types such as Integer or Float. The purpose of the extension is to add characteristics that describe the data, e.g. valid ranges of a value, the format of a datetime property, or a regular expression to collect or describe a part of a text value.
The package DataStorage defines available data interfaces from a general perspective required for the loading and processing of data from various data sources, e.g. SQL servers, Application Programming Interfaces (APIs), or file formats (e.g. CSV).
The purpose of these stereotypes is to support the data understanding step of the CRISP-DM methodology. Additionally, the explicit formats allow the gap between business and data understanding to be bridged. Further details are given in Section <ref>.
In the Algorithm package, various machine learning algorithms are defined and grouped with respect to algorithm types, e.g. regression or clustering algorithms. Particularly, the focus is put on key characteristics of an algorithm implementation, such as mandatory hyper-parameter or the stereotype description. Optional algorithm parameters are not described in the stereotype, but can be added during the modeling, as later illustrated in Figure <ref>.
The PreProcessing package (a.k.a. data preparation) is the most complex and extensive package due to the number of functionalities required. Additionally, a survey revealed that computer scientists spend the most effort in preparing and cleaning data <cit.>. Within this package, functions are defined that transform data so that a cleaned dataset applicable to the machine learning algorithm is obtained.
Finally, the AlgorithmWorkflow package consists of stereotypes for states of the state diagram, allowing the implementation order of the machine learning tasks to be defined. Typically in SysML, states are connected to activities, which are a sequence of execution steps. However, in practice, we found that it is very time-consuming to prepare activities first. Additionally, a function abstracted as a single block can be considered as a set of activities. Consequently, state diagrams are used instead of activity diagrams to reduce the implementation effort and complexity.
§.§.§ Stereotypes Hierarchy
As mentioned in Section <ref>, each package represents a specific hierarchy of stereotypes, allowing to describe various aspects of machine learning subtasks.
An example definition of stereotypes related to data pre-processing is depicted in Figure <ref>. As described in Section <ref>, stereotypes can be hierarchically composed to describe specific attributes only once for a set of stereotypes.
On top, the ML stereotype defined in the Common package is depicted, indicating that all inheriting stereotypes are related to machine learning. Formalizing a machine learning task is intended to be an iterative process, which is why some stereotypes are abstract, illustrated by italic letters.
If a stereotype is abstract, it means that the stereotype requires further refinement or that a child stereotype with additional information is required, e.g., DataTransformation cannot be used without further details as it can be an arbitrary transformation of data.
The purpose of abstraction is to support the early definition of tasks in the product development without details already being known, e.g., the final file-format used to store the data.
From top to bottom in Figure <ref>, the level of detail increases and the task is chosen in a more fine-grained way. Consequently, leaves are the most fine-grained representation. The inheritance additionally allows functions of a specific kind to be grouped, e.g., functions regarding outlier detection.
Due to the grouping of functions, the composition of stereotypes strongly depends on the preferences of the implementing expert and the purpose of the composition in terms of inheritance of attributes.
Note that attributes defined in a parent stereotype are also available in a child or grandchild stereotype, respectively. Therefore, each level should only represent mandatory attributes. This especially applies to algorithms with many hyper-parameters, e.g. logistic regression with more than 20 parameters and attributes[<https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>]. In case a parameter is not defined in the stereotype, it can still be added during the modeling and application of the stereotypes. A sample can be found in Section <ref>.
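To illustrate the split between mandatory stereotype attributes and optional parameters added on the block, consider the following hypothetical Python snippet; the chosen parameters are assumptions made for illustration and do not reproduce any stereotype defined in this work:

from sklearn.linear_model import LogisticRegression

# Values that a stereotype might declare as mandatory attributes (assumed for illustration):
mandatory = {"penalty": "l2", "C": 1.0}

# Optional parameters that could be added on the block during modeling:
optional = {"max_iter": 500, "class_weight": "balanced"}

# The remaining parameters of LogisticRegression simply keep their defaults.
model = LogisticRegression(**mandatory, **optional)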
Additionally, it is possible to add a set of values using Enumerations for a single attribute, e.g. MissingValueFunction highlighted in green. In this respect, modeling is more precise and guided by a fixed set of valid options.
Similarly, specific stereotypes can be used as an attribute, which means that only blocks or attributes that apply the specific stereotype can be assigned, e.g. Method_Attribute_Input indicating that only properties with a stereotype defined in the package Attributes can be applied because each attribute stereotype inherit from that stereotype.
Finally, the application of the keyword BlackBox can be used if a function shall be hidden due to security reasons or the implementation is unknown, e.g. BlackBox_Outliers on the right side of Figure <ref>.
§.§ Package Structure Guiding the Implementation
CRISP-DM as described in Section <ref> consists of six steps, each describing a specific aspect required for the development of a machine learning project. Figure <ref> illustrates the package structure aligned with the CRISP-DM methodology.
Business Understanding consists of block definition diagrams describing the system under study with the composition from a system configuration point of view. In this respect, the VAMOS method (Variant Modeling with SysML, <cit.>) is integrated to describe a specific system configuration. The integration of the VAMOS method focuses on the data interfaces and attributes of a particular configuration of a system, as different configurations of a system might lead to other data output. In this method, the VAMOS method is used to focus on data interfaces. Therefore, other systems engineering knowledge is presented in other diagrams, which is out of the scope of this work. Still, the knowledge modeled in other diagrams is connected to the instance of a block used in the VAMOS method and therefore, multiple disciplines are enabled to work on the same model.
The second step, Data Understanding, details the Business Understanding with the definition of delivered data on an attribute and data format level. Particularly, the data type and the name of the delivered data attribute are described using block definition diagrams. Additionally, attribute stereotypes are used to describe the data in detail as described in Section <ref>. With the application of stereotypes on a block level, the type of data interface is defined, e.g. CSV files or SQL servers. As a result of the formalization of the interfaces in this package, the information exchange between systems engineering and data engineering can be considered complete.
Based on the Data Understanding, the Pre-Processing is applied to transform and prepare the data in a final dataset that can be used in the Modeling. In the Pre-Processing, the most effort is required due to the possible number of required data transformations to create a dataset usable for machine learning. The result of the Pre-Processing is a final dataset, considered to be ready for the machine learning algorithm.
Within the Modeling, algorithms are applied to the final dataset. Additionally, train-test-splitting and other required functions on the machine learning algorithm are applied.
In the Evaluation package, various metrics are used to asses and prove the validity of the algorithm result of the Modeling package.
Finally, the Workflow package describes the execution order of the formalization in the previous packages using state diagrams. For each state, a custom stereotype is applied that allows a block carrying a stereotype inherited from ML to be connected. Assigning blocks to states removes the necessity to define activities, making the method less heavy-weight in application and reducing the time needed to formalize the machine learning.
Typically in CRISP-DM, the very last step is the deployment. However, the deployment is considered out of scope in this work and therefore the method ends with the workflow.
§.§ Syntax and Semantics
For the purpose of implementing ML functionalities, the utilization of the functional programming paradigm is intuitive <cit.>. It utilizes higher-order functions, invoked on (data-)objects which return objects. This allows for step-by-step decomposition, filtering and transformation of data, without side effects (changes to variables), in comparison to the imperative programming paradigm. This sequence of function invocations aligns well with how UML and other modeling languages implement abstraction levels to reflect a relevant selection of properties and to focus on the aspects of interest <cit.>. Functions are blackboxes with processing capability that are associated with (data-)artifacts upon which they can be called, and are associated with the data-artifacts they produce as output. The abstraction is realized by describing functions or a set of functions with a single stereotype and instances with blocks.
A class in UML is defined among others by attributes, stereotypes, operations (methods), constraints and relationships to other classes. In SysML, a block describes a system or subsystem with a similar definition as a class in UML. A machine learning task and the respective subtasks can be seen as a system with subsystems. Therefore, each subtask is modeled using blocks, aligned with the syntax described in section <ref>. Particularly, only input values represented as attributes of a block and the relation to other blocks are modeled. The operations (methods) are defined as stereotypes with abstracted implementations. Attributes defined on the stereotype are mandatory input values for the definition of a machine learning subtask. The attributes defined on a block itself are optional for documentation or to extend the stereotype with fine-grained details, e.g. utc attribute in the Format_Date2 block in Figure <ref>. The output of a subtask (block) is implicitly defined in the implementation of the code snippet related to a stereotype and not explicitly depicted in the model. The output of a block can be used as input for other blocks, e.g. CSV_1 block as input for the Format_Date block. Figure <ref> depicts a few samples of the aforementioned syntax and semantics. On top right, a date conversion subtask is modeled as Format_Date. The date conversion stereotype has a mandatory attribute to define the format of the output of the conversion. The input for the date conversion is the block CSV_1, connected using a part association. In this sample, the date attribute is the only input value matching due to the stereotype Datetime. However, if the input is ambiguous because the datetime is stored for instance as integer or multiple attributes of the connected block are in the correct input format, it is necessary to add additional attributes to the date conversion to select the particular input, e.g. with a new attribute which value is the particular input attribute from the connected block.
The block Format_Date2 inherits from Format_Date. Therefore, the input and the attributes are the same except for manually overwritten values, e.g. changes to the output datetime format or the additionally added attribute utc. Another sample in Figure <ref> shows the integration of multiple inputs. The Merge_DF block consists of two input blocks, and the attributes on which the merging function shall be applied are defined using an attribute that consists of two values (MergeOn). The MergeOn attribute is mandatory and therefore defined on the stereotype. Although the implicit execution order of the subtasks is defined by the associations and the necessity to compute inputs first, the execution order might be ambiguous, e.g. whether to execute first the Format_Date or the Merge_DF. As described in Section <ref>, structural diagram elements, such as blocks, require the integration in behavioral diagrams to allow the definition of an execution order <cit.>. To enable the connection of a block with a state in a state diagram, custom stereotypes are applied. The stereotypes for the states consist of a single mandatory attribute. The mandatory attribute references a block with a stereotype that inherits from the root parent stereotype ML.
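For illustration, a plausible executable reading of the blocks described above is sketched below in pandas; the file names, delimiters, column names and merge keys are assumptions, and the actual code derivation is only discussed later in this work:

import pandas as pd

# CSV_1 and CSV_2: data sources from the Data Understanding package (file names assumed).
csv_1 = pd.read_csv("csv_1.csv", delimiter=";", encoding="utf-8")
csv_2 = pd.read_csv("csv_2.csv", delimiter=";", encoding="utf-8")

# Format_Date: convert the attribute carrying the Datetime stereotype into the target format.
csv_1["date"] = pd.to_datetime(csv_1["date"]).dt.strftime("%Y-%m-%d")
csv_2["date"] = pd.to_datetime(csv_2["date"]).dt.strftime("%Y-%m-%d")

# Merge_DF: combine both inputs on the attributes listed in the MergeOn attribute.
merged = csv_1.merge(csv_2, on=["date"], how="inner")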
§ CASE STUDIES
This section presents two case studies, i.e., a weather system that predicts weather forecasts based on sensor data, and an image similarity check that makes it possible to assess whether the actual print of a 3D model with a 3D printer corresponds to the desired output.
As a result, the printing process can be stopped prematurely, saving filament and time.
§.§ UC1 - Weather Forecast based on Sensor Data
Figure <ref> illustrates the composition of the weather system, which is split into two parts. On the left side, a local station is equipped with various sensors, delivering a CSV file with measurements; on the right side, a weather forecast service additionally delivers a CSV file with weather forecasts over the internet.
From a systems engineering perspective, the weather system is a cyber-physical system and can be configured with various sensors.
Figure <ref> depicts the SysML model of the weather system with a specific configuration aligned with Figure <ref>.
Particularly, Figure <ref> depicts a method aligned with <cit.> that allows variations to be formalized. Additionally, the modeling of the system from a business perspective is the first step of the method. Focus is put on the values of interest, which are the output values of the subsystems, to keep the business understanding as concise as possible. In the middle of the figure, the core weather system configuration is depicted. The surrounding subsystems are sensors or other subsystems, e.g., an API (right side). The attributes of the sensors are output values of each subsystem, in line with the CRISP-DM business understanding that aims to get a general idea of the system and of where data originates.
To transform the business understanding into valuable data understanding, connections between the system in the business understanding and output data formats are established.
Particularly, a realization connection between the CPS and blocks describing the data format using stereotypes inheriting from ML is modeled.
In the blocks, each attribute has a type representing the actual data type in the data source and a stereotype with a ML attribute describing the representation in the machine learning method, e.g., CSV_2 attribute date_date is of type String and is mapped to the stereotype Datetime that considers aspects such as the datetime format.
Additionally, stereotype attributes are defined such as the Encoding or the Delimiter to describe the composition of the CSV file.
Figure <ref> depicts a set of subtasks applied to the data sources defined in Figure <ref>.
For an explanation of Figure <ref>, please refer to Section <ref>.
Figure <ref> illustrates the application of a train-test split and the integration of the split data into two different regression algorithms, which are specified in a mandatory attribute. As per the definition of the stereotypes, no further parameters are mandatory. For the RandomForestRegressor, the optional hyper-parameter max_depth is defined.
Figure <ref> depicts the prediction and the application of metrics such as the mean absolute error (MAE). The mandatory parameter text is a placeholder that allows text to be added which shall be included with the evaluation result.
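A rough scikit-learn sketch of what the modeling and evaluation blocks describe is given below; the input file, the target attribute and the split ratio are assumptions made purely for illustration:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

data = pd.read_csv("final_dataset.csv")                          # assumed output of the pre-processing
X, y = data.drop(columns=["temperature"]), data["temperature"]   # assumed target attribute

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

models = {
    "linear": LinearRegression(),                   # algorithm named in the mandatory attribute
    "forest": RandomForestRegressor(max_depth=10),  # optional hyper-parameter set on the block
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print("MAE (" + name + "):", mean_absolute_error(y_test, pred))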
The method's final step is integrating the blocks into an execution workflow. Figure <ref> illustrates the execution order of the algorithm steps. As can be seen, the Format_Date2 block modeled in Figure <ref> is not depicted in the workflow, meaning that it is not taken into account during the implementation and is left out as an artifact from the formalization time. The states are named such that the workflow is readily understood; the blocks are connected to the states using the ML_Block_Connection stereotype.
As the scope of this work is to formalize the machine learning and not to improve the executable code or to derive the code automatically, the result of the machine learning and the implementation itself are not depicted and left to future work.
§.§ UC2 - 3D Printer Success Evaluation during Printing
The purpose of the application is to detect faulty 3D prints during the printing process by comparing the actual status of the printed model with the intended model.
This use case illustrates the method's applicability to other data sources, such as image data, and the integration of the method into an executable workflow engine.
Additionally, the integration of pre-trained models is depicted by integrating TensorFlow Hub.
The idea of image similarity is based on an image similarity tutorial[<https://towardsdatascience.com/image-similarity-with-deep-learning-c17d83068f59>].
The use case process is described below and illustrated in Figure <ref>.
We adopt the CPEE process engine <cit.> to orchestrate the application process, as the CPEE provides a lightweight and straightforward user interface to orchestrate any application that allows interaction via REST web services.
Figure <ref> shows the workflow of the application, consisting of image generation and printing.
The first three process steps define the slicing of a STL file and the generation of the reference images.
Particularly, a Python script is called that generates the slices based on a given STL file and stores the generated reference images for later comparison and similarity check.
The second part of the process consists of a loop that prints a slice, takes a photo with a camera from the top center of the working area, and calls a similarity script to compare the intended and actual printed model.
The image similarity algorithm is defined using the machine learning formalization method, proposed here.
The defined algorithm provides a similarity index compared to a threshold value.
If the threshold is exceeded, the printing process is aborted, otherwise, it is repeated.
The machine learning model integrated into the printing process is formalized below.
Figure <ref> shows input data consisting of two images: the image sliced from the STL file and the photo from the 3D printer camera.
In contrast to the first use case, the data attributes are not further detailed with stereotypes because the input data do not show any variations, i.e. the format and resolution of the images do not change.
Figure <ref> depicts the scaling of the images such that they have the same dimension.
The conversion parameter L allows comparing the images on a black-and-white basis.
Normalization of the pixels and colors between 0 and 1 is also applied.
The normalization in the block Convert_PixelsAndNormalize should be defined as a new stereotype.
In this case, we show the application of the CustomCode stereotype, allowing for the injection of program code, which allows rapid prototyping.
However, flaws such as vulnerability or hijacking of the method might arise, as well as reduced understanding and reproducibility.
Additionally, it is not the purpose of the method to insert programmed code.
For further discussion, see Section <ref>.
With respect to potentially wrong use of the method, Figure <ref> depicts the wrongly used stereotype CustomCode on top and, below it, the correct use of stereotypes for the same result with a slightly changed code sequence.
Further, the two images are fed to the classification algorithm, as illustrated in Figure <ref>.
The input value Model describes a TensorFlow Hub input, a pre-trained model to classify images.
Finally, the result is measured using cosine distance metrics.
The threshold for canceling the printing is implemented in the workflow and can be adjusted by the user.
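The following Python outline sketches one possible realization of the described similarity check; the TensorFlow Hub handle, the image size and the channel handling are placeholders and assumptions, not the actual implementation used in the case study:

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, size=(224, 224)):
    # Scale_Image and Convert_PixelsAndNormalize: same size, grayscale ("L"), pixels in [0, 1].
    img = Image.open(path).convert("L").resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return np.stack([arr] * 3, axis=-1)        # replicate channels for an RGB feature extractor

embed = hub.KerasLayer("https://tfhub.dev/example/feature-vector/1")  # placeholder model handle

def cosine_distance(slice_path, photo_path):
    batch = np.stack([load_image(slice_path), load_image(photo_path)])
    vecs = embed(tf.constant(batch)).numpy()
    a, b = vecs[0], vecs[1]
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cos                            # compared against the user-defined threshold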
Finally, Figure <ref> depicts the execution sequence of the algorithm.
§ USER STUDY
Typical users of the presented method are computer scientists and engineers from various disciplines, depending on the application area.
Therefore, this study aims to assess and compare computer scientists' and mechanical engineers' subjective workload and user experience regarding understanding, modifying, and creating machine learning functions in a model-based method.
Further, the time required for applying changes or creating constructs in SysML is assessed to allow a comparison of the participants based on previous experiences, e.g., programming or modeling prior knowledge.
Since the study and the modeling are conducted using the SysML modeling tool Papyrus[<https://www.eclipse.org/papyrus/>], it is impossible to eliminate distortions due to the usability of the underlying tool, e.g., “How to model a block”.
Therefore, the study director will provide verbal assistance if a participant requires support due to the tool's usability.
Large sample sizes are necessary to enable quantitative evaluation, which is not applicable due to resource constraints.
Therefore, the principles of discount usability are applied to test only a small group of customers and to identify the main usability problems by applying small qualitative user studies with three to five users, a detailed scenario, and a think-aloud method <cit.>.
According to <cit.>, a 70% chance to find 80% of the usability issues is given with five users.
However, there are reports in the literature that increasing the number of participants from five to ten significantly changes the number of issues found <cit.>.
In this respect, a total number of 12 users were tested, equally distributed among the two groups, Computer Scientists (CS) and Mechanical Engineers (ME).
In the following, the experimental setting is illustrated.
Next, an introduction to the evaluation procedure is given, followed by an introduction of the test cases in Section <ref>.
Finally, the results of the user studies are depicted in Section <ref>.
A discussion on the implications from the user study is given in Section <ref>.
§.§ Experimental Setting
The user study was conducted with 12 participants.
Each participant has a university degree (B.Sc., M.Sc., or Ph.D.) and received a basic introduction to programming at university.
Half of the participants are CSs, and half MEs.
Other engineers could serve as potential users and equally valid test users as well.
However, to obtain a more homogeneous group, the engineers were limited to MEs.
Due to the participants' different knowledge in modeling, programming, and data science, a self-assessment of their experience was made at the beginning of the user test.
Table <ref> summarizes the knowledge levels of the participants based on their highest university degree, years of experience, position at the current job, and self-assessment on the three relevant dimensions.
§.§ Evaluation Procedure
The study started with a basic introduction to SysML and an overview of the method introduced in this work, taking approximately 10 minutes and involving the presentation of two predefined block definition diagrams as samples with a focus on the modeling and understanding of a block definition diagram and the application of the introduced stereotypes.
Following this, the users had to perform three tasks, i.e.,
(1) showing that they understand the purpose of the modeling and the basic idea of the method by describing the modeled methods in Figure <ref>, (2) replacing a CSV stereotype with Text-file stereotype and redefining the attribute properties of the text file, and (3) adding a new function by connecting a new block with a particular stereotype to an existing block.
Each of the tasks (1) – (3) is subdivided into sub-activities to allow fine-grained evaluation of the tasks and the performance achieved by the participants.
The sub-activities are presented with their tasks in Table <ref>.
For each participant, the time taken to perform the tasks is recorded.
After each of the three tasks, the NASA Task Load Index (NASA-TLX, <cit.>) and the System Usability Scale (SUS, <cit.>) questionnaires are filled out by the users to assess the participants' subjective workload and usability.
Before filling out the questionnaire, the users were explicitly told to evaluate the method's usability, not Papyrus's.
§.§ Test Cases
Table <ref> depicts the subtasks to accomplish the tasks of the user study.
Therefore, each subtask is assessed by the study leader to determine whether it was completed correctly or not.
If a user could not find a specific button due to the usability of Papyrus, but could justify why it is being searched for, e.g., “I need to remove a stereotype and add a new one so that a new function is defined”, the task is evaluated as correct.
To achieve reproducibility, the tasks were set exactly with the following wordings:
Task 1 Understanding: Please describe what can be seen in the currently displayed diagram and what function it fulfills. Additionally, please answer the following questions:
* What are the two input files, and in which format?
* What values are stored within CSV_2?
* What is the type of date_date, and how is it represented in the ML model?
* What are the path and encoding of the two input files?
* What are the properties of DataFrame_Merge Stereotype?
Task 2 Function Exchange: Behind the here presented TextFile function, a CSV stereotype is defined.
However, the type is incorrect.
Please change the file type to Text-File.
Additionally, set the encoding to UTF-8 and the path to C:/file.txt.
Task 3 Adding a Function: In the following view, you can see two input files connected to a merge block.
Additionally, a normalization of the merge block is required.
Please add the function for Normalization and set the value of the normalization method to MaxAbsScalar.
§.§ Survey Results
Figure <ref> shows boxplots of the required times for the individual tasks grouped per task and training of the participants in CS or ME.
For Task1, the time required is higher than for Task2 and Task3, whereas Task2 and Task3 show a comparable average and distribution.
One reason for the higher time for Task1 is that the users had to describe a model and this task is therefore more time-consuming.
It was also observed that repetitive tasks made the users faster, which also came as feedback from the participants.
Further, the dispersion of Task1 for ME is higher compared to CS.
This scatter might be explained by the varying experience levels of the participants with respect to modeling and data science.
However, there was no correlation between the time spent and the correctness of the execution of the sub-activities.
Regarding the dispersion of CS, interestingly, Tasks 2 and 3 vary more than Task1.
This can mainly be explained by the familiarity with the Papyrus modeling environment.
Thus, participants with more Papyrus experience had completed the tasks much faster than those who used Papyrus for the first time.
Figure <ref> shows the result of the individual tasks in terms of correctness in relation to the subtasks of Table <ref>.
CS perform better for T1 and T2, which can be explained by the more extensive prior experience with UML that CS obtained during their university education.
In T3, however, ME perform better.
This can be explained by an outlier value for CS that performs significantly below the average.
The overall accuracy of ME increased over the tasks, although the average of T2 is lower than that of T1.
The results of the applied NASA-TLX test to indicate the perceived workload of the participants for the specific tasks are presented in Figure <ref>.
The lower the value of a dimension of the NASA-TLX, the lower the perceived workload.
Consequently, a low scale value is seen as positive.
The Effort dimension shows, for example, that with increasing experience or task, the perceived effort decreases.
Further, the frustration increases and the performance decreases compared to T1.
For T3, the standard error is larger than for T1 and T2.
Both might be justified due to the increasing complexity of the tasks.
However, it is a contrast compared to the achieved accuracy in Figure <ref>.
The raw overall scores of the tasks are depicted in Table <ref>.
According to <cit.>, the workload is categorized as `medium', which is the second-best score and ranges from 10 to 29 points.
The cumulative results of CS and ME shows a decreasing workload among the evolving tasks.
For CS, the workload appears to be higher than for ME, especially for T3.
Based on the user feedback, no justification can be given for the difference between CS and ME.
The results of the SUS test with different rating scales are shown in Table <ref> based on <cit.>.
Figure <ref> presents the SUS score as a boxplot, prepared with an online tool for analyzing SUS questionnaire results <cit.>.
The adjective scale score in the boxplot is aligned with <cit.>, which is based on <cit.>.
The figure highlights that each task achieves the rating good for both CS and ME.
The standard error of CS is slightly higher than for ME, which can also be seen in Table <ref>.
The values of quartile scale shown in Table <ref> are according to <cit.> and acceptability scale according to <cit.>.
ME increased the score in T3, while T1 and T2 are equal.
CS decreased the score over the tasks.
However, the changes in the scores are small and therefore no firm conclusions can be drawn.
Figure <ref> depicts the percentile scale based on <cit.>.
Since the percentile score is not uniform or normally distributed, a percentile score was created based on 5000 SUS studies.
In this respect, the comparison shows that the tests achieved a percentile between 60 and 79.
For T3, ME overperformed with a percentile of 79.
For CS and ME the average percentile is 66.
T1 and T2 for ME have exactly the same value, which is why they are shown as one colour in the Figure.
§ DISCUSSION
This section discusses advantages and potential flaws of the newly introduced method to formalize machine learning tasks. The structure of the section is as follows: First, the metamodel's extension and the stereotypes' proposed structure are discussed. Next, the benefits and shortcomings of the modeling semantic are assessed with a particular focus on the applicability and potential ambiguous interpretation. Next, potential risks of model-driven machine learning and future work are presented. Finally, the implications of the user study are presented and discussed.
§.§ Stereotypes and Structure of the Custom Metamodel
The integration of custom stereotypes has been proven beneficial in the literature <cit.>. In this method, the use of stereotypes to encapsulate and abstract knowledge about machine learning tasks is beneficial as implementation details are hidden, thus supporting communication between different engineers not necessarily experienced in machine learning or programming.
With structuring the stereotypes using packages, a stereotype organization aligned to the CRISP-DM methodology is given, supporting refinements and extension in a fine-grained, hierarchical manner. Particularly, the definition of blackbox and abstract stereotypes allows the description of various functions without the necessity to specify each machine learning function in detail.
In the custom metamodel, custom Enumerations are defined to limit the number of attribute values, which reduces the model's wrong specifications. Another opportunity to reduce the scope of possible selections is to reduce the number of allowed stereotypes, e.g., only inheritance of the abstract stereotype PreProcessing can be assigned as a value for a specific attribute.
However, the filtering of stereotypes requires specific rules that have not yet been integrated or elaborated.
Although various methods are defined using stereotypes, the level of detail might be too little for practical application. DateConversion, for example, can be applied to manifold input values and various outputs, e.g., output representation as a string or Coordinated Universal Time (UTC). Adding multiple DateConversion stereotypes for each case is possible. Still, with a growing number of stereotypes, the complexity of selecting the correct, unambiguous stereotype increases while the maintainability decreases. Similarly, if too many stereotype attributes have to be set, the complexity and the effort for the application increases.
With respect to these uncertainties about the level of detail required for a fine-grained definition of machine learning tasks, industrial case studies have to be conducted to elaborate and validate a sufficient degree of detail and additionally to define future work.
§.§ Complexity of Unambiguous Modeling
The definition of an implementation structure aligned with the CRISP-DM methodology, starting from the business understanding and ending with the definition of evaluation and workflows, promises to be useful due to the integration of a comprehensive and mature methodology into an MBSE method. Additionally, more experienced computer scientists aware of CRISP-DM can rely on their experiences and the benefits of CRISP-DM. Furthermore, in practice, one third of data scientists lack business understanding and communication skills <cit.>, which can be supported by the model-based integration of CRISP-DM.
Each block implementing an ML stereotype within the implementation structure can be seen as an encapsulated subtask. Each subtask provides an output that can be used as input for another block. However, the given method does not explicitly specify the output of a block. Therefore, the output is defined by the implementing computer scientist, which may lead to different results due to the varying experience behind these decisions and the looseness of the semantics, which allows arbitrary associations to be created that may not be implementable.
In this respect, future work requires the integration of model checking to reduce orphan associations, infeasible implementations, and unwanted side effects when associations are changed.
Despite the ambitiousness of the modeling and the potential errors in the associations, the method supports the elaboration and definition of machine learning tasks from early development on, which is beneficial. The authors believe that the initial flaws of the method diminish with its application due to the possibility of reusing certain parts of the formalization. The reuse additionally allows knowledge to be preserved and contributes to standardization in the modeling and implementation, which further leads to a reduction of cost and risk in the design <cit.> and the maintenance of machine learning applications.
§.§ Potential of Model-Driven Machine Learning
The given proposal to describe machine learning tasks using a model-based method has some benefits but also disadvantages.
A core disadvantage is the initial effort to introduce stereotypes and formalize the model.
In this respect, traditional programming might be less time consuming and therefore, users might use the CustomCode stereotype to inject code.
However, it is not the purpose of the method to allow code injection, due to vulnerability risks and the reduced documentation and understanding by others.
Consequently, future work is required to investigate an extension of the method that allows code to be generated from the model, but with limitations so that code injections as described in the use case are not possible.
Another disadvantage of the stereotypes is the potential effort for maintenance if interfaces are proprietary or rapidly changing, e.g. due to configuration changes or replacement of machines.
Closely related, for huge projects, the complexity of the resulting models might be very high, including potential errors in the model or ambiguous associations, which might be very hard to find and thus lead to additional communication effort.
Nevertheless, the shortcoming of a complex ramp-up might also turn into a benefit due to the possibility of introducing model libraries containing well-defined models, leading to standardized parts that can be reused. Further, the method allows the formalization to be used as documentation of the implemented technologies, which improves maintainability and extendability for various engineers. Additionally, with further investigations regarding model validation and model debugging features, errors in the semantics can be found and repaired without actually implementing the machine learning application. However, to use this efficiently, the integration into advanced model lifecycle management <cit.> might be necessary to allow collaborative working. Due to the non-programming description of machine learning, the method is promising for increasing communication among various disciplines. In particular, with the integration of the general-purpose language SysML and the intersection of CRISP-DM and MBSE, the heterogeneous communities are broadly supported, which favors the implementation of machine learning in industrial practice and supports shifting knowledge about machine learning within enterprises. Further, the method can be integrated into early product development due to the abstract definition that allows various data interfaces to be foreseen which might otherwise have been forgotten during the development. This potentially leads to increased accuracy of the machine learning applications and might reduce the number of failing machine learning projects, which is a well-known problem in industry <cit.>.
In this section, the advantages and potential shortcomings of the method have been shown. However, the key advantage of formalized knowledge has not been detailed yet. The machine-readable artifacts (models) can be used with model transformations to generate executable code, such as a Python script.
Particularly, each ML stereotype consists of knowledge to describe a specific subtask, which corresponds to a function in a programming language, e.g. a date conversion. The function parameters are defined in the stereotype (mandatory parameters) or on the block (optional parameters). Since stereotypes have to be uniquely named, each can be mapped to a generic code template in a dedicated programming language, e.g. Python. The templates consist of fixed code and generic parts with placeholders, which are filled based on the model's attributes. The state diagram defines the execution order; all blocks are well-encapsulated functionalities; hence, each block can generate a single code block in a Jupyter Notebook[<https://ipython.org/notebook.html>]. With the automatic derivation of executable machine learning code, the effort for documentation and implementation is reduced, potentially leading to fewer errors in the interpretation.
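As a minimal sketch of such template-based derivation (the template strings and placeholder names below are invented for illustration and are not the generator of this work):

from string import Template

# One generic template per uniquely named stereotype (template text invented for illustration).
TEMPLATES = {
    "CSV": Template('$var = pd.read_csv("$path", delimiter="$delimiter", encoding="$encoding")'),
    "DateConversion": Template('$var["$column"] = pd.to_datetime($var["$column"]).dt.strftime("$format")'),
}

def render(stereotype, attributes):
    # Attributes are read from the stereotype (mandatory) and the block (optional).
    return TEMPLATES[stereotype].substitute(**attributes)

# Each rendered string becomes one code cell of the generated Jupyter notebook.
cell = render("CSV", {"var": "csv_1", "path": "csv_1.csv", "delimiter": ";", "encoding": "utf-8"})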
In this respect, future work consists of implementing a proof of concept showing that a derivation and decomposition of formalized machine learning knowledge is beneficial.
§.§ Implications from the User Study
The user study was conducted with two groups that are representative for using the method presented in this work in practice.
The results show that the majority of the tasks were successfully accomplished.
From a study perspective, the users could perform each task without additional guidance on the modeling method.
Still, problems occurred with the user-interface of Papyrus, e.g., expanding a group of elements to select a block element for modeling. However, learning effects could be observed among the tasks on both CS and ME.
The assessment of the NASA-TLX showed that the mental demand for each task is comparable.
A similar observation can be made for the level of frustration, which is slightly lower for the first task.
Contrary to expectations, the participants perceived the effort as decreasing.
With regard to the task, the effort for modeling should have been higher than for understanding a model.
Nevertheless, it can be inferred that, in terms of task load, both CS and ME can use the method without being overly strained.
From an usability perspective, the method achieved good results.
Users rated especially the consistency of the method as very high.
Comparing the method with others using the percentile curve, it achieved a rank over 66.
However, the first positive results could be due to some shortcomings in the study design.
In particular, the tendency to rate Papyrus rather than the method might have had a larger impact on the study than expected.
The users' perception of usability is shaped more by the experience with Papyrus than by the method, although they were told beforehand to focus on the method.
In this respect, a paper prototype where users had to move paper snippets on the table might have been more valuable.
Furthermore, most of the participants reported their data science knowledge as low and yet were able to explain what happens in a given model or create a model building block themselves.
However, modeling their own data science application might not be possible, as the general understanding of data science is too low.
Nevertheless, it can be seen as a result of the study that the modeled knowledge can be used as a communication medium.
Therefore, it should also be possible for non-data scientists to perform a plausibility analysis, as they can gain an understanding of the process without understanding programming code.
However, this would need to be evaluated in a further study.
Similarly, an evaluation of the results with the help of a larger study should be sought.
§ CONCLUSIONS
In this work, the definition of machine learning tasks using means of SysML is presented. Particularly, the metamodel of SysML is extended with stereotypes to reflect functions from the machine learning domain. Additionally, the CRISP-DM methodology is used as the basis for the structure of the models to organize the development with specific viewpoints.
The method is evaluated in a case study showing the integration of machine learning task definition in a cyber-physical system, as well as in a second case study where a workflow engine is integrated to interrupt a 3D printing task if the desired result cannot be achieved.
Additionally, a user study is performed to obtain an overview of the perceived workload using the NASA-TLX questionnaire and to check the usability of the method using the SUS questionnaire.
The findings of the evaluation showed that the entire workflow of a machine learning solution can be reflected using SysML. Additionally, the connection between the domain of (mechanical/electrical) engineers and machine learning experts is shown.
With the MBSE integration and the involvement of various stakeholders from different disciplines, an improvement in communication is expected, as indicated by the user study.
The user study implies that non-experts in data science can use the method as a medium of communication.
Future work consists of the extension of the method to automatically derive executable machine learning code acting as a basis for the implementation. In addition, a case study must be conducted to develop a minimum level of detail required to sufficiently define a machine learning model that can be used for communication, and thus guide the implementation of the executable code through the formalization of the machine learning model.
|
http://arxiv.org/abs/2307.05571v1 | 20230710071220 | Average of Central L-values for GL(2)$\times$GL(1), Hybrid Subconvexity, and Simultaneous Nonvanishing | [
"Liyang Yang"
] | math.NT | [
"math.NT"
] |
Average of Central L-values for GL(2)×GL(1), Hybrid Subconvexity, and Simultaneous Nonvanishing
We employ a regularized relative trace formula to establish a second moment estimate for twisted L-functions across all aspects over a number field. Our results yield hybrid subconvex bounds for both Hecke L-functions and twisted L-functions, comparable to the Weyl bound in suitable ranges. Moreover, we present an application of our results to address the simultaneous nonvanishing problem.
Liyang Yang
August 12, 2023
§ INTRODUCTION
Central L-values of modular forms play important roles in number theory and arithmetic geometry. The relative trace formula, introduced in <cit.>, has emerged as a powerful analytic tool for studying the average behavior of central L-values for holomorphic cusp forms. Building upon this, <cit.> extended the analysis to include Hilbert modular forms over totally real fields. In this article, we employ a regularized relative trace formula to investigate central values of general automorphic L-functions for GL(2)×GL(1) over a number field. Our approach yields several new results, including a second moment estimate that encompasses all aspects and incorporates stability concepts from <cit.>, hybrid-type subconvexity bounds for both Hecke L-functions and twisted L-functions that can rival the strength of the Weyl bound in the appropriate range, and an improved bound on simultaneous nonvanishing in the level aspect.
§.§ Hybrid Second Moment Involving Stability
Our first result is the following bound towards the second moment of twisted L-functions.
Let F be a number field with ring of adeles 𝔸_F. Let χ be a Hecke character of 𝔸_F^×/F^× with arithmetic conductor Q=C_(χ). Let 𝔐 be an integral ideal of norm M. For v|∞, let c_v, C_v, T_v>0. Set T=∏_v|∞T_v. Let Π_∞=⊗_v|∞Π_v be an irreducible admissible generic representation of GL(2)/F_∞. Let 𝒜_0(Π_∞,𝔐;χ_∞,ω) be the set of cuspidal automorphic representations π=⊗_vπ_v of GL(2)/F with central character ω such that π_=⊗_v<∞π_v has arithmetic conductor dividing 𝔐, and π_v⊗χ_v≃Π_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>. Then
∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2≪ (TMQ)^ε(TM+T^1/2Q·1_M≪ Q^2(M,Q)),
where the implied constants depend on ε, F, c_v, and C_v, v|∞.
Theorem <ref> is an analog and extension of the results in <cit.> from Hilbert modular forms on an anisotropic quaternion algebra to cuspidal automorphic representations of GL(2) over general number fields. The estimate (<ref>) incorporates the explicit dependence on the spectral parameter T by utilizing Nelson's test function at the archimedean places. Notably, there are no restrictions on the arithmetic conductors, allowing M and Q to be arbitrary.
The condition 1_M≪ Q^2(M,Q) in (<ref>) captures the stability of regular orbital integrals, akin to the treatment in <cit.>, although the specific regular orbital integrals under consideration differ significantly.
For F=ℚ, with Π_∞ being a holomorphic discrete series of SL(2), and χ as a Dirichlet character, Theorem <ref> implies the following.
Let k≥ 2 and N≥ 1. Let χ be a primitive Dirichlet character modulo q. Then
∑_f∈ℱ_k^(N)|L(1/2,f×χ)|^2≪ (kNq)^ε(kN+k^1/2q·1_N≪ q^2(N,q)),
where the implied constant depends only on ε. Here ℱ_k^new(N) is an orthogonal basis of normalized new forms that are holomorphic Hecke eigenforms with weight k and level N, and have trivial nebentypus.
Note that (<ref>) improves <cit.> by explicating the dependence on k, and allowing for arbitrary values of N and q.
§.§ Hybrid Weyl Subconvex Bounds
Dropping all but one terms on the left hand side of (<ref>) we then obtain the following hybrid bound for twisted L-functions.
Let F be a number field with ring of adeles 𝔸_F. Let π be either a unitary cuspidal automorphic representation of GL(2)/F or a unitary Eisenstein series. Let χ be a Hecke character of 𝔸_F^×/F^×. Suppose that π_v⊗χ_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>. Then
L(1/2,π×χ)≪ C(π⊗χ)^ε[T^1/2C_(π)^1/2+T^1/4C_(χ)^1/2],
where the implied constant depends on ε, F, c_v, and C_v, v|∞. In particular,
L(1/2,π×χ)≪_π_∞,χ_∞,F,ε C_(π×χ)^1/6+ε
if (C_(π),C_(χ))=1 and C_(π)^1-ε≪ C_(χ)≪ C_(π)^1+ε.
When considering a CM extension E/F, where π corresponds to a Hilbert modular form over F and σ_Ω represents the theta series associated with an ideal class group character Ω of E, a hybrid variant of (<ref>) for L(1/2, π×σ_Ω) has been established in <cit.> through the utilization of a relative trace formula on a quaternion algebra. This relative trace formula, together with a selection of local test function, has further been employed in <cit.> to derive a hybrid subconvexity outcome in a similar fashion.
In the case of GL(2)×GL(1) over F=ℚ, the Weyl bound L(1/2,π×χ)≪ C_(χ)^1/3+ε was established by <cit.> for a fixed cusp form π of PGL(2) and a quadratic Dirichlet character χ. This result was further generalized by <cit.>, where the Weyl bound L(1/2,π×χ)≪ C_(π×χ)^1/6+ε is proven under the conditions χ^2≠ 1, π has a level dividing C_(χ), and π has a central character χ^2. In particular, C_(π) is not coprime to C_(χ). Consequently, (<ref>) addresses a complementary case to <cit.>.
By taking ω=η^2 for some Hecke character η and π=η⊞η, we obtain the following bound for Hecke L-functions.
Let F be a number field with ring of adeles 𝔸_F. Let η and χ be Hecke character of 𝔸_F^×/F^× with coprime arithmetic conductors. Then
L(1/2,ηχ)≪min{C_(η)^1/2+ε+C_(χ)^1/4+ε,C_(η)^1/4+ε+C_(χ)^1/2+ε},
where the implied constant depends on F, ε, η_∞, and χ_∞. In particular,
L(1/2,χ)≪_F,χ_∞,εC_(χ)^1/6+ε
if χ=χ_1χ_2 with (C_(χ_1),C_(χ_2))=1 and C_(χ_1)^2-ε≪ C_(χ_2)≪ C_(χ_1)^2+ε.
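For the reader's convenience we sketch how the second bound follows from the first. Write χ=χ_1χ_2 with (C_(χ_1),C_(χ_2))=1 and C_(χ_1)^2-ε≪ C_(χ_2)≪ C_(χ_1)^2+ε, and apply the first bound with η=χ_1 and χ_2 in place of χ. Since the conductors are coprime, C_(χ)=C_(χ_1)C_(χ_2), which is ≍ C_(χ_1)^3 up to a factor C_(χ_1)^O(ε). Hence
L(1/2,χ)=L(1/2,χ_1χ_2)≪ C_(χ_1)^1/2+ε+C_(χ_2)^1/4+ε≪ C_(χ_1)^1/2+O(ε)≍ C_(χ)^1/6+O(ε),
and the second bound follows after renaming ε.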
§.§ Applications to Simultaneous Nonvanishing
Corollary <ref> serves as a versatile alternative to multiple third moment estimates in certain applications. It replaces Young's third moment bound (cf. <cit.>) in <cit.> and provides a substantial improvement to the level aspect simultaneous nonvanishing result (cf. <cit.>), replacing Petrow-Young's third moment estimate <cit.> with the use of Corollary <ref>.
Let k∈{2,3,4,5,7}. Let N≥ 2 be a prime. Denote by ℱ_2k^new(N) an orthogonal basis of normalized new forms that are holomorphic Hecke eigenforms with weight 2k and level N, and have trivial nebentypus. Let f∈ℱ_2k^new(N). Then there exists a nontrivial primitive quadratic character χ such that
#{g∈ℱ_2k^(N): L(1/2,f×χ)L(1/2,g×χ)≠ 0}≫_εN^1-ε,
where the implied constant depends on ε.
The lower bound N^1-ε in Corollary <ref> significantly improves the main result in <cit.>, where the lower bound achieved was N^1/2-ε.
From (<ref>) we obtain subconvex bounds for L(1/2,f×χ)
in the range q^δk^δ-1≪ N≪ q^2-δk^-δ, δ>0. This generalizes <cit.>.
§.§ Discussion of the Proofs
Let A=(GL(1), 1), G=GL(2), and G=PGL(2). Let f be a nice function on G(𝔸_F). Denote by
(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), g_1, g_2∈ G(𝔸_F)
the associated kernel function, which also admits a spectral expansion. By substituting these expansions of (x,y) into the integral
∫_A(F)\ A(𝔸_F)∫_A(F)\ A(𝔸_F)(x,y)χ(x)χ(y)d^×xd^×y,
we obtain a formal equality between two divergent expressions. To regularize it, we establish an identity between two holomorphic functions on ℂ^2 in the form of
J_^,(f,s,χ)=J_^,(f,s,χ), s∈ℂ^2,
where evaluating this identity at 𝐬=(0,0) provides a regularization of (<ref>).
§.§.§ The spectral side: a lower bound
We will prove a lower bound
J_^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2.
A more comprehensive version that includes the continuous spectrum is given by Theorem <ref> in §<ref>.
§.§.§ The geometric side: an upper bound
According to types of orbital integrals, we decompose the geometric side into three integrals
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
* The terms J^_,(f,χ) and J^_,(f,χ) correspond to irregular orbital integrals, exhibiting an asymptotic magnitude of T^1/2+o(1)M^1+o(1).
* The term J^,2_,(f,0,χ) represents the contribution from regular orbital integrals, which constitutes the main focus of this paper. We establish that it is bounded by ≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q).
Based on the above estimates, we obtain an upper bound for the geometric side
J_^,(f,0,χ)≪ T^1/2+εM^1+ε+T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
Using equations (<ref>), (<ref>), and (<ref>), we establish Theorem <ref> for the case where π is cuspidal.
§.§.§ Some Remarks
The approach utilized in this work exhibits similarities to that of <cit.>, albeit with notable distinctions in the treatment of test functions at ramified places. In <cit.>, the focus is primarily on the case of joint ramification, where Q| M, resulting in relatively simpler regular orbital integrals that can be further improved through nontrivial bounds on specific character sums. However, in the case of totally disjoint ramification, where (M,Q)=1, the regular orbital integrals do not exhibit any oscillatory behavior, and the trivial bound becomes optimal. This paper addresses the most general situation, allowing M and Q to take arbitrary values. Another difference from the aforementioned work is that we evaluate the expressions at s=(0,0) (instead of some s_0=(s_0,s_0) with s_0>0) in order to compute the second moment over the family. This necessitates careful consideration of singularity matching when computing the main term J^_,(f,χ)+J^_,(f,χ).
By employing a straightforward `trivial' estimate of the regular orbital integrals, we establish convexity in the χ-aspect and achieve strong hybrid subconvexity. This represents one of the key advantages of the relative trace formula. The robust nature of this approach holds promise for deriving bounds for higher rank Rankin-Selberg L-functions in the level aspect. In future work, we intend to extend the techniques presented in this paper to higher ranks, building upon the general regularized relative trace formula introduced in <cit.>.
§.§ Outline of the Paper
§.§.§ The Regularized Relative Trace Formula
In §<ref>, we introduce the notations that will be consistently used throughout the paper, along with setting up the local and global data. Additionally, we define the test functions that will play a crucial role in the relative trace formula.
Moving to §<ref>, we derive the regularized relative trace formula summarized in Theorem <ref> and Corollary <ref> in §<ref>.
§.§.§ The Spectral Side
In §<ref>, we explore the spectral side J_^,(f,0,χ). Its meromorphic continuation is obtained in §<ref>. By combining this with the local estimates developed in §<ref>–§<ref>, we establish a lower bound for the spectral side (cf. Theorem <ref>) in terms of the second moment of central L-values.
§.§.§ The Geometric Side
In §<ref>–§<ref> we handle the geometric side
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
* The small cell orbital integral J^_,(f,χ), one of the main terms, is addressed in Proposition <ref> in §<ref>, utilizing local estimates from §<ref>–§<ref>.
* The dual orbital integral J_,^(f,χ) is bounded by Proposition <ref> in §<ref>. This integral is considered `dual' to J_,^(f,χ) through Poisson summation and contributes as the other main term.
* The regular orbital integrals J^,2_,(f,0,χ) present the most challenging aspect of the geometric side J_^,(f,0,χ). Their behaviors are outlined in Theorem <ref> in §<ref>.
§.§.§ Proof of Main Results
With the aforementioned preparations, we are able to prove the main results in §<ref>. In §<ref>–§<ref> we put estimates from the spectral and geometric side all together, obtaining Theorem <ref>, which yields Theorem <ref>.
§.§ Notation
§.§.§ Number Fields and Measures
Let F be a number field with ring of integers 𝒪_F. Let N_F be the absolute norm. Let 𝔇_F be the different of F. Let 𝔸_F be the adele group of F. Let Σ_F be the set of places of F. Denote by Σ_F, (resp. Σ_F,∞) the set of nonarchimedean (resp. archimedean) places. For v∈Σ_F, we denote by F_v the corresponding local field and 𝒪_v its ring of integers. For a nonarchimedean place v, let 𝔭_v be the maximal prime ideal in 𝒪_v. Given an integral ideal ℐ, we say v|ℐ if ℐ⊆𝔭_v. Fix a uniformizer ϖ_v∈𝔭_v. Denote by e_v(·) the evaluation relative to ϖ_v normalized as e_v(ϖ_v)=1. Let q_v be the cardinality of 𝒪_v/𝔭_v. We use v|∞ to indicate an archimedean place v and write v<∞ if v is nonarchimedean. Let |·|_v be the norm in F_v. Put |·|_∞=∏_v|∞|·|_v and |·|_=∏_v<∞|·|_v. Let |·|_𝔸_F=|·|_∞⊗|·|_. We will simply write |·| for |·|_𝔸_F in calculations over 𝔸_F^× or its quotient by F^×.
Let ψ_ℚ be the additive character on ℚ\𝔸_ℚ such that ψ_ℚ(t_∞)=exp(2π it_∞), for t_∞∈ℝ↪𝔸_ℚ. Let ψ_F=ψ_ℚ∘Tr_F, where Tr_F is the trace map. Then ψ_F(t)=∏_v∈Σ_Fψ_v(t_v) for t=(t_v)_v∈𝔸_F. For v∈Σ_F, let dt_v be the additive Haar measure on F_v, self-dual relative to ψ_v. Then dt=∏_v∈Σ_Fdt_v is the standard Tamagawa measure on 𝔸_F. Let d^×t_v=ζ_F_v(1)dt_v/|t_v|_v, where ζ_F_v(·) is the local Dedekind zeta factor. In particular, Vol(𝒪_v^×,d^×t_v)=Vol(𝒪_v,dt_v)=N_F_v(𝔇_F_v)^-1/2 for all finite places v. Moreover, Vol(F\𝔸_F, dt)=1 and Vol(F^×\𝔸_F^(1),d^×t)=Res_{s=1} ζ_F(s), where 𝔸_F^(1) is the subgroup of ideles 𝔸_F^× with norm 1, and ζ_F(s)=∏_v<∞ζ_F_v(s) is the finite Dedekind zeta function. Denote by F^×\𝔸_F^(1) the Pontryagin dual of F^×\𝔸_F^(1).
Note that at a ramified place v|𝔇_F, the conductor of the v-component of ψ_F is precisely the inverse different 𝔇_F_v^-1. Write 𝔇_F_v^-1=ϖ_v^-d_v𝒪_v for some integer d_v≥ 1. Set ψ=⊗_v∈Σ_Fψ_v, an additive character of F\𝔸_F, where each ψ_v is the additive character of F_v defined by
* at v|𝔇_F, ψ_v(x):=ψ_F(ϖ_v^-d_vx), where x∈ F_v is viewed in 𝔸_F via the natural embedding F_v↪𝔸_F;
* at v|∞ or v∤𝔇_F, ψ_v(x):=ψ_F(x), where x∈ F_v.
Then ψ is unramified everywhere. Let D=N_F/ℚ(𝔇_F) be the absolute discriminant.
§.§.§ Reductive Groups
For an algebraic group H over F, we write [H]:=H(F)\ H(𝔸_F). We equip measures on H(𝔸_F) as follows: for each unipotent subgroup U of H, we equip U(𝔸_F) with the Haar measure such that, with U(F) equipped with the counting measure, the quotient [U] has measure 1. We equip the maximal compact subgroup K of H(𝔸_F) with the Haar measure such that K has total mass 1. When H is split, we also equip the maximal split torus of H with the Tamagawa measure induced from that of 𝔸_F^×.
In this paper we set A=(GL(1),1), and G=GL(2). Let B be the group of upper triangular matrices in G.
Let G=Z\ G and B_0=Z\ B, where Z is the center of G. Let T_B be the diagonal subgroup of B. Then A≃ Z\ T_B. Let N be the unipotent radical of B. Let K=⊗_vK_v be a maximal compact subgroup of G(𝔸_F), where K_v=U_2(ℂ) if v is complex, K_v=O_2(ℝ) if v is real, and K_v=G(𝒪_v) if v<∞. For v∈Σ_F,, m∈ℤ_≥ 0, define
K_v[m]:={[ a b; c d ]∈ G(𝒪_v): c∈𝔭_v^m}.
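For later reference we record a standard index computation (a sketch, with constants depending only on F suppressed): giving K_v total mass one as above, the volume of K_v[m] equals [K_v:K_v[m]]^-1, and for m≥ 1
[K_v:K_v[m]]=q_v^{m-1}(q_v+1)≍ q_v^{m},
since K_v/K_v[1]≃ℙ^1(𝒪_v/𝔭_v) and each further level raises the index by a factor q_v. Consequently ∏_v<∞(volume of K_v[m_v])^-1≍ N_F(𝔐)^1+o(1), an estimate used repeatedly on the geometric side.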
§.§.§ Automorphic Data
Let s=(s_1, s_2)∈ℂ^2.
Let ω∈F^×\𝔸_F^(1). Denote by 𝒜_0([G],ω) the set of cuspidal representations on G(𝔸_F) with central character ω.
For η_1, η_2∈F^×\𝔸_F^(1), let (η_1⊗η_2) be the unitary parabolic induction from B(𝔸_F) to G(𝔸_F) associated with η_1⊗η_2, and let η_1⊞η_2 be Langlands sum.
Let Φ∈𝒮(𝔸_F) with Fourier transform Φ̂ and let ω”=|·|^iα be a unitary character of 𝔸_F^×. Define an Eisenstein series
E(s,x;Φ,ω”)=∑_δ∈ B_0(F)\G'(F)∫_𝔸_F^×Φ(zηδ x)| zx|^sω”(z)d^×z
on [G']. Then E(s,x;Φ,ω”) converges absolutely in (s)>1 and admits a meromorphic continuation to ℂ, given by
E(s,x;Φ,ω”)=E_+(s,x;Φ,ω”)+E_+^∧(s,x;Φ,ω”)+E_(s,x;Φ,ω”),
where
E_(s,x;Φ,ω”):=-Φ(0)| x|^s/(s+iα)+Φ̂(0)| x|^{s-1}/(s-1+iα),
E_+(s,x;Φ,ω”):=∑_δ∈ B_0(F)\G'(F)∫_|z|≥ 1Φ(zηδ x)| zx|^sω”(z)d^×z,
E_+^∧(s,x;Φ,ω”):=∑_δ∈ B_0(F)\G'(F)∫_|z|≥ 1Φ̂(zηδ x)| zx|^1-sω”^-1(z)d^×z.
Moreover, E_+(s,x;Φ,ω”) and E_+^∧(s,x;Φ,ω”) converges absolutely for all s.
§.§.§ Other Conventions
For a function h on G(𝔸_F), we define h^* by assigning h^*(g)=h(g^-1), g∈ G(𝔸_F). Let F_1(s), F_2(s) be two meromorphic functions. Write F_1(s)∼ F_2(s) if there exists an entire function E(s) such that F_1(s)=E(s)F_2(s). Denote by α≍β for α, β∈ℝ if there are absolute constants c and C such that cβ≤α≤ Cβ.
Throughout the paper, we adhere to the ε-convention, wherein ε denotes a positive number that can be chosen arbitrarily small, though it may vary between different instances.
Acknowledgements
I am grateful to Dinakar Ramakrishnan for his helpful discussions. I would also like to extend my thanks to Caltech for their warm hospitality during my visit, where this paper was written.
§ CHOICE OF THE TEST FUNCTION
The notations introduced in this section will be extensively utilized throughout the remainder of this paper.
§.§ Intrinsic Data
Let F be a number field. Let χ=⊗_vχ_v and ω=⊗_vω_v be primitive unitary Hecke characters of F^×\𝔸_F^×. Let 𝔐 be an integral ideal of norm |𝔐|:=N_F(𝔐).
§.§.§ Analytic Conductor of Hecke Characters
Let C(χ):=⊗_v∈Σ_FC_v(χ) be the analytic conductor of χ, where each local conductor C_v(χ) is defined as follows.
* For F_v≃ℝ, χ_v=^n_v'|·|^iκ_v, n_v'∈{0,1}, we define
C_v(χ)=1+|n_v'+iκ_v/2|.
* For F_v≃ℂ, and χ_v(a)=(a/|a|)^n_v'|a|^2iκ_v, a∈ F_v^×, we define
C_v(χ):=(1+|iκ_v+|n_v'|/2|)^2.
* For v<∞, let r_χ_v be the exponent of χ_v, namely, r_χ_v is the smallest nonnegative integer such that χ_v is trivial over 1+ϖ_v^r_χ_v𝒪_v^× but not over 1+ϖ_v^r_χ_v-1𝒪_v^×. Let C_v(χ)=q_v^r_χ_v.
Denote by C_∞(χ):=⊗_v|∞C_v(χ) and C_(χ):=⊗_v<∞C_v(χ).
§.§.§ Analytic Conductor of Automorphic Representations of GL(2)/F
Let π=⊗_vπ_v be an automorphic representation of G(𝔸_F) with central character ω_π=ω=⊗_vω_v. Let C(π):=⊗_vC_v(π) be the analytic conductor of π, where each local conductor C_v(π) is defined as follows.
* Let v<∞. We denote by r_π_v≥ 0 the exponent of π_v, which is the least integer such that π_v has a vector that is K_v[r_π_v]-invariant (as defined in (<ref>)). The local conductor of π_v is defined as C_v(π):=q_v^r_π_v.
* For v|∞, the local L-function of π_v can be expressed as a product of shifted Gamma factors, given by L_v(s,π_v)=Γ_v(s+β_1,v)Γ_v(s+β_2,v), where β_1,v, β_2,v∈ℂ, and Γ_v represents the Gamma function over F_v. Let
C_v(π):=[(1+|β_1,v|)(1+|β_2,v|)]^[F_v:ℝ].
Let C_(π)=∏_v<∞C_v(π) be the arithmetic conductor of π and let C_∞(π)=∏_v|∞C_v(π) be the archimedean conductor of π.
§.§.§ Uniform Parameter Growth
Let Π_∞=⊗_v|∞Π_v be an irreducible admissible generic representation of GL(2)/F_∞. For v|∞, let L_v(s,Π_v)=Γ_v(s+γ_1,v)Γ_v(s+γ_2,v) be the associated L-factor of Π_v.
For v|∞, we say that Π_v has uniform parameter growth of size (T_v;c_v,C_v) for some constants c_v and C_v, and parameters T_v, if c_vT_v≤ |γ_j,v|≤ C_vT_v.
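For instance (this example is only for orientation and is not used in the arguments below): if F_v≃ℝ and Π_v is the holomorphic discrete series of weight k, as in Corollary <ref>, then L_v(s,Π_v)=Γ_ℝ(s+(k-1)/2)Γ_ℝ(s+(k+1)/2), so |γ_1,v|, |γ_2,v|∈[(k-1)/2,(k+1)/2] and Π_v has uniform parameter growth of size (k;1/4,1), say, once k≥ 2. In that setting T=∏_v|∞T_v is comparable to the weight k.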
§.§.§ Ramification Parameters
For v∈Σ_F,, let e_v(·) be the normalized evaluation of F_v such that e_v(ϖ_v)=1. Following the notation in §<ref>, let r_χ_v (resp. r_ω_v) be the exponent of χ_v (resp. ω_v). We set m_v:=e_v(𝔐) and n_v:=r_χ_v. Let Σ_^+:={v∈Σ_F_: m_v≥ n_v≥ 1}, and Σ_^-:={v∈Σ_F_: m_v< n_v, n_v≥ 1}. Let K_v[m_v] and K_v[n_v] be defined by (<ref>).
Denote by 𝔔=∏_v<∞𝔭_v^n_v. For simplicity we write Q=C_(χ), M=|𝔐|:=N_F(𝔐), and M'=C(ω_). Suppose that Q>1. Note that M'| M.
§.§.§ The Family of Automorphic Forms
Let c_v and C_v be positive constants for each v|∞, and let T_v>0. In this paper, we will vary T_v as needed, while keeping c_v and C_v fixed. Let T=∏_v|∞ T_v.
For v|∞, let Π_v be an irreducible admissible generic representation of GL(2)/F_v which has uniform parameter growth of size (T_v;c_v,C_v), cf. §<ref>.
* Let 𝒜_0(Π_∞,𝔐;χ_∞,ω) be the set of cuspidal automorphic representations π=⊗_vπ_v of GL(2)/F such that
* π has central character ω,
* for all v<∞, π_v has a K_v[m_v]-invariant vector, i.e., r_π_v≤ e_v(𝔐).
* π_v⊗χ_v≃Π_v at each v|∞.
Note that Weyl law yields #𝒜_0(Π_∞,𝔐;χ_∞,ω)=(T|𝔐|)^1+o(1).
* Let 𝒳_0(Π_∞,𝔐;χ_∞,ω) be the set of Hecke characters η=⊗_vη_v∈F^×\𝔸_F^(1) such that
* for all v<∞, the representation η_v⊞ω_vη_v has a K_v[m_v]-invariant vector, i.e., r_η_v+r_ω_vη_v≤ m_v,
* η_vχ_v⊞ω_vη_vχ_v≃Π_v at each v|∞.
By <cit.> there exists some
d'∈ [10^-1exp(-3√(log T^2MQ^2)),exp(-3√(log T^2MQ^2))],
which may be determined by π and χ,
such that for all s with |s-1/2|=d',
|L(1/2,π×χ)|≪exp(log^3/4C(π×χ))· |L(s,π×χ)|.
Here the implied constant depends only on F.
§.§.§ Other Notations
For a function h on G(𝔸_F) or G(F_v), v∈Σ_F, define h^*(g)=h(g^-1) and
(h*h^*)(g)=∫ h(gg'^-1)h^*(g')dg'=∫ h(gg')h(g')dg'.
§.§ Construction of Test Functions
We construct a test function f on G(𝔸_F) using the following procedure:
* For the archimedean places (cf. §<ref>), we rely on Nelson's work <cit.> (cf. §1.5.2 and §14 on p.80) and follow the approach described in <cit.>, §1.10. Additional information can be found in <cit.>, Part 2.
* For the finite places, we employ the test function constructed in <cit.>, which involves a double average over unipotent translations weighted by characters (cf. §<ref>).
§.§.§ Construction of f_∞
Let v|∞. Recall that Π_v has uniform parameter growth of size (T_v;c_v,C_v) (cf. Definition <ref> in §<ref>). Then Π_v has uniform parameter growth of size (T_v;c_v/2,2C_v), where s_0 is the parameter defined by (<ref>) in §<ref>.
Let 𝔤 (resp. 𝔤') be the Lie algebra of G(F_v) (resp. A(F_v)), with imaginary dual 𝔤̂ (resp. 𝔤̂'). One can choose an element τ∈𝔤̂ with restriction τ'=τ|_A∈𝔤̂', so that τ (resp. τ') lies in the coadjoint orbit 𝒪_Π_v of Π_v (resp. 𝒪_1_v of the trivial representation 1_v of A(F_v)). Let f̃^∧_v: 𝔤̂→ℂ be a smooth bump function concentrated on {τ+(ξ,ξ^⊥): ξ≪ T_v^1/2+ε, ξ^⊥≪ T_v^ε}, where ξ lies in the tangent space of 𝒪_Π_v at τ, and ξ^⊥ lies in the normal direction. Let f̃_v∈ C_c^∞(G(F_v)) be the pushforward of the Fourier transform of f̃_v^∧, truncated at its essential support, namely,
supp f̃_v⊆{g∈ G(F_v): g=I_2+O(T_v^-ε), Ad^*(g)τ=τ+O(T_v^-1/2+ε)},
where the implied constants rely on c_v and C_v.
Then, in the sense of <cit.>, §2.5, the operator π_v(f̃_v) is approximately a rank one projector with range spanned by a unit vector microlocalized at τ. Let
f_v(g):=f_v(g,χ_v)*f_v(g,χ_v)^*,
where v|∞, g∈ G(F_v), and
f_v(g,χ_v):=χ_v( g)∫_Z(F_v)f̃_v(zg)ω_v(z)d^×z.
Due to the support of f̃, the function f_v(g) is non-zero unless | g|_v > 0. Therefore, (| g|_v) = 1. As a result, the function f_v is smooth on G(F _v).
§.§.§ Application of Transversality
By definition, one has (cf. (14.13) in <cit.>)
‖f̃_v‖_∞≪_ε T_v^1+ε, v|∞,
where ‖·‖_∞ is the sup-norm. For g∈G(F_v), we may write
g=[ a b; c d ]∈ G(F_v), g^-1=[ a' b'; c' d' ]∈ G(F_v).
Define
d_v(g):=min{1, |d^-1b|_v+ |d^-1c|_v+ |d'^-1b'|_v+|d'^-1c'|_v }, if dd'≠ 0,
1, if dd'=0.
Let notation be as above. Then there is a fixed neighborhood 𝒵 of the identity in A(F_v) with the following property. Let g be in a small neighborhood of I_2 in G(F_v). Let δ_v>0 be small. Then
Vol({z∈𝒵: dist(gzτ, A(F_v)τ)≤δ_v})≪δ_v/d_v(g).
Here dist(gzτ,A(F_v)τ) denotes the infimum over g'∈ A(F_v) of ‖gzτ-g'τ‖, where ‖·‖ is a fixed norm on 𝔤̂.
Proposition <ref> (with δ_v=T_v^-1/2+ε) will be used to detect the restriction ^*(g)τ=τ+O(T_v^-1/2+ε) in the support of f̃_v. By (<ref>), (<ref>), and (<ref>),
|f̃_v(g)|≪ T_v^1+ε·1_|Ad^*(g)τ-τ|≪ T_v^-1/2+ε·1_|g-I_2|≪ T_v^-ε·min{1,T_v^-1/2+ε/d_v(g)}.
§.§.§ Finite Places
For v∈Σ_F,, we define a function on G(F_v), supported on Z(F_v)\ K_v[m_v], by
f_v(z_vk_v;ω_v)=(K_v[m_v])^-1ω_v(z_v)^-1ω_v(E_2,2(k_v))^-1,
where K_v[m_v] is the image of K_v[m_v] in G(F_v), and E_2,2(k_v) is the (2,2)-th entry of k_v∈ K_v[m_v]. For g_v∈ G(F_v), define by
f_v(g_v)=1/|τ(χ_v)|^2∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∑_β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(α)χ_v(β)f_v(g_α,β,v;ω_v),
where
τ(χ_v)=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×ψ_v(αϖ_v^-n_v)χ_v(α)
is the Gauss sum relative to the additive character ψ_v, and
g_α,β,v:=[ 1 αϖ_v^-n_v; 1 ]g_v[ 1 βϖ_v^-n_v; 1 ].
Note that n_v=0 for almost all v∈Σ_F,. Hence, for all but finitely many v∈Σ_F,, the test function f_v(·)=f_v(·;ω_v) (cf. (<ref>)) is supported in Z(F_v)\ K_v[m_v].
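We also record a standard fact that will be used without further comment in the geometric estimates: assuming n_v≥ 1 (the relevant case v|𝔔) and recalling that χ is primitive and ψ_v is unramified, the Gauss sum above satisfies
|τ(χ_v)|=q_v^{n_v/2},
so the normalizing factor in the definition of f_v(g_v) is |τ(χ_v)|^-2=q_v^-n_v.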
§.§.§ Construction of the Test Function
Let f=⊗_v∈Σ_Ff_v, where f_v is constructed in §<ref> and §<ref>. Note that f_∞ is determined by Π_∞.
§ THE REGULARIZED RELATIVE TRACE FORMULA
§.§ Fourier Expansion of the Kernel Function
Let f=⊗_vf_v be defined in §<ref>. Then f defines an integral operator
R(f)ϕ(g)=∫_G(𝔸_F)f(g')ϕ(gg')dg'
on the space L^2([G],ω) of functions on [G] which transform under Z(𝔸_F) by ω and are square integrable on [G]. This operator is represented by the kernel function
(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), g_1, g_2∈ G(𝔸_F).
It is well known that L^2([G],ω) decomposes into the direct sum of the space L_0^2([G],ω) of cusp forms and spaces L_^2([G],ω) and L_^2([G],ω) defined using Eisenstein series and residues of Eisenstein series respectively. Then
_0(g_1,g_2)+_(g_1,g_2)=(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2),
where _0(g_1,g_2) (resp. _(g_1,g_2)) is the contribution from the cuspidal (resp. non-cuspidal) spectrum. Explicit expansions of _0(g_1,g_2) and _(g_1,g_2) will be given in §<ref>.
By Bruhat decomposition (g_1,g_2)=_(g_1,g_2)+_(g_1, g_2), where
_(g_1,g_2)=∑_γ∈ B_0(F)f(g_1^-1γ g_2), _(g_1,g_2)=∑_γ∈ B_0(F)wN(F)f(g_1^-1γ g_2).
Let 𝒦(·,·)∈{(·,·), _0(·,·), _(·,·), _(·,·),_(·,·)}. Define
ℱ_0ℱ_1𝒦(g_1,g_2):= ∫_[N]𝒦(g_1,u_2g_2)du_2, ℱ_1ℱ_0𝒦(g_1,g_2):=∫_[N]𝒦(u_1g_1,g_2)du_1,
ℱ_1ℱ_1𝒦(g_1,g_2):= ∫_[N]∫_[N]𝒦(u_1g_1,u_2g_2)du_2du_1,
ℱ_2ℱ_2(g_1,g_2):= ∑_α∈ A(F)∑_β∈ A(F)∫_[N]∫_[N](u_1α g_1,u_2β g_2)θ(u_1)θ(u_2)du_2du_1.
Using Poisson summation twice the integral ℱ_2ℱ_2𝒦(g_1,g_2) is equal to
𝒦(g_1,g_2)-ℱ_0ℱ_1𝒦(g_1,g_2)-ℱ_1ℱ_0𝒦(g_1,g_2)+ℱ_1ℱ_1𝒦(g_1,g_2).
By <cit.> we have, for x, y∈ A(𝔸_F), that
ℱ_0ℱ_1_(x,y)=ℱ_1ℱ_0_(x,y)=ℱ_1ℱ_1_(x,y)≡ 0.
Along with (<ref>) we then obtain that
ℱ_2ℱ_2_(x,y)=_(x,y).
Note that (<ref>) only holds over (x,y)∈ A(𝔸_F)× A(𝔸_F).
§.§ The Relative Trace Formula
§.§.§ The Spectral Side
Let (s_1)≫ 1 and (s_2)≫ 1. Define
J_^(f,s,χ):=J_0^(f,s,χ)+J_^(f,s,χ),
the spectral side, where s=(s_1, s_2)∈ℂ^2, and
J_0^(f,s,χ):= ∫_[A]∫_[A]_0(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y,
J_^(f,s,χ):= ∫_[A]∫_[A]ℱ_2ℱ_2_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y.
By Proposition 6.4 in <cit.> (cf. §6.2), the integral J_^(f,s,χ) converges absolutely in (s_1), (s_2)≫ 1. In addition, J_^(f,s,χ) admits a holomorphic continuation J_^,(f,s,χ) to 𝐬∈ℂ^2. We will see in §<ref> that J_^,(f,s,χ) is roughly an average of L(1/2+s_1,π×χ)L(1/2+s_2,π×χ) as π varies over families of unitary automorphic representations of GL(2)/F.
§.§.§ The Geometric Side
By (<ref>) and the decomposition (x,y)=_(x,y)+_(x,y), the geometric side is
J_^(f,s,χ):=J^_,(f,s,χ)+J^_,(f,s,χ),
where (s_1)≫ 1, (s_2)≫ 1, and
J^_,(f,s,χ):= ∫_[A]∫_[A]ℱ_2ℱ_2_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y
,
J^_,(f,s,χ):= ∫_[A]∫_[A]_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y.
As in <cit.>, we have
J^_,(f,s,χ)=J^_,(f,s,χ)+J^,2_,(f,s,χ),
where J_,^(f,s,χ) is defined by
∫_𝔸_F^×∫_𝔸_F^×f([ 1; x 1 ][ y; 1 ])|x|^s_1+s_2|y|^s_2χ(y)d^×yd^×x,
and the regular orbital J^,2_,(f,s,χ) is defined by
∑_t∈ F-{0,1}∫_𝔸_F^×∫_𝔸_F^×f([ y x^-1t; xy 1 ])|x|^s_1+s_2|y|^s_2χ(y)d^×yd^×x.
Note that (<ref>) converges absolutely in (s_1+s_2)>1, and by <cit.> the integral J^,2_,(f,s,χ) converges absolutely in 𝐬∈ℂ^2, and in particular, the sum over t∈ F-{0,1} is finite, which is called stability of the regular orbital integrals (cf. <cit.>, <cit.>, <cit.>). Therefore, the geometric side J_^(f,s,χ) admits a holomorphic continuation J_^,(f,s,χ) to 𝐬∈ℂ^2. We shall investigate it in §<ref>-§<ref>.
§.§.§ The Regularized Relative Trace Formula
Note that _0(x,y)=ℱ_2ℱ_2_0(x,y), _0(x,y)+_(x,y)=(x,y)=_(x,y)+_(x,y). Then by (<ref>),
_0(x,y)+ℱ_2ℱ_2_(x,y)=ℱ_2ℱ_2(x,y)=ℱ_2ℱ_2_(x,y)+_(x,y).
As a consequence, when (s_1)≫ 1 and (s_2)≫ 1,
J_^(f,s,χ)=J_^(f,s,χ).
By applying the singularity matching process described in <cit.>, the equality (<ref>) extends to its holomorphic continuation, leading to the following equality between two holomorphic functions:
Let notation be as before. Then
J_^,(f,s,χ)=J_^,(f,s,χ).
In this paper, our focus is on evaluating the above regularized RTF at 𝐬 = 0 = (0,0). Write 𝐬'=(s,0). Define the following normalized integrals
J^_,(f,χ):= [J^_,(f,s',χ)-s^-1Res_{s=0} J^_,(f,s,χ)]_s=0,
J^_,(f,χ):= [J^_,(f,s',χ)-s^-1Res_{s=0} J^_,(f,s,χ)]_s=0.
Notice that Res_{s=0} J^_,(f,s,χ)+Res_{s=0} J^_,(f,s,χ)≡ 0. Therefore,
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
Let notation be as before. Then
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
§ THE SPECTRAL SIDE: MEROMORPHIC CONTINUATION AND BOUNDS
In this section we shall show that J_^,(f,s,χ) admits a holomorphic continuation to 𝐬∈ℂ^2. Moreover, we derive a lower bound of it as follows.
Let notation be as in §<ref>. Then
J_^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2
+T^-1/2-ε(MQ)^-ε∑_η∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt,
where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω), and the implied constant depends only on F, ε, c_v and C_v at v|∞.
§.§ Spectral Side: Meromorphic Continuation
§.§.§ Spectral Expansion of the kernel functions
Let notation be as in §<ref>. Let f=⊗_vf_v be defined in §<ref>. Let _0(x,y) and _(x,y) be defined by (<ref>) in §<ref>. Then by the spectral decomposition we have (e.g., cf. <cit.>)
_0(x,y)=∑_σ∈𝒜_0([G],ω)∑_ϕ∈𝔅_σσ(f)ϕ(x)ϕ(y),
_(x,y)=1/4π∑_η∈F^×\𝔸_F^(1)∫_iℝ∑_ϕ∈𝔅_σ_0,ηE(x,ℐ(λ,f)ϕ,λ)E(y,ϕ,λ)dλ.
Here, 𝔅_σ denotes an orthonormal basis of the cuspidal representation σ, and σ_0,η is given by σ_0,η=(η,η^-1ω).
§.§.§ Rankin-Selberg Periods
Let θ=⊗_vθ_v be the generic character of N(𝔸_F) induced by the fixed additive character ψ (cf. §<ref>). For a generic automorphic form φ on G(𝔸_F), define the associated Whittaker function by
W_φ(g):=∫_[N]φ(ug)θ(u)du, g∈ G(𝔸_F),
Using the multiplicity one property, we can express W_φ(g) as a product over all places v∈Σ_F as W_φ(g)=∏_v∈Σ_FW_φ,v(g_v), where g=⊗_vg_v∈ G(𝔸_F). The local Whittaker function W_φ,v is spherical for all but finitely many places v∈Σ_F. Define
Ψ(s,φ,χ):=∫_𝔸_F^×W_φ([ x; 1 ])|x|^sχ(x)d^×x=∏_v∈Σ_FΨ_v(s,φ,χ),
where the local integral is defined by
Ψ_v(s,φ,χ)=∫_F_v^×W_φ,v([ x_v; 1 ])|x_v|_v^sχ_v(x_v)d^×x_v.
The integral Ψ(s,φ,χ) converges absolutely in (s)>1. Furthermore, it is related to L-functions as follows.
* If φ∈𝔅_σ, where σ∈𝒜_0([G],ω), then Ψ(s,φ,χ) converges absolutely for all s∈ℂ, making it an entire function. By Hecke's theory, Ψ(s,φ,χ) serves as an integral representation of the complete L-function Λ(s+1/2,σ×χ).
* If φ∈𝔅_0,η associated with some η∈F^×\𝔸_F^(1), then as established in <cit.>, the function Ψ(s,φ,χ) converges absolutely in the region (s)≫ 1, and it has a meromorphic continuation to s∈ℂ, representing the complete L-function Λ(s+1/2,ηχ)Λ(s+1/2,η^-1ωχ).
Let v∈Σ_F be a place. Let (s)>1. We denote by
R_v,λ(s,ϕ,χ):=Ψ_v(s,ϕ,χ)L_v(s+1/2,σ_v×χ_v)^-1, if ϕ∈𝔅_σ,σ∈𝒜_0([G],ω),
Ψ_v(s,ϕ,χ)/L_v(s+1/2,η_vχ_v)L_v(s+1/2,η_v^-1χ_vω_v), if ϕ∈𝔅_0,η,η∈F^×\𝔸_F^(1).
Let R_λ(s,ϕ,χ)=∏_v∈Σ_FR_v,λ(s,ϕ,χ). Then R_λ(s,ϕ,χ) turns out to be an entire function of s∈ℂ. Denote by
R_,λ(s,ϕ,χ)=∏_v∈Σ_F,R_v,λ(s,ϕ,χ), Ψ_∞(s,ϕ,χ):=∏_v∈Σ_F,∞Ψ_v(s,ϕ,χ).
§.§.§ Meromorphic Continuation
According to the construction of the test function f, the Eisenstein series E(x,ℐ(λ,f)ϕ,λ), ϕ∈𝔅_0,η, vanishes unless ϕ is right invariant under K_v[m_v], where m_v=e_v(𝔐), cf. §<ref>.
Substituting the Rankin-Selberg periods (cf. §<ref>) into the decomposition (<ref>) we then obtain J_^(f,s,χ)=J_0^(f,s,χ)+J_^(f,s,χ), where
J_0^(f,s,χ)= ∑_σ∈𝒜_0(Π_∞,𝔐;χ_∞,ω)∑_ϕ∈𝔅_σΨ(s_1,σ(f)ϕ)Ψ(s_2,ϕ,χ),
J_^(f,s,χ)= 1/4π∑_η∫_iℝ∑_ϕ∈𝔅_σ_0,ηΨ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ)dλ,
where η ranges through F^×\𝔸_F^(1), (s_1)≫ 1 and (s_2)≫ 1.
The function J_0^(f,s,χ) continues to a holomorphic function J_0^,(f,s,χ) in ℂ^2. It is proved in <cit.> that J_^(f,s,χ) extends to a holomorphic function J_^,(f,s,χ) in -1/4<(s_1), (s_2)<1/4 with
J_^,(f,s,χ)=1/4π∑_η∫_iℝ∑_ϕ∈𝔅_σ_0,ηΨ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ)dλ,
where η∈F^×\𝔸_F^(1), and the integrand Ψ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ) is identified with its meromorphic continuation. In particular, J_^(f,s,χ) is holomorphic in the region -1/4<(s_1), (s_2)<1/4.
§.§ Spectral Side: the Second Moment
Let notation be as in §<ref>. Denote by f^(g)=⊗_v|∞f_v(g_v,χ_v)⊗⊗_v∈Σ_F, f_v(g_v;ω_v), where f_v(·;ω_v) is defined by (<ref>), i.e., f_v(z_vk_v;ω_v)=(K_v[m_v])^-1ω_v(z_v)^-1ω_v(E_2,2(k_v))^-1. Define
φ^(x):=∫_G(𝔸)f^(g)∏_v|𝔔[1/τ(χ_v)∑_βχ_v(β)σ_v([ 1 βϖ_v^-n_v; 1 ])]σ(g)φ(x)dg,
where β ranges over (𝒪_v/ϖ_v^n_v𝒪_v)^×.
To simplify notations, we shall still write Ψ(s,ϕ^,χ) and Ψ(s,E(·,ϕ,λ)^,χ) for their holomorphic continuations, respectively. It follows from the construction of f that J_0^,(f,0,χ) and J_^,(f,0,χ) can be written as follows.
Let notation be as before. Then
J_0^,(f,0,χ)= ∑_σ∈𝒜_0([G],ω)∑_ϕ∈𝔅_σ|Ψ(0,ϕ^,χ)|^2,
J_^,(f,0,χ)= 1/4π∑_η∈F^×\𝔸_F^(1)∫_iℝ∑_ϕ∈𝔅_σ_0,η|Ψ(0,E(·,ϕ,λ)^,χ)|^2dλ.
§.§.§ Local calculations
The non-archimedean calculation presented in <cit.> is as follows:
Let notation be as before. Let σ∈𝒜_0([G],ω). Let ϕ∈σ be a pure tensor. Suppose that ϕ^≠ 0. Then for v∈Σ_F,, we have
Ψ_v(s,ϕ^,χ)=W_ϕ,v(I_2)L_v(s+1/2,σ_v×χ_v), (s)≥ 0.
Let notation be as before. Let ϕ∈σ_λ,η be a pure tensor. Let φ=E(·,ϕ,λ). Suppose that φ^≠ 0. Then for v∈Σ_F,, (s)≥ 0, we have
Ψ_v(s,φ^,χ)=W_φ,v(I_2)L_v(s+1/2+λ,η_vχ_v)L_v(s+1/2-λ,η_v^-1χ_vω_v).
§.§ Spectral Side: the lower bound
In this section we prove Theorem <ref>.
Define f_v^∘(g):=∫_Z(F_v)f̃_v(zg)ω_v(z)d^×z. Let π=π_∞⊗π_ be a unitary automorphic representation of GL(2)/F with π_∞⊗χ_∞≃Π_∞. Let v|∞. By the properties of f_v (cf. e.g., <cit.>),
T_v^-1/4-ε≪_ε∫_F_v^×(π_v(f_v^∘)(W_v⊗χ_v))([ x_v; 1 ])d^×x_v≪_ε T_v^-1/4+ε
for some W_v in the Kirillov model of π_v. By definition (<ref>) in §<ref>, we have
π_v(f_v(·,χ_v))W_v([ x_v; 1 ])χ_v(x_v)=(π_v(f_v^∘)(W_v⊗χ_v))([ x_v; 1 ]).
Hence, Ψ_v(s_0,π_v(f_v(·,χ_v))W_v,χ_v)≫_ε T_v^-1/4-ε for some W_v in the Kirillov model of π_v. Let ϕ∈π be a cusp form with Petersson norm ⟨ϕ,ϕ⟩=1, and Whittaker function W_ϕ=⊗_vW_ϕ,v (defined by (<ref>)), such that W_ϕ,v=W_v, for all v|∞, and W_ϕ,v is ∏_v<∞K_v[n_v]-invariant. Then
Ψ_v(0,ϕ^,χ)=Ψ_v(0,π_v(f_v(·,χ_v))W_v,χ_v)≫_ε T_v^-1/4-ε,
where the implied constant depends on ε, c_v and C_v at v|∞.
Together with Lemmas <ref>, <ref>, and the bound |W_(I_2)|≫ (TM)^-ε (cf. <cit.>),
J_0^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2,
and J_^,(f,0,χ) is
≫ T^-1/2-ε(MQ)^-ε∑_η∫_t∈ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt,
where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω). Therefore, Theorem <ref> follows.
§ THE GEOMETRIC SIDE: THE ORBITAL INTEGRAL J^_,(F,Χ)
Let f=⊗_vf_v be the test function constructed in §<ref>. Let 𝐬=(s,0) with s∈ℂ. Recall the definition in §<ref>:
J^_,(f,χ):=[J^_,(f,s,χ)-s^-1Res_{s=0} J^_,(f,s,χ)]_s=0,
where, for (s)>1, the small cell orbital integral is defined by (cf. §<ref>):
J^_,(f,s,χ):=∫_𝔸_F^×∫_𝔸_F^×f([ y b; 1 ])ψ(xb)|x|^1+sχ(y)d^×xd^×y,
which is a Tate integral representing Λ(1+s,1_F).
Let notation be as before. Then
J^_,(f,χ)≪_ε M^1+εT^1/2+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
§.§ Local calculations at nonarchimedean places
Let (s)>0. Let
J_,v(s):=∫_F_v^×|x_v|_v^1+2s∫_F_v^×∫_F_vf_v([ y_v b_v; 1 ])ψ_v(x_vb_v)χ_v(y_v)db_vd^×y_vd^×x_v.
Take 𝐬=(s,0). By definition, we have
J^_,(f,s,χ):= ∏_v∈Σ_FJ_,v(s), (s)>0.
By <cit.> we have, for v∈Σ_F,, that
J_,v(s)=
N_F_v(𝔇_F_v)^-1/2(K_v[m_v])^-11_(𝔇_F_v^-1)^×(x_v), if v|𝔔,
|x_v|_v^1+2s_0N_F_v(𝔇_F_v)^-1/2(K_v[m_v])^-11_𝔇_F_v^-1(x_v), if v∤𝔔,
where m_v=e_v(𝔐) is defined in §<ref>. Hence,
J^_,(f,s,χ)=V_F· N_F(𝔇_F)^1/2+sζ_F(1+2s)∏_v|∞J_,v(s),
where V_F:=∏_v<∞(K_v[m_v])^-1≍ |𝔐|^1+o(1).
§.§ Local estimates at archimedean places
Let notation be as before. Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε. Then for s∈𝒞, we have
∏_v|∞J_,v(s)≪ T^1/2+ε,
where the implied constant depends on F, ε, c_v and C_v, v|∞.
Let s∈𝒞. Denote by
ℐ_v(x_v,s):=|x_v|_v^1+2s∫_F_v^×∫_F_vf_v([ y_v b_v; 1 ])ψ_v(x_vb_v)χ_v(y_v)db_vd^×y_v.
By the construction of f we have ℐ_v(x_v,s)≠ 0 unless x_vT_v^-1-γ_v≪_ε T_v^-1/2+ε, where γ_v is determined by τ∈𝔤̂, cf. §<ref>. Moreover, by decaying of Fourier transform of f_v, f_v([ y_v b_v; 1 ])≪ T_v^-∞ if |b_v|_v≫ T_v^-1/2+ε. Together with (<ref>),
ℐ_v(x_v,s)≪ |x_v|_v^1+2ε1_x_vT_v^-1-γ_v≪_ε T_v^-1/2+ε· T_v^1+ε· T_v^-1/2+ε· T_v^-1/2+ε+O(T_v^-∞),
where the factor T_v^1+ε comes from the sup-norm estimate (cf. (<ref>)), the first factor T_v^-1/2+ε comes from the range of y_v according to the support of f_v (cf. (<ref>)), and the second T_v^-1/2+ε comes from the essential range of b_v, i.e., |b_v|_v≪ T_v^-1/2+ε. In particular, the implied constant in (<ref>) depends only on F_v, ε, and c_v, C_v at v|∞.
As a consequence, we have
J_,v(s)=∫_F_v^×ℐ_v(x_v,s)d^×x_v≪ T_v^ε∫_F_v|x_v|_v^2ε1_x_vT_v^-1-γ_v≪_ε T_v^-1/2+εdx_v+O(T_v^-∞),
which is ≪ T_v^1/2+ε. Then (<ref>) follows.
§.§ Proof of Proposition <ref>
Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε (cf. Lemma <ref>). By Cauchy formula,
J^_,(f,χ)=1/2π i∫_𝒞J^_,(f,s,χ)/sds
Substituting (<ref>) into the above integral,
J^_,(f,χ)=V_F/2π i∫_𝒞N_F(𝔇_F)^1/2+sζ_F(1+2s)∏_v|∞J_,v(s)/sds.
By Lemma <ref> we have
J^_,(f,χ)≪ V_F∫_𝒞N_F(𝔇_F)^1/2+ε T^1/2+ε/|ε|·max_s∈𝒞|ζ_F(1+2s)|ds≪ M^1+εT^1/2+ε.
Hence, Proposition <ref> follows.
§ THE GEOMETRIC SIDE: THE ORBITAL INTEGRAL J_,^(F,Χ)
Let f=⊗_vf_v be the test function constructed in §<ref>. Let 𝐬=(s,0) with s∈ℂ. Recall the definition in §<ref>:
J^_,(f,χ):=[J^_,(f,s,χ)-s^-1Res_{s=0} J^_,(f,s,χ)]_s=0,
where, for (s)>0, the dual orbital integral is defined by (cf. §<ref>):
J_,^(f,s,χ):=∫_𝔸_F^×∫_𝔸_F^×f([ 1; x 1 ][ y; 1 ])|x|^sχ(y)d^×yd^×x,
which is a Tate integral representing Λ(2s,1_F). Let s<0. By Poisson summation (or equivalently, the functional equation),
J^_,(f,s,χ) becomes
∫_𝔸_F^×∫_𝔸_F^×∫_𝔸_Ff([ 1; b 1 ][ y; 1 ])ψ(bx)|x|^1-sχ(y)dbd^×yd^×x.
Let notation be as before. Then
J^_,(f,χ)≪ M^1+εT^1/2+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
Write
J^_,(f,s,χ)=∏_v∈Σ_FJ_,v(s),
where (s)<0, and each J_,v(s) is defined by
∫_F_v^×∫_F_v^×∫_F_vf_v([ 1; b_v 1 ][ y_v; 1 ])ψ_v(b_vx_v)|x_v|_v^1-sχ_v(y_v)db_vd^×y_vd^×x_v.
Similar to (<ref>) and (<ref>) we have
J^_,(f,s,χ)=V_F· N_F^(𝔔)(𝔇_F)^1/2+sζ_F^(𝔔)(1+2s)∏_v|𝔔J_,v(s)∏_v|∞J_,v(s),
where V_F:=∏_v<∞(K_v[m_v])^-1≍ |𝔐|^1+o(1),
N_F^(𝔔)(𝔇_F)=∏_{𝔭|𝔇_F, 𝔭+𝔔=𝒪_F}N_F(𝔭), ζ_F^(𝔔)(1+2s):=∏_{𝔭 prime, 𝔭+𝔔=𝒪_F}(1-N_F(𝔭)^{-1-2s})^{-1}.
§.§ Local estimates at ramified places
At v|𝔔, by definition
f_v([ 1; b_v 1 ][ y_v; 1 ])=0
unless
z_v[ 1 αϖ_v^-n_v; 1 ][ 1; b_v 1 ][ y_v; 1 ][ 1 βϖ_v^-n_v; 1 ]∈ K_v[m_v]
for some α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×, and z_v∈ F_v^×, i.e.,
z_v[ y_v+α b_vy_vϖ_v^-n_v (y_v+α b_vy_vϖ_v^-n_v)βϖ_v^-n_v+αϖ_v^-n_v; b_vy_v 1+β b_vy_vϖ_v^-n_v ]∈ K_v[m_v].
Analyzing the (2,1)-th entry of the matrix on the LHS of (<ref>) yields that e_v(z_v)+e_v(b_v)+e_v(y_v)≥ m_v≥ n_v. Hence an investigation of the (1,1)-th and (2,2)-th entry leads to
e_v(y_v)+2e_v(z_v)=0
e_v(z_v)+e_v(y_v)≥ 0, e_v(z_v)≥ 0.
As a consequence, e_v(z_v)=0, i.e., z_v∈𝒪_v^×. So e_v(y_v)=0, e_v(b_v)≥ m_v.
Hence we have f_v([ 1; b_v 1 ][ y_v; 1 ])=1_𝒪_v^×(y_v)1_ϖ_v^m_v𝒪_v(b_v). After a change of variable (i.e., β↦ y_v^-1β),
𝒥_v(x_v)=|τ(χ_v)|^-2/(K_v[m_v])∑_α,β∫_ϖ_v^m_v𝒪_v1_K_v[m_v](X_v)ψ_v(b_vx_v)χ_v(α)χ_v(β)db_v,
where X_v denotes the matrix
[ 1+α b_vϖ_v^-n_v (1+α b_vϖ_v^-n_v)βϖ_v^-n_v+αϖ_v^-n_v; b_v 1+β b_vϖ_v^-n_v ].
Note that 1_K_v[m_v](X_v)≠ 0 unless (1+α b_vϖ_v^-n_v)β +α∈ϖ_v^n_v𝒪_v. Hence,
𝒥_v(x_v)=|τ(χ_v)|^-2χ_v(-1)/(K_v[m_v])∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∫_ϖ_v^m_v𝒪_vψ_v(b_vx_v)χ_v(1+α b_vϖ_v^-n_v)db_v.
Write b_v=ϖ_v^mγ_v, γ_v∈𝒪_v^×. Changing the variable α↦γ_v^-1α,
𝒥_v(x_v)=|τ(χ_v)|^-2χ_v(-1)/(K_v[m_v])ζ_F_v(1)∑_m≥ m_vq_v^-mG(m)R(m,x_v).
where G is the character sum
G(m):=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(1+αϖ_v^m-n_v),
and R(m,x_v) is the Ramanujan sum
R(m,x_v):=∫_𝒪_v^×ψ_v(γ_vϖ_v^mx_v)d^×γ_v.
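For the reader's convenience we recall the evaluation underlying the bounds used in the next step; this is a standard computation, stated here under the normalization that 𝒪_v^× has volume one for d^×γ_v (the discrepancy with the measure normalization fixed earlier is a factor N_F_v(𝔇_F_v)^-1/2, which is absorbed into the implied constants). Since ψ_v is unramified,
R(m,x_v)=1 if m+e_v(x_v)≥ 0, R(m,x_v)=-(q_v-1)^-1 if m+e_v(x_v)=-1, R(m,x_v)=0 if m+e_v(x_v)≤ -2.
In particular R(m,x_v)=0 for m<-e_v(x_v)-1 and |R(m,x_v)|≪ q_v^-1 for m=-e_v(x_v)-1, as asserted below.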
Applying the trivial bound G(m)≪ q_v^n_v, and R(m,x_v)=0 if m<-e_v(x_v)-1, R(m,x_v)≪ q_v^-1 if m=-e_v(x_v)-1, and R(m,x_v)=1 if m≥ -e_v(x_v), we then deduce that
J_,v(s)≪ (m_v+2n_v)(K_v[m_v])^-1.
§.§ Local estimates at archimedean places
Similar to Lemma <ref> we have
Let notation be as before. Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε. Then for s∈𝒞, we have
∏_v|∞J_,v(s)≪ T^1/2+ε,
where the implied constant depends on F, ε, c_v and C_v, v|∞.
§.§ Proof of Proposition <ref>
Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε (cf. Lemma <ref>). By Cauchy formula,
J^_,(f,χ)=1/2π i∫_𝒞J^_,(f,s,χ)/sds
Plugging the expression of J^_,(f,s,χ) into the above integral,
J^_,(f,χ)=V_F/2π i∫_𝒞N_F^(𝔔)(𝔇_F)^1/2+sζ_F^(𝔔)(1+2s)∏_v|𝔔J_,v(s)∏_v|∞J_,v(s)/sds.
By the estimate (<ref>) and Lemma <ref> we have
J^_,(f,χ)≪ V_F^1+ε∫_𝒞N_F(𝔇_F)^1/2+ε T^1/2+ε/|ε|·max_s∈𝒞|ζ_F^(𝔔)(1+2s)|ds≪ M^1+εT^1/2+ε.
Hence, Proposition <ref> follows.
§ THE GEOMETRIC SIDE: REGULAR ORBITAL INTEGRALS
Recall the definition (<ref>) in §<ref>:
J^,2_,(f,0,χ):=∑_t∈ F-{0,1}∏_v∈Σ_Fℰ_v(t),
where for v∈Σ_F,
ℰ_v(t):=∫_F_v^×∫_F_v^×f_v([ y_v x_v^-1t; x_vy_v 1 ])χ_v(y_v)d^×y_vd^×x_v.
By Theorem 5.6 in <cit.> (or <cit.>) the regular orbital integral J^,2_,(f,0,χ) converges absolutely. We shall establish an upper bound for it as follows.
Let notation be as before. Then
J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where the implied constant depends on ε, F, c_v, and C_v, v|∞. Here T, M, and Q are defined in §<ref>. In particular, J^,2_,(f,0,χ)=0 once M≫ Q^2(M,Q).
The observation that J^,2_,(f,0,χ)=0 for large M aligns with the calculation in <cit.>, despite the distinct nature of the regular orbital integrals involved.
§.§ Local Estimates: unramified nonarchimedean places
The following straightforward calculation can be found in <cit.>.
Let v∈Σ_F, be such that v∤𝔔. Then
ℰ_v(t)≪(K_v[m_v])^-1(1-e_v(1-t))(1+e_v(t)-2e_v(1-t))·1_{e_v(t-1)≤ 0, e_v(t)-e_v(1-t)≥ m_v}.
Moreover, ℰ_v(t)=1 if e_v(t)=e_v(1-t)=0, m_v=0, and v∤𝔇_F. In particular, ℰ_v(t)=1 for all but finitely many v's.
§.§ Local Estimates at Ramified Places Σ_^-
In this section, we consider the case where v∈Σ_^-, that is, v|𝔔 and m_v < n_v. The local integrals ℰ_v(t) behave rather differently here from the regular orbital integrals treated in <cit.>, and so require a separate analysis.
Let v∈Σ_^-. Then
ℰ_v(t)≪
q_v^m_v+k if e_v(1-t)=-2k for m_v-n_v≤ k≤ -1
(e_v(t)-e_v(1-t)+1)q_v^m_v if e_v(t)-e_v(1-t)≥ 0
(1-e_v(t))^2q_v^m_v if e_v(t)≤ -1
0 otherwise,
where the implied constant is absolute.
By definition, f_v([ y_v x_v^-1t; x_vy_v 1 ])=0 unless
ϖ_v^k[ 1 αϖ_v^-n_v; 1 ][ y_v x_v^-1t; x_vy_v 1 ][ 1 βϖ_v^-n_v; 1 ]∈ K_v[m_v]
for some k∈ℤ. Write x_v=ϖ_v^r_1γ_1, y_v=ϖ_v^r_2γ_2, where r_1, r_2∈ℤ and γ_1, γ_2∈𝒪_v^×. Then (<ref>) becomes
ϖ_v^k[ 1 γ_1αϖ_v^-n_v; 1 ][ ϖ_v^r_2 ϖ_v^-r_1t; ϖ_v^r_1+r_2 1 ][ 1 γ_1γ_2βϖ_v^-n_v; 1 ]∈ K_v[m_v].
Changing variables α↦γ_1^-1α, β↦γ_1^-1γ_2^-1β, the above constraint becomes
ϖ_v^kY_α,β,r_1,r_2,t∈ K_v[m_v]
for some k∈ℤ, where Y_α,β,r_1,r_2,t is defined by
[ ϖ_v^r_2+αϖ_v^r_1+r_2-n_v (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v; ϖ_v^r_1+r_2 1+βϖ_v^r_1+r_2-n_v ].
By definition the local integral ℰ_v(t) becomes
1/|τ(χ_v)|^2∑_α,βχ(α)χ(β)∑_r_1, r_2∈ℤ f_v(Y_α,β,r_1,r_2,t;ω_v),
where f_v(·;ω_v) is defined by (<ref>) in §<ref>. Note that (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
We will categorize our discussion into three cases based on the value of k: the case where k≤ -1 will be addressed in §<ref> below, the case where k=0 will be addressed in §<ref> below, and the case where k≥ 1 will be addressed in §<ref> below. Proposition <ref> can then be readily derived from these discussions.
§.§.§ The case that k≤ -1
Suppose k≤ -1. Then (<ref>) simplifies to
2k+e_v(1-t)=0
m_v-n_v≤ k≤ -1
r_2=0, r_1=n_v
1+β∈ϖ_v^-k𝒪_v
1+α∈ϖ_v^-k𝒪_v
ϖ_v^k[(1+α )(1+β)ϖ_v^-n_v+ϖ_v^-n_v(t-1)]∈𝒪_v.
* Suppose that k=-n_v. Then m_v=0, e_v(1-t)=-2n_v, α=β=-1ϖ_v^n_v. So the contribution from this case is
1/|τ(χ_v)|^21_e_v(1-t)=-2n_v1_m_v=0=q_v^-n_v1_e_v(1-t)=-2n_v1_m_v=0.
* Suppose that k>-n_v. Write α=-1+ϖ_v^-kα', and β=-1+ϖ_v^-kβ', where α', β'ϖ_v^n_v+k. Then
ϖ_v^k[(1+α )(1+β)ϖ_v^-n_v+ϖ_v^-n_v(t-1)]∈𝒪_v
becomes
α'β'+(t-1)ϖ_v^2k∈ϖ_v^n_v+k𝒪_v.
So the contribution from this case is
|τ(χ_v)|^-2/(K_v[m_v])∑_max{m_v-n_v,1-n_v}≤ k≤ -1𝒮(k),
where
𝒮(k):=∑_α',β'ϖ_v^n_v+k
α'β'=-(t-1)ϖ_v^2kϖ_v^n_v+kχ(1-ϖ_v^-kα')χ(1-ϖ_v^-kβ')ω_v(ϖ_v^-kβ').
Employing the trivial bound to 𝒮(k), we see that the corresponding contribution to ℰ_v(t) in this case (i.e., k>-n_v) is
≪(K_v[m_v])^-1∑_max{m_v-n_v,1-n_v}≤ k≤ -1q_v^k·1_e_v(1-t)=-2k.
Therefore, the contribution to ℰ_v(t) in the case that k≤ -1 is
≪q_v^k/(K_v[m_v])∑_m_v-n_v≤ k≤ -11_e_v(1-t)=-2k≪ q_v^m_v+k∑_m_v-n_v≤ k≤ -11_e_v(1-t)=-2k.
§.§.§ The case that k=0
Suppose that k=0 in (<ref>), which implies that
r_2+e_v(1-t)=0
r_1+r_2≥ m_v
min{r_2, r_1+r_2-n_v}≥ 0
(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v.
Then r_1+r_2≥max{n_v,m_v}=n_v, e_v(t)-r_1≥ -n_v, and e_v(1-t)=-r_2≤ 0. So 0≤ r_2=-e_v(t-1), and n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. Therefore, the contribution to ℰ_v(t) from this case is
1_e_v(t)-e_v(1-t)≥ 0/|τ(χ_v)|^2(K_v[m_v])∑_n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v𝒥_1(r_1,t),
where 𝒥_1(r_1,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^r_2+αϖ_v^r_1-e_v(t-1)-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-e_v(t-1)-n_v).
* Suppose r_2≥ 1. Then e_v(1-t)≤ -1, implying that e_v(t)=e_v(1-t)=-r_2≤ -1. Hence, -r_1+e_v(t)=-r_1-r_2≤ -n_v (from the third constraint in (<ref>)). Along with the last condition in (<ref>) we have -r_1+e_v(t)≥ -n_v. So -r_1+e_v(t)=-n_v, i.e., r_1+r_2=n_v. Consequently,
𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α )β +ϖ_v^n_v-r_1t+α≡ 0ϖ_v^n_vχ(α)χ(β)ω_v(1+β).
Write t=ϖ_v^e_v(t)γ under the embedding F^×↪ F_v^×, where γ∈𝒪_v^×. Then
𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α )(1+β)≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α)χ(β)ω_v(1+β),
which, after a change of variables, is equal to
𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α-ϖ_v^-e_v(t))χ(β-1)ω_v(β).
Since -γ+ϖ_v^-e_v(t)∈𝒪_v^×, by the trivial bound, we have |𝒥_1(r_1,t)|≤ q_v^n_v.
* Suppose r_2=0. Then e_v(1-t)=0. Therefore, 𝒥_1(r_1,t) is equal to
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(1+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-n_v).
Changing the variable α↦α^-1, the sum 𝒥_1(r_1,t) becomes
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(α+ϖ_v^r_1-n_v)(β +ϖ_v^n_v-r_1t) ≡ t-1ϖ_v^n_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-n_v).
Changing variables α↦α-ϖ_v^r_1-n_v and β↦β-ϖ_v^n_v-r_1t, 𝒥_1(r_1,t) can be rewritten as
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ t-1ϖ_v^n_vχ(α-ϖ_v^r_1-n_v)χ(β-ϖ_v^n_v-r_1t)ω_v(1-t+βϖ_v^r_1-n_v).
Since e_v(t-1)=0, then β is uniquely determined by α. Hence the trivial bound yields |𝒥_1(r_1,t)|≤ q_v^n_v.
Consequently, substituting the above discussions into (<ref>) we then see that the contribution from this case is
≪1_e_v(t)-e_v(1-t)≥ 0/|τ(χ_v)|^2(K_v[m_v])∑_r_1q_v^m_v≪ (e_v(t)-e_v(1-t)+1)q_v^m_v1_e_v(t)-e_v(1-t)≥ m_v-n_v,
where n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v.
§.§.§ The case that k≥ 1
Suppose that k≥ 1 in (<ref>), which implies that
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥max{n_v, m_v}=n_v
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
From the last constraint we conclude that k-r_1+e_v(t)≥ -n_v. Hence
e_v(1-t)=e_v(t)≤ -k≤ -1
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k≥ n_v-r_1-r_2≥ 1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
Therefore, the contribution to ℰ_v(t) from this case is
1_e_v(t)≤ -1/|τ(χ_v)|^2(K_v[m_v])∑_r_1∑_e_v(t)≤ r_2≤ -e_v(t)-2∑_1≤ k≤ -e_v(t)𝒥_2(r_1,r_2,k,t),
where n_v+e_v(t)+1≤ r_1≤ n_v, and 𝒥_2(r_1,r_2,k,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^n_v+k-r_1ϖ_v^n_vχ(α)χ(β)
ω_v(1+βϖ_v^r_1+r_2-n_v).
Note that k≥ 1, β +ϖ_v^k∈(𝒪_v/ϖ_v^n_v𝒪_v)^×. So α is uniquely determined by β. Therefore, by the trivial bound, |𝒥_2(r_1,r_2,k,t)|≤ q_v^n_v. Along with (<ref>), the contribution to ℰ_v(t) from this case is
∑_n_v+e_v(t)+1≤ r_1≤ n_v∑_e_v(t)≤ r_2≤ -e_v(t)-2q_v^m_v1_e_v(t)≤ -1≪ (1-e_v(t))^2q_v^m_v1_e_v(t)≤ -1.
A more refined bound can be derived in the case where k≥ 0 by estimating the character sums nontrivially. However, it becomes apparent that the contribution from the k≥ 0 case is overshadowed by the contribution from k≤ -1. Therefore, there is no necessity to further reduce the error term.
§.§ Local Estimates at Ramified Places Σ_^+
Consider v∈Σ_^+, which means v|𝔔 and m_v≥ n_v, where m_v=e_v(𝔐) and n_v=r_χ_v (cf. §<ref>).
Let v∈Σ_^+. Then
ℰ_v(t)≪
(1-e_v(t))^2q_v^m_v if e_v(t)≤ -1,m_v=n_v,
(e_v(t)-e_v(1-t)+1+m_v-n_v)q_v^m_v if e_v(t)≥ m_v-n_v,
0 otherwise,
where the implied constant depends at most on F_v.
Consider the notation used in the proof of Proposition <ref> in §<ref> (or in <cit.>). Since m_v≥ n_v≥ 1, the constraints (<ref>) can be simplified as follows:
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
By considering the second and fourth constraints in (<ref>), we deduce that k≥ 0. We can now proceed to examine the following two cases.
§.§.§ The case that k=0
Suppose that k=0 in (<ref>), which implies that
r_2+e_v(1-t)=0
r_1+r_2≥ m_v
min{r_2, r_1+r_2-n_v}=0
(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v.
Then r_1+r_2≥max{n_v,m_v}=m_v, e_v(t)-r_1≥ -n_v, and e_v(1-t)=-r_2≤ 0. So 0≤ r_2=-e_v(t-1), and m_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. Therefore, the contribution to ℰ_v(t) from this case is
1_e_v(t)-e_v(1-t)≥ m_v-n_v/|τ(χ_v)|^2(K_v[m_v])∑_m_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v𝒥_1(r_1,t),
where 𝒥_1(r_1,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^r_2+αϖ_v^r_1-e_v(t-1)-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-e_v(t-1)-n_v).
By the trivial bound (as in §<ref>) the sum in (<ref>) is
≪ (e_v(t)-e_v(1-t)+1+m_v-n_v)q_v^m_v1_e_v(t)-e_v(1-t)≥ m_v-n_v.
§.§.§ The case that k≥ 1
Suppose that k≥ 1 in (<ref>), which implies that
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ n_v
k+r_1+r_2-m_v=0
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
Since m_v≥ n_v, then by the second and the third constraints in (<ref>) we have m_v=n_v. From the last constraint we conclude that k-r_1+e_v(t)≥ -n_v. Hence
e_v(1-t)=e_v(t)≤ -k≤ -1, m_v=n_v
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k= n_v-r_1-r_2≥ 1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
As in §<ref>, the contribution to ℰ_v(t) from this case is
1_e_v(t)≤ -1·1_m_v=n_v/|τ(χ_v)|^2(K_v[m_v])∑_n_v+e_v(t)+1≤ r_1≤ n_v∑_e_v(t)≤ r_2≤ -e_v(t)-2𝒥_2(r_1,r_2,k,t),
where 𝒥_2(r_1,r_2,k,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^n_v+k-r_1ϖ_v^n_vχ(α)χ(β)
ω_v(1+βϖ_v^r_1+r_2-n_v).
By trivial bound the contribution to ℰ_v(t) in this case is
≪ (1-e_v(t))^2q_v^m_v1_e_v(t)≤ -11_m_v=n_v.
Therefore, Proposition <ref> follows.
Let notation be as before. Then
𝒥_v^(2)(r_1,t)≪ n_vq_v^r_ω_v+n_v/2+n_v-r_1+e_v(t)/2,
where the implied constant is absolute.
Note that (1+αϖ_v^r_1-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v amounts to
(α^-1+ϖ_v^r_1-n_v)β +α^-1ϖ_v^n_v-r_1t+1∈ϖ_v^n_v𝒪_v.
Changing the variable α↦α^-1, we have
𝒥_v^(2)(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(α+ϖ_v^r_1-n_v)β +αϖ_v^n_v-r_1t+1∈ϖ_v^n_v𝒪_vχ_v(α)χ_v(β)ω_v(1+βϖ_v^r_1-n_v).
Changing variables α↦α-ϖ_v^r_1-n_v and β↦β-ϖ_v^n_v-r_1t, 𝒥_v^(2)(r_1,t) becomes
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ t-1ϖ_v^n_vχ_v(α-ϖ_v^r_1-n_v)χ_v(β-ϖ_v^n_v-r_1t)ω_v(1-t+βϖ_v^r_1-n_v).
Let h∈𝒪_v^×. Let 𝒥_v^(2)(r_1,t,ψ_v,h) be defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ t-1ϖ_v^n_vχ_v(α-ϖ_v^r_1-n_v)χ_v(β-ϖ_v^n_v-r_1t)ψ_v(hβ q_v^-r_ω_v).
Here we recall that ψ_v is a fixed unramified additive character of F_v. By definition, we have 𝒥_v^(2)(r_1,t,ψ_v,h)=0 if r_ω_v>r_1.
Notice that χ is primitive. By Theorem 2G of <cit.> (cf. p.45) or Deligne's quasi-orthogonality of trace functions (cf. <cit.>) and Lemmas 12.2 and 12.3 in <cit.>, following the proof of Proposition 2 in <cit.>, we have
𝒥_v^(2)(r_1,t,ψ_v,h) ≪ n_v(q_v^n_v-r_1+e_v(t),q_v^r_1-n_v,q_v^n_v)^1/2q_v^n_v/2·1_r_ω_v≤ r_1,
where the implied constant is absolute. In particular, (<ref>) yields that
𝒥_v^(2)(r_1,t,ψ_v,h) ≪ n_vq_v^n_v-r_1+e_v(t)/2· q_v^n_v/2·1_r_ω_v≤ r_1.
Since ω_v is primitive, we have the Gauss sum expansion
ω_v(γ)=1/τ(ω_v)∑_h∈ (𝒪_v/ϖ_v^r_ω_v𝒪_v)^×ω_v(h)ψ_v(hγ q_v^-r_ω_v),
where q_v^r_ω_v is the conductor of ω_v.
Hence, (<ref>) follows from (<ref>), (<ref>), triangle inequality, and the fact that |τ(ω_v)|=q_v^r_ω_v/2.
* Suppose that k+r_1+r_2=n_v. Then (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2=n_v≥ m_v
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
As a consequence, (<ref>) yields
e_v(1-t)=e_v(t)≤ -k≤ -1
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k=n_v-r_1-r_2≥ 1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
From the last constraint we conclude that k-r_1+e_v(t)=-n_v. Hence
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^-e_v(t)ϖ_v^n_vχ(α)χ(β)
ω_v(1+βϖ_v^-k)
Note that 2k+r_2=n_v-r_1+k=-e_v(t)≥ 1. Then γ:=ϖ_v^2k+r_2-tϖ_v^-e_v(t)∈𝒪_v^×. After a change of variables, we obtain
𝒥_v^(3)(r_1,r_2,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡γϖ_v^n_vχ(α-ϖ_v^k+r_2)χ(β-ϖ_v^k)ω_v(βϖ_v^-k).
By <cit.> and the fact that k≤ -e_v(t),
𝒥_v^(3)(r_1,r_2,t)≪ n_vq_v^n_v+r_ω_v/2· q_v^min{k,k+r_2,n_v}/21_r_ω_v≤ n_v-k≤ n_vq_v^n_v+r_ω_v-e_v(t)/21_r_ω_v≤ n_v.
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
* Suppose that k+r_1+r_2≥ n_v+1. Then m_v=0, which forces that r_ω_v=0. In this case (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ n_v
k+r_1+r_2≥ m_χ_v
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v^×.
e_v(1-t)=e_v(t)≤ -k≤ -1
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k+r_1+r_2≥ n_v+1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
Since n_v=m_v, then r_ω_v=0, i.e., ω_v is trivial. Hence the contribution from this case to ℰ_v(t) is
ℰ^(3)_v(t):=∑_r_1=n_v+e_v(t)+1^n_v∑_r_2=e_v(t)^-e_v(t)-2q_v^-2r_1s_0-r_2s_01_r_1+r_2≤ n_v-1/|τ(χ_v)|^2(K_v[n_v])·𝒥_v^(3)(r_1,r_2,t),
where we set k=n_v-r_1-r_2 and
𝒥_v^(3)(r_1,r_2,t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)+tϖ_v^-e_v(t)-ϖ_v^2k+r_2∈ϖ_v^n_v𝒪_vχ(α)χ(β).
Therefore, we have
ℰ^(3)_v(t)≪ n_v(1-e_v(t))^2q_v^-(2n_v+2s_0+3e_v(t))s_0·1_e_v(t)≤ -1· q_v^n_v-e_v(t)/2,
where the implied constant is absolute.
Then Proposition <ref> follows from (<ref>), (<ref>) and (<ref>).
§.§ Local Estimates: archimedean
Let v|∞. Define by
ℰ_v^†:=∫_F_v^×∫_F_vmax_t∈ F-{0,1}|f_v([ y_v x_v^-1t; x_vy_v 1 ])|dx_vd^×y_v.
By <cit.> we have the following estimate.
Let notation be as before. Let v|∞. Then
ℰ_v^†≪ T_v^ε,
where the implied constant depends on ε, F, c_v, and C_v defined in §<ref>.
§.§ Bounding Regular Orbital Integrals: Proof of Theorem <ref>
§.§.§ The support of the rationals t∈ F-{0,1}
Let notation be as before. Suppose t∈ F-{0,1}. Let f be the test function defined in §<ref>. Let
𝔛(𝔔,f):={ξ∈ F^×∩∏_v∈Σ_^-𝔭_v^-2(n_v-m_v)∏_v∤𝔔𝔭_v^m_v𝒪_F: |ξ|_v≪ 1, v|∞},
where the implied constant depends only on f_∞. Then the integral ∏_v∈Σ_Fℰ_v(t) converges absolutely and it vanishes unless t/t-1∈𝔛(𝔔,f).
Recall the definition (<ref>): for v∈Σ_F,
ℰ_v(t):=∫_F_v^×∫_F_v^×f_v([ y_v x_v^-1t; x_vy_v 1 ])χ_v(y_v)d^×y_vd^×x_v.
By Lemma <ref> the integral ℰ_v(t)=1 for all but finitely many v's. It then follows from Propositions <ref> and <ref>, and Lemma <ref> that ∏_v∈Σ_Fℰ_v(t) converges absolutely and it is vanishing unless
e_v(t)-e_v(t-1)≥ m_v, if v∤𝔔,
e_v(t)-e_v(t-1)≥ 0, if v∈Σ_^+.
e_v(t)-e_v(t-1)≥ -2(n_v-m_v), if v∈Σ_^-.
Since t/(t-1)∈ F-{0,1}, then (<ref>) follows from (<ref>).
§.§.§ Estimate of nonarchimedean integrals
Fix an ideal ℜ⊂𝒪_F with the property that e_v(ℜ)=m_v for v∤𝔔, and e_v(ℜ)=0 for all v<∞ and v|𝔔.
Fix an ideal 𝔑⊂𝒪_F with the property that e_v(𝔑)=n_v-m_v for v∈Σ_^-, and e_v(𝔑)=0 for all v<∞ and v∉Σ_^-.
For t∈ F-{0,1} with t/(t-1)∈𝔛(𝔔,f) (cf. (<ref>)), we may write
t/(t-1)=u, u∈ℜ𝔑^-2𝒪_F.
Then 1/(t-1)=u-1.
Let notation be as above. Let ℰ_v(t) be defined by (<ref>). Set ℰ_(t):=∏_v<∞|ℰ_v(t)|. Let t/(t-1)=u∈ℜ𝔑^-2𝒪_F be as in (<ref>). Then ℰ_(t) is
≪ (MQN_F(u(u-1)))^εM∏_v∈Σ_F,
v∤𝔔1_e_v(u)≥ m_v∏_v∈Σ_^-𝒥_v^-(u)∏_v∈Σ_^+𝒥_v^+(u),
where M=N_F(𝔐) (cf. (<ref>)), and
𝒥_v^-(u):= 1_e_v(u)≥ 0+∑_m_v-n_v≤ k≤ -1q_v^k1_e_v(u-1)=2k,
𝒥_v^+(u):= 1_e_v(u-1)≥ 11_m_v=n_v+1_e_v(u)≥ m_v-n_v.
Here the implied constant in (<ref>) depends on F and ε.
By Lemma <ref> we have ℰ_v(t)=1 if e_v(t)=e_v(1-t)=0, m_v=0, n_v=0, and v∤𝔇_F. There are finitely many remaining places
v∈𝒱:={v∈Σ_F,:v|𝔐𝔑 or e_v(t)≠ 0 or e_v(t-1)≠ 0}.
Let α denote the following expression:
∏_v∈𝒱∩Σ_^+n_v^2(|e_v(t)-e_v(t-1)|+1)^2∏_v∈𝒱-Σ_^+(1+|e_v(t)|+2|e_v(t-1)|)^2,
where the terms in the product dominate coefficients in Lemma <ref>, Propositions <ref> and <ref>. Using (<ref>), we observe that e_v(u)≥ -e_v(𝔔). Consequently, we have the estimate:
α≪ (MQ)^2ε· (N_F(u)N_F(u-1))^ε,
where the implied constants depends on ε. As a consequence, (<ref>) follows from Lemma <ref>, Propositions <ref> and <ref>.
Let x_∞=⊗_v|∞x_v∈ F_∞. For t∈ F-{0,1} with t/(t-1)∈𝔛(𝔔,f), parametrize t/(t-1) via (<ref>). Let
𝒞(x_∞):=∑_t∈ F-{0,1}, t/t-1=u∈𝔛(𝔔,f)
|t/t-1|_v≪ |x_v|_v, v|∞ℰ_(t).
Let notation be as before. Let x_∞∈ F_∞^×. Let 𝒞(x_∞) be defined by (<ref>). Then
𝒞(x_∞)≪_ε,F(MQ(1+|x_∞|_∞))^ε· |x_∞|_∞· Q·1_M≪ Q^2(M,Q) |x_∞|_∞,
where the implied constant depends on ε and F.
Note that u∈ℜ𝔑^-2𝒪_F-{0,1} and N_F(u)≪ |x_∞|_∞. Hence, N_F(ℜ𝔑^-2)≪ |x_∞|_∞, i.e., M/(M,Q)≪ Q^2|x_∞|_∞. By Lemma <ref>, we have
𝒞(x_∞)≪ (MQ(1+|x_∞|_∞))^ε𝒮(x_∞)·∏_v∈Σ_F,q_v^m_v.
where the auxiliary sum 𝒮(x_∞) is defined by
𝒮(x_∞):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔𝒮^+(u)𝒮^-(u).
Here the integral ideals ℜ and 𝔑 are defined in §<ref>, and
𝒮^+(u):= ∏_v∈Σ_^+[1_e_v(u-1)≥ 1·1_m_v=n_v+1_e_v(u)≥ m_v-n_v],
𝒮^-(u):= ∏_v∈Σ_^-[1_e_v(u)≥ 0+∑_m_v-n_v≤ k≤ -1q_v^k1_e_v(u-1)=2k].
We proceed to deal with 𝒮(x_∞). Let 𝔔^+:=∏_v∈Σ_^+𝔭_v and 𝔔^-:=∏_v∈Σ_^-𝔭_v. Expanding the products 𝒮^+(u)𝒮^-(u) we obtain
𝒮(x_∞)=∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∑_𝔟_3𝔟_4=𝔔^-𝒮(𝔞_1,𝔟_3),
where
𝒮(𝔞_1,𝔟_3):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔
e_v_1(u-1)≥ 1, v_1|𝔞_1
e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2
e_v_3(u)≥ 0, v_3|𝔟_3
∏_v_4|𝔟_4∑_m_v_4-n_v_4≤ k≤ -1q_v_4^k1_e_v_4(u-1)=2k.
Write 𝔟_4=𝔔^-𝔟_3^-1=∏_v∈𝒱_4𝔭_v, where 𝒱_4={v_1',⋯,v_l'} is a subset of Σ_^-. Denote by k=(k_1,⋯,k_l)∈ℤ^l. Then
∏_v_4|𝔟_4∑_m_v_4-n_v_4≤ k≤ -1q_v_4^k1_e_v_4(u-1)=2k=∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^k_j1_e_v_j'(u-1)=2k_j.
Therefore,
𝒮(x_∞)=∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^k_j·𝒮^†(x_∞),
where
𝒮^†(x_∞):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔
e_v_1(u-1)≥ 1, v_1|𝔞_1
e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2
e_v_3(u)≥ 0, v_3|𝔟_3
e_v_j'(u-1)=2k_j, 1≤ j≤ l
1.
By counting rational lattice points in a bounded region, we have
𝒮^†(x_∞)≪∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔
e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2
e_v_3(u)≥ 0, v_3|𝔟_3
e_v_j'(u)=2k_j
1≪ |x_∞|_∞∏_v∈Σ_F,
v∤𝔔q_v^-m_v∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2∏_j=1^lq_v_j'^-2k_j.
Therefore, 𝒮(x_∞) is majorized by
|x_∞|_∞∏_v∈Σ_F,
v∤𝔔1/q_v^m_v∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^l1/q_v_j'^k_j.
Notice that
∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2=∑_v|𝔔^+q_v_2^n_v_2-m_v_2∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_11≪ Q^ε∑_v|𝔔^+q_v_2^n_v_2-m_v_2,
and
∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^-k_j≪∏_v|𝔔^-q_v^n_v-m_v∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l1,
which is ≪ Q^ε∏_v|𝔔^-q_v^n_v-m_v. Therefore,
𝒮(x_∞)≪ |x_∞|_∞Q^ε∏_v∈Σ_F,
v∤𝔔q_v^-m_v∏_v|𝔔q_v^n_v-m_v.
Then (<ref>) follows from substituting (<ref>) into (<ref>).
§.§.§ Proof of Theorem <ref>
Recall the definition (<ref>) in §<ref>:
J^,2_,(f,0,χ)=∑_t∈ F-{0,1}∫_𝔸_F^×∫_𝔸_F^×f([ y x^-1t; xy 1 ])χ(y)d^×yd^×x.
So the regular orbital integrals J^,2_,(f,0,χ) is
≪∫_F_∞^×∫_F_∞^×∑_t∈ F-{0,1}
t/t-1∈𝔛(𝔔,f)ℰ_(t)|f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])|d^×y_∞d^×x_∞,
where ℰ_(t):=∏_v<∞|ℰ_v(t)|.
By the support of f_∞ (cf. (<ref>) in §<ref>), we have
f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])=0
unless y_∞≍ 1, |x_v|_v≪ 1, and |t/t-1|_v≪ |x_v|_v, for all v|∞. Write
t/(t-1)=u𝔑^-2ℜ
with u∈𝒪_F as in (<ref>). Then
J^,2_,(f,0,χ) is
≪∫_F_∞^×∫_1+o(1)1_|x_v|_v≪ 1
v|∞·𝒞(x_∞)·max_t∈𝔛(𝔔,f)|f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])|d^×y_∞d^×x_∞,
where 𝒞(x_∞) is defined by (<ref>). Note that |x_v|_v≪ 1 for v|∞, yielding that |x_∞|_∞≪ 1. Hence, we may replace 1_M≪ Q^2(M,Q) |x_∞|_∞ with 1_M≪ Q^2(M,Q) in Lemma <ref>. As a consequence, we have
J^,2_,(f,0,χ)≪_ε(MQ)^ε· Q·1_M≪ Q^2(M,Q)·∏_v|∞ℰ_v^†,
where ℰ_v^† is defined by (<ref>). By Lemma <ref>, the above bound becomes
J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where the implied constant depends on ε, F, c_v, and C_v, v|∞.
§ PROOF OF MAIN RESULTS
Recall the intrinsic data in §<ref>. Let F be a number field. Let χ=⊗_vχ_v be a primitive unitary Hecke character of F^×\𝔸_F^×.
§.§ The Spectral Side
Recall the lower bound of J_^,(f,0,χ) in §<ref>.
§.§ The Geometric Side
Recall the geometric side (<ref>) in §<ref>:
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
Let notation be as before. Then
J_^,(f,0,χ)≪ T^1/2+εM^1+ε+T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where the implied constant depends on ε, F, c_v, and C_v, v|∞ (cf. §<ref>).
By Propositions <ref> and <ref>, we have
J^_,(f,χ)+J^_,(f,χ)≪_ε M^1+εT^1/2+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. Moreover, by Theorem <ref> we have
J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q).
The estimate (<ref>) follows from the above inequalities.
§.§ Put It All Together: Proof of Main Results
Substituting Theorem <ref> and Proposition <ref> into the regularized relative trace formula J_^,(f,0,χ)=J_^,(f,0,χ) (cf. Corollary <ref> in §<ref>), we obtain the following.
Let the notation be as before. Denote by 𝒜_0(Π_∞,𝔐;χ_∞,ω) the set of cuspidal representations and 𝒳_0(Π_∞,𝔐;χ_∞,ω) the set of Hecke characters, as defined in §<ref>. Then
∑_π|L(1/2,π×χ)|^2≪ T^1+εM^1+εQ^ε+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where π∈𝒜_0(Π_∞,𝔐;χ_∞,ω), and
∑_η∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt ≪ T^1+εM^1+εQ^ε
+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω), and the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
Let notation be as before. Then
∑_π∈𝒜_0(M;χ_∞)
σ_π(𝔭)≥σ|L(1/2,π×χ)|^2≪ T^1-σ/2+ε· M^1-2σ+ε· Q^1-σ+ε.
If C_(χ)>1, then Theorem <ref> follows from (<ref>). In the case where C_(χ)=1, we replace χ with χχ_0, where χ_0 is a fixed Hecke character induced from a Dirichlet character with a fixed modulus, such as 3. Similarly, we replace π with π⊗χ_0. By applying Theorem <ref> to π⊗χ_0 and χχ_0, we obtain the same bound (with a different implied constant dependent on the modulus of χ_0) for the second term L(1/2,π×χ). Consequently, Theorem <ref> follows.
Let π=η⊞η. Then ω=η^2. By <cit.> there exists t_0∈ [2^-1exp(-3√(log C(ηχ))), exp(-3√(log C(ηχ)))] (which might depend on the character ηχ) such that
|L(1/2,ηχ)|≪exp(log^3/4C(ηχ))|L(1/2+it_0,ηχ)|,
where the implied constant depends only on F. Here C(χη)≪ M^1/2Q is the analytic conductor of ηχ. By <cit.> we have
∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt≫|L(1/2+it_0,ηχ)L(1/2+it_0,ωηχ)|^2/C(ηχ)^ε.
Since ω=η^2, then |L(1/2+it_0,ωηχ)|=|L(1/2+it_0,ηχ)|. So it follows from Theorem <ref> that, for χ∈𝒳_0(Π_∞,𝔐;χ_∞,ω),
|L(1/2+it_0,ηχ)|^4≪ T^1+εM^1+εQ^ε+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q).
Suppose η is primitive. Then C_(η)=M^1/2. It then follows from (<ref>) and (<ref>) that
L(1/2,ηχ)≪_η_∞,χ_∞ C_(η)^1/2+ε+C_(χ)^1/4+ε.
By symmetry we also have
L(1/2,ηχ)≪_η_∞,χ_∞ C_(η)^1/4+ε+C_(χ)^1/2+ε.
Hence the estimate (<ref>) holds.
§.§ Proof of Corollary <ref>
Let f∈ℱ_2k^new(N). By Hecke's theorem there exists a primitive quadratic character χ of conductor q ≪ kN^1+ε such that L(1/2,f×χ)≠ 0. Here the implied constant is absolute.
Let k∈{2,3,4,5,7}. Denote by 𝒩:=#{g∈ℱ_2k^(N): L(1/2,f×χ)L(1/2,g×χ)≠ 0}. Recall that (e.g., cf. <cit.>)
∑_g∈ℱ_2k^(N)L(1/2,g×χ)≫ N^1-ε.
Then by Cauchy-Schwarz inequality and Corollary <ref> we obtain
N^1-ε≪𝒩^1/2·[∑_g∈ℱ_2k^(N)|L(1/2,g×χ)|^2]^1/2≪𝒩^1/2· N^1/2+ε,
leading to (<ref>). Here the implied constant depends only on ε.
In the above proof, a crucial new ingredient is our Corollary <ref>, which effectively replaces the third moment estimate employed in <cit.>:
∑_g∈ℱ_2k^(N)L(1/2,g×χ)^3≪_k,ε(Nq)^1+ε.
It is worth noting that Corollary <ref>, given by
∑_g∈ℱ_k^(N)|L(1/2,g×χ)|^2≪ (kNq)^ε(kN+k^1/2q·1_N≪ q^2(N,q)),
provides the average Lindelöf estimate in the N-aspect when q≪_k N^1+ε. However, (<ref>) does not yield this bound when q is large.
§ HYBRID SUBCONVEXITY: PROOF OF THEOREM <REF>
In this section, we will establish the validity of Theorem <ref> by presenting a proof that draws upon similar techniques to those used in the proof of Theorem <ref> (cf. §<ref>). However, instead of relying on Theorem <ref> in §<ref>, we will utilize the relative trace formula (i.e. Theorem <ref>) from §<ref>. Notably, the proof is simplified by not requiring amplification, although the overall methodology remains similar.
§.§ Notation
Recall the data in Theorem <ref>: we let
* χ=⊗_vχ_v be a Hecke character of 𝔸_F^×/F^×, and Q:=C_(χ) ;
* 𝔐 be an integral ideal of norm |𝔐|:=N_F(𝔐);
* 𝒜_0^χ_∞(T;𝔐) be the set of cuspidal automorphic representations π=⊗_vπ_v of PGL(2)/F such that π_=⊗_v<∞π_v has arithmetic conductor dividing 𝔐, and π_v⊗χ_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>, where T=∏_v|∞ T_v.
Note that Weyl law yields #𝒜_0^χ_∞(T;𝔐)=(T|𝔐|)^1+o(1).
§.§.§ Choice of Test Functions
Despite potential ambiguity, we will continue to use the notation f=⊗_vf_v to refer to the test function, which is defined as follows.
* Let f_∞ be defined as in §<ref>.
* For v∈Σ_F,, let m_v'=e_v(𝔐), and n_v=n_v, the local exponent of χ_v (cf. §<ref>). Define a function on G(F_v), supported on Z(F_v)\ K_v[m_v'], by
f_v(z_vk_v;1)=(K_v[m_v'])^-1,
where K_v[m_v'] is the image of K_v[m_v'] in G(F_v). For g_v∈ G(F_v), define by
f_v(g_v)=1/|τ(χ_v)|^2∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∑_β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(α)χ_v(β)f_v(g_α,β,v;1),
where
τ(χ_v)=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×ψ_v(αϖ_v^-n_v)χ_v(α)
is the Gauss sum relative to the additive character ψ_v, and
g_α,β,v:=[ 1 αϖ_v^-n_v; 1 ]g_v[ 1 βϖ_v^-n_v; 1 ].
Note that n_v=0 for almost all v∈Σ_F,. Hence, for all but finitely many v∈Σ_F,, the test function f_v(·)=f_v(·;1) (cf. (<ref>)) supports in Z(F_v)\ K_v[m_v].
* Take f=⊗_v≤∞f_v as our test function into Theorem <ref>:
J_^,(f,0,χ)=J_^,(f,0,χ),
where 0=(s_0,s_0), with s_0:=2^-1exp(-2√(log C(π×χ))) (cf. §<ref>).
§.§ The Spectral Side
Similar to Theorem <ref>, we have
Let notation be as before. Then
𝒥_^(α,χ)≫_εT^-1/2-ε(|𝔐|Q)^-ε∑_π∈𝒜_0^χ_∞(T;𝔐)|L(1/2,π×χ)|^2,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
§.§ The Geometric Side
In this section we handle the geometric side
J_^,(f,0,χ)= J^_,(f,0,χ)+J^,+_,(f,0,χ)+J^,∧_,(f,0,χ)
+J^,_,(f,0,χ)+J^,2_,(f,0,χ).
§.§.§ Bounds of Irregular Orbital Integrals
The estimates from §<ref> and §<ref> remain valid with T≍ T, 𝒩_f replaced by 1, and [M,M'Q] replaced with |𝔐|. Specifically, Propositions <ref> and <ref>, and Lemmas <ref>, <ref> and <ref> become:
Let notation be as before. Then
J^_,(f,0,χ)≪ |𝔐|^1+εT^1/2+ε,
J^,∧_,(f,0,χ)≪ s_0^-1|𝔐|^1+εT^1/2+ε,
J^,+_,(f,0,χ)+J^,,1_,(f,0,χ)≪ T^ε|𝔐|^ε,
J^,,2_,(f,0,χ)≪ s_0^-1|𝔐|^1+εT^1/4+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
§.§.§ Bounds of Regular Orbital Integrals
We need the following analogue of Theorem <ref> in §<ref>.
Let notation be as before. Then
J^,2_,(f,0,χ)≪ T^ε|𝔐|^εQ^1+ε,
where the implied constant depends on ε, F, c_v, and C_v, v|∞.
We also need the following analogue of Proposition <ref> in §<ref>.
Let v|𝔔. Then
ℰ_v(t)≪
n_v q_v^n_v/2-e_v(t)s_0 if e_v(t)≤ -1,
κ_vq_v^m_v'+e_v(t)/2 if e_v(t)≥ m_v'-n_v,e_v(t-1)=0,
0 otherwise,
where κ_v=n_v(e_v(t)+n_v-m_v'+1), and the implied constant is absolute.
Following the proof of Proposition <ref>, the local integral ℰ_v(t) becomes
1/|τ(χ_v)|^2∑_α,βχ(α)χ(β)∑_r_1, r_2∈ℤ q_v^-2r_1s_0-r_2s_01_Y_α,β,r_1,r_2,t∈ Z(F_v)K_v[m_v]f_v(Y_α,β,r_1,r_2,t;1),
where f_v(·;1) is defined by (<ref>), and Y_α,β,r_1,r_2,t is defined by
[ ϖ_v^r_2+αϖ_v^r_1+r_2-n_v (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v; ϖ_v^r_1+r_2 1+βϖ_v^r_1+r_2-n_v ].
Note that 1_Y_α,β,r_1,r_2,t∈ Z(F_v)K_v[m_v]≠ 0 unless
ϖ_v^kY_α,β,r_1,r_2,t∈ K_v[m_v']
for some k∈ℤ. Similar to (<ref>), the constraint (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v'
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
Then the estimate of ℰ_v(t) boils down to Proposition <ref> (with m_v replaced by m_v') if m_v'≥ n_v.
Now we assume that m_v'<n_v.
* Suppose k≥ 1. Then it follows from ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× that k+r_1+r_2=n_v, which yields that r_1=n_v+k+e_v(1-t). From ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^× we have r_1+k≥ 0, which, in conjunction with the first constraint, leads to k+r_2≥ 0. Therefore,
1≤ k≤ -e_v(t-1)
r_1=n_v+k+e_v(1-t)
r_2=-2k-e_v(1-t).
* Suppose k≤ -1. Then it follows from ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× that r_1+r_2=n_v, which contradicts k+r_1+r_2≥ m_v'. Hence, we must have k≤ 0.
r_1=n_v, r_2=0,
m_v'-n_v≤ k≤ -1,
e_v(1-t)=-2k,
α≡ -1ϖ_v^-k
β≡ -1ϖ_v^-k
Moreover, β is uniquely determined by α modulo ϖ_v^n_v.
* Suppose k=0 in (<ref>), which implies that
r_2+e_v(1-t)=0
r_1+r_2≥ m_v'
min{r_2, r_1+r_2-n_v}=0
(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v.
* Suppose r_2≥ 1. Then r_1+r_2=n_v. Since e_v(t)-r_1≥ -n_v, then e_v(t)+r_2≥ 0. So e_v(t)-e_v(1-t)≥ 0. In this case we have r_2=-e_v(t). Therefore, the contribution to ℰ_v(t) from this case is
ℰ^(1)_v(t):=q_v^-2n_vs_0-e_v(t)s_01_e_v(t)≤ -1/|τ(χ_v)|^2(K_v[m_v'])∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β).
Write t=ϖ_v^e_v(t)γ under the embedding F^×↪ F_v^×, where γ∈𝒪_v^×. Then the last sum over α and β becomes
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α )(β+1)≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α)χ(β),
which, after a change of variables, is equal to
𝒥_v^(1)(t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α-ϖ_v^-e_v(t))χ(β-1).
As a special case of Proposition 2 on p.71 of <cit.> we have 𝒥_v^(1)(t)≪ n_vq_v^n_v/2, where the implied constant is absolute. Hence,
ℰ^(1)_v(t)≪ n_vq_v^m_v'-n_v/2-2n_vs_0-e_v(t)s_01_m_v'≤ n_v1_e_v(t)≤ -1.
* Suppose r_2=0. Then e_v(1-t)=0, r_1≥ n_v, and e_v(t)≥ r_1-n_v.
The contribution to ℰ_v(t) from this case is
ℰ^(2)_v(t):=∑_r_1=n_v^e_v(t)+n_vq_v^-2r_1s_01_e_v(t)≥ r_1-n_v/|τ(χ_v)|^2(K_v[m_v'])·𝒥_v^(2)(r_1,t),
where
𝒥_v^(2)(r_1,t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(1+αϖ_v^r_1-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β).
By Lemma <ref> below (in conjunction with r_1≥ n_v) the sum ℰ^(2)_v(t) is
≪ n_v(e_v(t)+1)1_e_v(t)≥ e_v(1-t)=0/|τ(χ_v)|^2(K_v[m_v'])· q_v^n_v+e_v(t)/2.
Since |τ(χ_v)|^2(K_v[m_v])≫ q_v^n_v-m_v', then
ℰ^(2)_v(t)≪ n_v^2(e_v(t)+1)1_e_v(t)≥ e_v(1-t)=0q_v^m_v'-n_v/2+e_v(t)/2.
Then Proposition <ref> follows from (<ref>) and (<ref>).
|
http://arxiv.org/abs/2307.03983v1 | 20230708141424 | Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission | [
"Yanshi Sun",
"Wei Cao",
"Momiao Zhou",
"Zhiguo Ding"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission
Yanshi Sun, Member, IEEE, Wei Cao, Momiao Zhou, Member, IEEE, Zhiguo Ding, Fellow, IEEE
Y. Sun, Wei Cao and M. Zhou are with the School of Computer Science and Information
Engineering, Hefei University of Technology, Hefei, 230009, China. (email: [email protected], [email protected] and [email protected]).
Z. Ding is with Department of Electrical Engineering and Computer
Science, Khalifa University, Abu Dhabi, UAE, and Department of Electrical
and Electronic Engineering, University of Manchester, Manchester, UK. (email: [email protected]).
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The aim of this paper is to reveal the importance of hybrid successive interference cancellation (SIC) and power adaptation (PA) for improving transmission robustness of uplink non-orthogonal multiple access (NOMA).
Particularly, a cognitive radio inspired uplink NOMA communication scenario is considered, where one primary user is allocated one dedicated resource block, while M secondary users compete with each other to be opportunistically served by using the same resource block of the primary user. Two novel schemes are proposed for the considered scenario, namely hybrid SIC with PA (HSIC-PA) scheme and fixed SIC with PA (FSIC-PA) scheme. Both schemes can ensure that the secondary users are served without degrading the transmission reliability of the primary user compared to conventional orthogonal multiple access (OMA) based schemes. Rigorous analytical results are presented to evaluate the performance of the proposed two schemes. It is shown that both schemes can avoid outage probability error floors without any constraints on users' target rates in the high SNR regime. Furthermore, it is shown that the diversity gain achieved by the HSIC-PA scheme is M, while that of the FSIC-PA scheme is only 1. Numerical results are provided to verify the developed analytical results and also demonstrate the superior performance achieved by the proposed schemes by comparing with the existing HSIC without PA (HSIC-NPA) scheme.
The presented simulation results also show that HSIC-PA scheme performs the best among the three schemes, which indicates the importance of the combination of HSIC and PA for improving transmission robustness.
Non-orthogonal multiple access (NOMA), hybrid successive interference cancellation (HSIC), power adaptation, outage probability.
§ INTRODUCTION
Non-orthogonal multiple access (NOMA) has attracted extensive research interest during the past few years, and has been recognized as an important potential enabling technology for future wireless communication systems <cit.>. Compared to conventional orthogonal multiple access (OMA), where one channel resource block can be accessed by a single user only, the key appealing feature of NOMA is that allowing multiple users to simultaneously access the same channel resource block is encouraged <cit.>. Thus, by applying NOMA, larger connectivity and higher spectral efficiency can be obtained.
Existing research works show that NOMA can be compatible with many other advanced technologies, such as multiple input multiple output (MIMO) <cit.>, millimeter wave communications <cit.>, Terahertz communications <cit.>, reconfigurable intelligent surfaces (RIS) <cit.>, satellite communications <cit.> and so on.
Since NOMA allows multiple users to simultaneously occupy one channel resource block, how to address inter-user interference is one of key issues in NOMA communication systems. To this end, a widely used method in NOMA to address inter-user interference is successive interference cancellation (SIC), where users' signals are decoded in a successive manner <cit.>. Due to the error propagation nature of SIC, how to order users plays a very important role in the performance of SIC. Conventionally, there are two main types of methods for determining the decoding order of users in NOMA. One is known as the channel state information (CSI) based SIC method, where users are ordered according to the quality of their channels <cit.>. The other is known as the quality of service (QoS) based SIC method, where the signals for the users with more stringent QoS are decoded first, while other users are often opportunistically served and their signals are decoded later <cit.>. Note that, most existing works on NOMA carried out a prefixed SIC decoding order according to either the above two aforementioned criteria. Unfortunately, a very dispiriting phenomenon exists in the NOMA schemes based on the aforementioned CSI or QoS based methods. Specifically, the outage probability achieved by these schemes suffers from severe error floors, which means that the outage probability achieved by
a certain user does not approach zero as the SNR goes to infinity. Thus, transmission reliability cannot be guaranteed, which significantly limits the application of NOMA in many practical scenarios.
It was thought that such outage probability error floors are unavoidable in the implementation of NOMA, and that swapping SIC decoding orders dynamically cannot yield a significant performance gain <cit.>.
Motivated by the error floor issue, a new design of SIC namely hybrid SIC (HSIC) was initially proposed for cognitive radio inspired uplink NOMA by <cit.>. In the proposed HSIC scheme, the decoding orders of users are dynamically determined according to the relationship between the instantaneous channel conditions and users' target rates. <cit.> show that the proposed HSIC scheme can avoid outage probability error floors, under some constraints on users' target rates. The most important contributions of the series studies in <cit.> are two folds.
First, <cit.> showed that it is possible to avoid outage error floors, at least under some specific conditions. Second, <cit.> indicated the importance of introducing HSIC to improve transmission robustness of NOMA.
However, as mentioned above, the proposed scheme in <cit.> can only avoid outage probability error floors under some stringent conditions on users' target rates, which may not be met in many realistic scenarios. Thus, it is natural to ask the following two questions.
The first question is whether it is possible to avoid outage probability error floors without any constraints on users' rates. And the second question is whether it is necessary to apply HSIC to avoid outage probability error floors.
This paper aims to answer the two aforementioned questions, and investigate the impact of the combination of HSIC and power adaptation (PA) on improving the transmission robustness in NOMA. Specifically, a cognitive radio inspired uplink NOMA scenario is considered. In the considered scenario, one primary user is allocated one dedicated channel resource block, while there are M secondary users who compete with each other to opportunistically share the primary user's resource block without degrading the outage performance of the primary user. Two new designs of NOMA schemes, namely HSIC with PA (HSIC-PA) and fixed SIC with PA (FSIC-PA) are proposed. Both schemes can avoid outage probability error floors without any constraints on users' target rates. The main contributions of this paper are listed as follows.
* Two novel designs of uplink NOMA schemes are proposed, namely HSIC-PA and FSIC-PA[Note that the
HSIC-PA scheme extends the scheme proposed in our previous work <cit.> where only two users are considered, while the FSIC-PA scheme has not been proposed before, to the best of our knowledge.]. In the proposed HSIC-PA scheme, the decoding order of the secondary user can be dynamically adjusted according to the channel conditions, while in the proposed FSIC-PA scheme the secondary user is always decoded at the second stage of SIC. By rigorous derivation, closed-form expressions for the outage probabilities achieved by the proposed two schemes are obtained.
* Based on the obtained expressions for the outage probabilities, asymptotic analysis in the high SNR regime is further developed to gain more insights into the proposed two schemes. It is shown that both HSIC-PA scheme and FSIC-PA scheme can avoid outage probability error floors without any constraints on users' target rates. The fact that the proposed FSIC-PA scheme can avoid error floors indicates that HSIC is not necessary to avoid error floors. Furthermore, the diversity gains achieved the proposed two schemes are also provided, respectively. Interestingly, the diversity gain achieved by HSIC-PA scheme is M, whereas that achieved by FSIC-PA scheme is only 1.
* Numerical results are presented to verify the accuracy of the developed analytical results and demonstrate the superior performance of the proposed HSIC-PA scheme and FSIC-PA scheme, by comparing with the benchmark scheme termed HSIC-NPA proposed in <cit.>. In terms of outage probability and ergodic rate, it is shown that FSIC-PA scheme performs better than HSIC-NPA scheme in the high SNR regime, but worse in the low SNR regime. Besides, HSIC-PA scheme performs the best among three schemes at all SNRs in terms of outage probability and ergodic rate, which shows the power of the combination of HSIC and PA in the design of uplink NOMA transmissions. In terms of power consumption, both the proposed HSIC-PA and FSIC-PA schemes consume less power than the existing HSIC-NPA scheme, whereas HSIC-PA scheme is more power-consuming than FSIC-PA scheme.
§ SYSTEM MODEL
Consider an uplink NOMA communication scenario with one base station (BS), one primary user U_0 and M
secondary users U_m, 1≤ m≤ M. Note that, in the considered scenario, ensuring the transmission reliability of U_0 is of the high priority, which has a target data rate denoted by R_0. In conventional OMA based schemes, the primary user is allocated with a dedicated resource block, which cannot be accessed by other users. While in the considered NOMA schemes of this paper, M secondary users compete with each other to opportunistically access the channel resource block which is allocated to the primary user. Note that allowing secondary users to share the channel resource block of the primary user must be done in such a way to ensure that the QoS of the primary user U_0 is not degraded.
The channel gain of the primary user U_0 is denoted by g, and the channel gains of the secondary users are denoted by h_m, 1≤ m≤ M. In this paper, g and h_m are modeled as the normalized Rayleigh fading gains, which means that g and h_m are independent and identically distributed (i.i.d) circular symmetric complex Gaussian (CSCG) random variables with zero mean and unit variance, i.e., g∼𝒞𝒩(0,1) and h_m ∼𝒞𝒩(0,1). The transmit power of the primary user U_0 is denoted by P_0. The transmit power of the secondary user U_m is denoted by β P_s, where
β∈ [0,1 ] is the adjustable power adaptation coefficient of U_m, and P_s is the maximum power of U_m. Without loss of generality, the background noise power is also assumed to be normalized throughout the paper.
In the remainder of the paper, the M secondary users are ordered according to their channel gains:
| h_1 | ^2< ⋯ < | h_M|^2.
In this paper, two novel NOMA schemes are proposed, namely HSIC-PA scheme and FSIC-PA scheme.
It will be shown that both schemes can avoid outage probability error floors.
For each scheme, in each period of transmission, only the secondary user which can achieve the largest instantaneous achievable rate is allowed to transmit signal by sharing the primary user's resource block.
The proposed two schemes are described in the following two subsections.
§.§ HSIC-PA Scheme
To begin with, define an interference threshold denoted by τ (g) as follows:
τ(g)= max{0, P_0|g|^2/(2^R_0-1) - 1}.
Note that τ(g) can be interpreted as the maximum interference, with which U_0 can
still achieve the same outage performance as in OMA where the resource block would be solely occupied by U_0. For more details on τ(g), please refer to <cit.>.
Defining ϵ_0=2^R_0-1 and α_0=ϵ_0/P_0, we have
τ(g)=
|g|^2α_0^-1-1 , |g|^2>α_0,
0 , |g|^2<α_0.
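As a quick illustration, the threshold can be computed directly from P_0, R_0 and |g|^2; the following minimal Python sketch (function and variable names are ours, not from the paper, and the parameter values are illustrative) evaluates τ(g) for an arbitrary channel realization:

def interference_threshold(g_abs2, P0, R0):
    # maximum interference tau(g) the primary user can tolerate while still
    # meeting its target rate R0 (in bits per channel use), i.e.
    # max{0, P0*|g|^2/(2^R0 - 1) - 1}
    eps0 = 2.0 ** R0 - 1.0
    return max(0.0, P0 * g_abs2 / eps0 - 1.0)

# example: P0 at 20 dB, R0 = 1 BPCU, |g|^2 = 0.5
print(interference_threshold(0.5, 10 ** (20 / 10), 1.0))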
For each secondary user U_m, its instantaneous achievable rate is determined by how its channel
gain compares to τ (g), which can be classified into the following two types:
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, putting U_m at the second stage of SIC
can yield a larger data rate compared to putting U_m at the first stage of SIC, and will not prevent the primary user from successfully decoding its signal. Thus, it is favorable to decode U_m's signal at the second stage of SIC, and the achievable rate of U_m is given by
R_I^m=log(1+P_s|h_m|^2),
which is the same as in HSIC-NPA scheme proposed in <cit.>.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, the benchmark scheme termed HSIC-NPA which is proposed in <cit.>
only considers the case where β is set to be 1. Thus, in order to avoid degrading the QoS of U_0, U_m's signal can only be decoded at the first stage of SIC in HSIC-NPA, yielding the following achievable data rate of U_m:
R_II,1^m=log(1+P_s|h_m|^2/(P_0|g|^2+1)).
Note that the drawback of putting U_m at the first stage of SIC is that, when P_0|g|^2 is large, R_II,1^m might still be small even with a large P_s | h_m |^2.
To this end, the proposed HSIC-PA scheme offers an additional choice where β can be set to be less than 1 so that β P_s|h_m|^2=τ(g), which can provide an opportunity to yield a larger achievable rate. As a result, U_m's signal can be decoded at the second stage of SIC, yielding the following achievable data rate of U_m:
R_II,2^m=log(1+τ(g)).
Thus, in the proposed HSIC-PA scheme, when P_s | h_m |^2 > τ(g), the achievable data rate of U_m is given by:
R_II^m=max{R_II,1^m,R_II,2^m}.
According to the above discussions, the achievable data rate of U_m in HSIC-PA scheme can be concluded as:
R^m=
R_I^m, P_s|h_m|^2 ≤τ(g)
R_II^m, P_s|h_m|^2 >τ(g).
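To make the rate-selection rule concrete, the following illustrative Python sketch (our own naming, not code from the paper) evaluates the HSIC-PA achievable rate of one secondary user for a given channel realization; in each transmission block, the served user is the one maximizing this quantity over the M candidates:

import numpy as np

def hsic_pa_rate(h_abs2, g_abs2, P0, Ps, R0):
    # HSIC-PA rule: a type I user is decoded at the second SIC stage with
    # full power; a type II user takes the better of (i) first-stage decoding
    # with full power and (ii) second-stage decoding with beta chosen so that
    # beta*Ps*|h|^2 = tau(g)
    eps0 = 2.0 ** R0 - 1.0
    tau = max(0.0, P0 * g_abs2 / eps0 - 1.0)
    if Ps * h_abs2 <= tau:                                   # type I
        return np.log2(1.0 + Ps * h_abs2)
    r_first = np.log2(1.0 + Ps * h_abs2 / (P0 * g_abs2 + 1.0))
    r_second = np.log2(1.0 + tau)                            # power scaled down
    return max(r_first, r_second)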
§.§ FSIC-PA Scheme
Another scheme termed FSIC-PA is proposed in this subsection.
Note that in HSIC-PA scheme, the secondary user's signal can be decoded either at the first or second stage of SIC. However, in FSIC-PA scheme, its signal can only be decoded at the second stage of SIC.
In FSIC-PA scheme, for each secondary user U_m, its instantaneous achievable rate can also be determined by considering the following two cases as in the previous subsection.
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, the decoding strategy is as same as in the HSIC-NPA and the proposed HSIC-PA scheme, where U_m is decoded at the second stage of SIC. Thus, the achievable data rate of U_m is R̂^m_I =log(1+P_s|h_m|^2), since the interference from U_0 can be removed by SIC.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, in the proposed FSIC-PA scheme, U_m can only be decoded at the second stage of SIC. To carry out this strategy, β is set to be less than 1 so that β P_s|h_m|^2=τ(g). Thus, the achievable data rate of U_m for type II is R̂_II^m=log(1+τ(g)).
By concluding the above two cases, the achievable data rate of U_m in the FSIC-PA scheme can be expressed as:
R̂^m=R̂^m_I, P_s | h_ m |^2 ≤τ(g)
R̂^m_II, P_s | h_ m |^2 >τ(g).
Note that, the proposed HSIC-PA and FSIC-PA schemes can ensure that the outage performance of the primary user is the same as that in the OMA scheme. Because the use of NOMA is transparent to the primary user, this paper focuses on the performance of the opportunistically served secondary users.
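For comparison, a corresponding sketch of the FSIC-PA rate rule (again with our own illustrative naming, not the authors' code) differs only in that the type II branch never falls back to first-stage decoding:

import numpy as np

def fsic_pa_rate(h_abs2, g_abs2, P0, Ps, R0):
    # FSIC-PA rule: the secondary user is always decoded at the second SIC
    # stage, so a type II user is capped at log2(1 + tau(g)) by scaling its
    # transmit power down to tau(g)
    eps0 = 2.0 ** R0 - 1.0
    tau = max(0.0, P0 * g_abs2 / eps0 - 1.0)
    if Ps * h_abs2 <= tau:                                   # type I
        return np.log2(1.0 + Ps * h_abs2)
    return np.log2(1.0 + tau)                                # type II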
§ PERFORMANCE ANALYSIS ON HSIC-PA SCHEME AND FSIC-PA SCHEME
In this section, the closed-form expressions for the outage probabilities of the served secondary user achieved by the proposed two schemes will be provided. Furthermore, asymptotic analysis for the outage probabilities will be presented, which shows that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors without any constraints on users' target rates. Besides, rigorous comparisons between the proposed HSIC-PA/FSIC-PA scheme with the existing HSIC-NPA scheme will be carried out.
§.§ Outage probability achieved by HSIC-PA scheme
This subsection provides the exact and asymptotic expressions for the overall outage probability
of the served secondary users achieved by the proposed HSIC-PA scheme. Besides, the diversity gain <cit.> achieved by HSIC-PA is also provided.
Assume that all the secondary users have the same target rate, denoted by R_s. The overall outage probability achieved by the served secondary users in HSIC-PA is given by:
P_out=Pr(max{R^m, 1≤ m≤ M}<R_s).
For the ease of characterizing the outage probability P_out, it is helpful to define the event E_m, which denotes the event that there are m secondary users belonging to type I. Particularly, E_m can be expressed as follows:
E_m={ |h_m |^2< τ (g)/P_s, | h_m+1 | ^2>τ (g)/P_s},
1≤ m≤ M-1,
{|h_1|^2 > τ (g)/P_s}, m=0,
{|h_M|^2 < τ (g)/P_s}, m=M,
where the extreme cases E_0 and E_M denote the events where there is no type I secondary users and all the secondary users belong to type I, respectively.
It is shown that the expression of P_out can be divided into four parts, as highlighted in the following lemma.
For ease of calculation, P_out can be further simplified as:
P_out= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h_k |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h_k |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2
+ P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M + P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Please refer to Appendix A.
By deriving the expressions of Q̃_1, Q̃_2, Q_M and Q_M+1 as shown in Appendix B, the expression for the overall outage probability of the admitted secondary users in HSIC-PA scheme can be obtained as shown in the following theorem.
The overall outage probability P_out of the admitted secondary users in HSIC-PA can be expressed as follows:
P_out=∑_i=0^M([ M; i ])(-1)^i e^-iα_s(1-e^-(α_sP_0i+1)α_1)/(α_sP_0i+1)+(1-e^-α_s)^Me^-α_1,
where ϵ_s=2^R_s-1,
α_s=ϵ_s/P_s,
α_1=(1+ϵ_s)α_0.
Please refer to Appendix B.
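Theorem 1 can also be checked numerically against a direct Monte Carlo simulation of the rate rules in Section II. The sketch below (illustrative Python; the function names and parameter values are ours, not taken from the paper) draws |g|^2 and |h_m|^2 as unit-mean exponentials and compares the empirical outage probability with the closed form:

import numpy as np
from math import comb, exp

def theorem1_pout(M, P0, Ps, R0, Rs):
    # closed-form overall outage probability of HSIC-PA (Theorem 1)
    eps0, eps_s = 2.0 ** R0 - 1.0, 2.0 ** Rs - 1.0
    a0, a_s = eps0 / P0, eps_s / Ps
    a1 = (1.0 + eps_s) * a0
    s = sum(comb(M, i) * (-1) ** i * exp(-i * a_s)
            * (1.0 - exp(-(a_s * P0 * i + 1.0) * a1)) / (a_s * P0 * i + 1.0)
            for i in range(M + 1))
    return s + (1.0 - exp(-a_s)) ** M * exp(-a1)

def simulated_pout(M, P0, Ps, R0, Rs, trials=200_000, seed=0):
    # Monte Carlo estimate obtained by simulating the HSIC-PA rate rule directly
    rng = np.random.default_rng(seed)
    eps0 = 2.0 ** R0 - 1.0
    g2 = rng.exponential(size=trials)                 # |g|^2
    h2 = rng.exponential(size=(trials, M))            # |h_m|^2
    tau = np.maximum(0.0, P0 * g2 / eps0 - 1.0)[:, None]
    snr = np.where(Ps * h2 <= tau, Ps * h2,
                   np.maximum(Ps * h2 / (P0 * g2[:, None] + 1.0), tau))
    return (np.log2(1.0 + snr).max(axis=1) < Rs).mean()

P = 10 ** (15 / 10)                                   # P0 = Ps at 15 dB (illustrative)
print(theorem1_pout(3, P, P, 1.0, 2.0), simulated_pout(3, P, P, 1.0, 2.0))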
Based on Theorem 1, the asymptotic expression for P_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in HSIC-PA can be approximated as follows:
P_out≈ϵ_s^M/P_s^MP_0∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2+ϵ_s^M/P_s^M.
Please refer to Appendix C.
Further, it is straightforward that the first two terms of (<ref>) can be omitted in the high SNR regime, yielding a more simplified expression for P_out, as highlighted in the following corollary.
At high SNR, i.e., P_0=P_s→∞,
the approximation of P_out shown in (<ref>) can be further approximated as follows:
P_out≈ϵ_s^M/P_s^M.
Remark 1. Note that, the existing HSIC-NPA scheme can only avoid outage probability error floors under the constraint that ϵ_0ϵ_s≤ 1, which means that the feasible target rate for reliable transmission of the secondary users is primarily restricted by that of the primary user.
However, from the results shown in Corollary 2, it can be easily concluded that the outage probability error floor can be avoided by HSIC-PA scheme without any constraints on the users' target rates. Hence, the first question raised in Section I can be answered with the answer that it is possible to avoid outage probability error floors without any constraints on users' target rates.
Remark 2. In wireless communications, diversity gain is usually used as an important performance metric to measure how fast the outage probability decreases as transmit power increases <cit.>. It denotes the asymptotic scaling law of the outage probability to the transmit SNR. Specifically, the diversity gain, say d, achieved by HSIC-PA is defined as:
d=-lim_P_s→∞log P_out/log P_s
Based on the results shown in Corollary 2, it can be straightforwardly obtained that
d=M. Therefore, the diversity gain achieved by the HSIC-PA scheme is M, which is exactly the number of the secondary users. Thus, multi-user diversity gain can be fully utilized by the proposed HSIC-PA scheme, which means increasing the number of secondary users is helpful to reduce the overall outage probability.
From the perspective of diversity gain, the difference between the HSIC-NPA scheme and the HSIC-PA scheme can also be revealed. Recall that the diversity gain achieved by HSIC-NPA is also M when ϵ_0ϵ_s≤1, otherwise a diversity gain of zero is realized.
§.§ Outage probability achieved by FSIC-PA scheme
This subsection provides the exact expression for the overall outage probability
of the served secondary users in the proposed FSIC-PA scheme. Asymptotic analysis for the outage probability is also provided.
For the FSIC-PA scheme, the overall outage probability achieved by the served secondary users is defined as:
P̂_out=Pr(max{R̂^m, 1≤ m≤ M}<R_s).
The following theorem provides the closed-form expression for the outage probability achieved by the FSIC-PA scheme.
The overall outage probability P̂_out of the served secondary users in FSIC-PA can be expressed as follows:
P̂_out=1-e^-α_1+(1-e^-α_s)^Me^-α_1.
Please refer to Appendix D.
Based on Theorem 2, asymptotic expression for P̂_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in the FSIC-PA scheme can be approximated as follows:
P̂_out≈ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0.
By applying Taylor expansion 1-e^-x≈ x (x→ 0), the expression in (<ref>) can be further approximated as follows:
P̂_out≈ α_1+α_s^M-α_s^Mα_1
= ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0,
and the proof is complete.
Remark 3. From Corollary 3, it can be easily observed that the proposed FSIC-PA scheme can also avoid outage probability error floors without any constraints on the users' target rates. At this point, the second question raised in Section I can be answered with the answer that HSIC is not the necessary condition to avoid outage probability error floors.
Remark 4. It is also interesting to investigate the diversity gain achieved by the FSIC-PA scheme, which is defined as:
d̂=-lim_P_s→∞logP̂_out/log P_s.
According to Corollary 3, it can be straightforwardly obtained that d̂=1. Thus,
the multi-user diversity gain cannot be obtained by FSIC-PA scheme.
The above two remarks indicate that even though HSIC is not the necessary strategy to avoid the outage probability error floor, its combination with PA is beneficial for improving transmission robustness.
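The contrast between the two diversity gains can also be seen numerically from the closed forms in Theorems 1 and 2: over one decade of SNR, the log-log slope of the outage probability approaches -M for HSIC-PA and -1 for FSIC-PA. The short Python sketch below restates both closed forms so that it is self-contained; all parameter values are illustrative choices of ours:

from math import comb, exp, log10

def pout_hsic_pa(M, P, R0, Rs):                 # Theorem 1 with P0 = Ps = P
    eps0, eps_s = 2.0 ** R0 - 1.0, 2.0 ** Rs - 1.0
    a0, a_s = eps0 / P, eps_s / P
    a1 = (1.0 + eps_s) * a0
    s = sum(comb(M, i) * (-1) ** i * exp(-i * a_s)
            * (1.0 - exp(-(a_s * P * i + 1.0) * a1)) / (a_s * P * i + 1.0)
            for i in range(M + 1))
    return s + (1.0 - exp(-a_s)) ** M * exp(-a1)

def pout_fsic_pa(M, P, R0, Rs):                 # Theorem 2 with P0 = Ps = P
    eps0, eps_s = 2.0 ** R0 - 1.0, 2.0 ** Rs - 1.0
    a_s = eps_s / P
    a1 = (1.0 + eps_s) * eps0 / P
    return 1.0 - exp(-a1) + (1.0 - exp(-a_s)) ** M * exp(-a1)

M, R0, Rs = 3, 1.0, 1.0
P_lo, P_hi = 10.0 ** 4, 10.0 ** 5               # 40 dB and 50 dB: one decade apart
for name, f in (("HSIC-PA", pout_hsic_pa), ("FSIC-PA", pout_fsic_pa)):
    slope = log10(f(M, P_hi, R0, Rs)) - log10(f(M, P_lo, R0, Rs))
    print(name, "diversity gain ~", -slope)     # expect about M and about 1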
§.§ Comparisons between HSIC-PA/FSIC-PA scheme with HSIC-NPA scheme
In this section, more detailed comparisons of the proposed two schemes with the benchmark HSIC-NPA scheme are provided. Note that, if the served secondary user belongs to type I, the three schemes, i.e., HSIC-PA, HSIC-NPA and FSIC-PA, achieve the same instantaneous data rate.
However, the three schemes differ from each other if the served secondary user belongs to type II. Thus, it is necessary to compare the three schemes for the case when the served secondary user belongs to type II.
For ease of notation, denote the served secondary user by U_m^*. When U_m^* belongs to type II, denote its achievable rate by R_II, R̂_II and R̅_II for HSIC-PA, FSIC-PA and HSIC-NPA schemes, respectively.
From the description in Section. II, it can be found that R_II≥R̅_II always holds. Thus, it is sufficient to characterize the probability of the event that R_II>R̅_II, for the comparison between HSIC-PA and HSIC-NPA, as presented in the following theorem.
Under the condition that the served secondary user U_m^* is type II, the probability of the event that R_II>R̅_II, termed P^better, is given by:
P^better= P( R̅_2<R_2, U_m^* is type II) /P(U_m^* is type II) ,
where
P( R̅_2<R_2, U_m^* is type II)
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1) -ũ(α_0,i/P_sα_0 ) ] ,
and
P(U_m^* is type II)=1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0 ),
where
ũ(x,y)=(1/(y+1))e^-x(y+1),
and ṽ(x,y,z)=(√(π)/(2√(y)))e^z^2/(4y)[1-erf (√(y)(x+z/(2y)))], where erf(·) denotes the Gaussian error function,
which is given by:
erf(x)=2/√(π)∫_0^xe^-t^2dt.
Please refer to Appendix E.
Differently, for the comparison between FSIC-PA and HSIC-NPA, R̂_II can be either larger or less than R̅_II. Thus, it is necessary to characterize both the probabilities of the events that R̂_II>R̅_II and R̂_II<R̅_II. By noting that
P̂( R̅_2<R̂_2, U_m^* is type II)=P(|h_M|^2>τ(g)/P_s,|h_M|^2< |h_k |^2,|g|^2>α_0),
which is the same as the expression of P( R̅_2< R_2, U_m^* is type II) in Theorem 3, the following theorem can be straightforwardly obtained.
Under the condition that the served secondary user U_m^* is type II, the probability of the event that R̂_II>R̅_II, termed P̂^better, is given by:
P̂^better= P( R̅_2<R̂_2, U_m^* is type II) /P(U_m^* is type II) ,
which is the same as the expression of P^better in Theorem 3. The probability of the event that R̂_II<R̅_II, termed P̂^worse, is given by:
P̂^worse=1-P̂^better.
§ NUMERICAL RESULTS
In this section, simulation results are provided to verify the accuracy of the developed analysis and demonstrate the performance of the proposed HSIC-PA and FSIC-PA schemes. Comparisons with the benchmark HSIC-NPA scheme developed in <cit.> are also provided.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed HSIC-PA scheme. Note that, the curves for analytical results are based on Theorem 1, and those for Approximations I and II are based on Corollaries 1 and 2, respectively.
As shown in the figure, analytical results perfectly match simulations, which verifies the accuracy of the analytical results provided in Theorem 1.
Besides, Fig. <ref> also shows that both the curves for Approximation I and Approximation II
match the simulation results at high SNR, which verifies the accuracy of the approximations in Corollaries 1 and 2.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed FSIC-PA scheme. Note that the curves for the analytical results are based on Theorem 2, and the curves for the approximation are based on Corollary 3. From the figure, it can be observed that the curves for the analysis perfectly match the simulations, which verifies the accuracy of the results provided in Theorem 2.
Besides, it is shown that the curves for the approximate results are accurate at high SNR, which demonstrates the accuracy of the results in Corollary 3.
A significant difference between HSIC-PA and FSIC-PA schemes can be clearly observed from Figs. <ref> and <ref>. Fig. <ref> shows that as M increases, the outage probability achieved by HSIC-PA scheme significantly decreases. In contrast, Fig. <ref> shows that,
for M>1, the outage probabilities for different values of M coincide. Thus, keeping increasing M cannot improve the outage performance of FSIC-PA in the high SNR regime. This observation is consistent with
the results in Section III that the diversity gain of HSIC-PA scheme is M, while that of FSIC-PA scheme is only 1.
Fig. <ref> shows the outage probabilities of the secondary users achieved by HSIC-NPA, HSIC-PA and FSIC-PA versus transmit SNR. As shown in the figure, for HSIC-NPA scheme, when R_0=1 BPCU, there is no outage probability error floor. However, when R_0=4 BPCU, the outage probability error floor exists. This observation is consistent with the conclusions in <cit.>,
i.e., the error floor can only be avoided when ϵ_0ϵ_s<1. By contrast, the proposed HSIC-PA and FSIC-PA schemes can avoid outage probability error floors, since the outage probabilities achieved by both schemes continuously decrease as the SNR increases. Fig. <ref> also shows that the HSIC-PA scheme performs the best among the three schemes for all cases. However, FSIC-PA achieves larger outage probabilities than HSIC-NPA when R_0=1 BPCU, while for the case where R_0=4 BPCU, FSIC-PA performs better at high SNRs.
Fig. <ref> shows the performance of the three schemes in terms of ergodic data rates achieved by the served secondary users.
From the figure, it is shown that HSIC-PA scheme always achieves the largest ergodic rate among the three schemes, which is consistent with the observation in Fig. <ref>.
Another interesting observation from Fig. <ref> is that the performance of FSIC-PA approaches that of HSIC-PA in terms of ergodic data rate at high SNR, while the performance of HSIC-NPA approaches that of HSIC-PA in terms of ergodic rate at low SNRs. This observation indicates that it is preferable to set the
secondary user at the first stage of SIC and use full transmit power at low SNRs, while it is preferable to set the secondary user at the second stage of SIC and use partial transmit power at high SNRs.
Fig. <ref> and Fig. <ref> demonstrate a more detailed comparison on achievable rates of the proposed two schemes with the benchmark HSIC-NPA scheme.
Fig. <ref> shows the probability that the served secondary user belongs to type II. It is shown that as SNR increases, the probabilities converge to a constant.
Fig. <ref> shows that the curves for P̂^better and P^better coincide, which is consistent with results shown in Theorems 3 and 4.
Fig. <ref> also shows that P̂^better and P^better increase with SNR and approach 1 in the high SNR regime, while P̂^worse decreases with SNR and approaches 1 in the low SNR regime.
The above observation can help to understand the phenomenon shown in Fig. <ref> and
Fig. <ref>, and leads to the following suggestions for practical systems.
On the one hand, at high SNR, it is preferable to apply power adaptation and put the secondary user at the second stage of SIC. On the other hand, at low SNR, it is better to decode the secondary user at the first stage of SIC.
Fig. <ref> shows the power consumption of HSIC-PA and FSIC-PA schemes. Note that the HSIC-NPA scheme always chooses full power to transmit for the secondary users, i.e., β is always set to be 1, while β can be set to be less than 1 in the proposed HSIC-PA and FSIC-PA schemes. Thus, HSIC-NPA is more energy consuming than the proposed two schemes in this paper. From the figure, it can be observed that at low SNRs, β approaches 1 in HSIC-PA and β approaches zero in FSIC-PA. Besides, as SNR increases, β decreases in HSIC-PA, while that in FSIC-PA increases. More interestingly, the values of β for both schemes approach a constant in the high SNR regime. However, at high SNR, the value of β in HSIC-PA scheme is a bit higher than that in FSIC-PA.
§ CONCLUSIONS
In this paper, two novel cognitive radio inspired uplink NOMA schemes were proposed to improve transmission robustness, namely HSIC-PA scheme and FSIC-PA scheme. Rigorous analysis has been developed to characterize the performance of the proposed schemes. It has been shown that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors in the high SNR regime without any constraints on users' target rates, which was thought impossible for uplink NOMA transmission. It has also been shown that the diversity gain achieved by the HSIC-PA scheme is M, which is the maximal multi-user diversity gain for the considered scenario. While the diversity gain achieved by the FSIC-PA scheme is 1. Numerical results have been presented to verify the accuracy of the developed analysis and demonstrate the superior performance of the proposed schemes. It has been shown by this paper that the combination of HSIC and PA is important to improve the transmission robustness of uplink NOMA.
§ PROOF FOR LEMMA 1
The outage events can be divided into two groups, one is |g|^2>α_0 and the other is |g|^2<α_0.
Thus, the outage probability P_out shown in (<ref>) can be written as:
P_out= ∑_m=1^M-1 P ( E_m,max{R^k_I, 1 ≤ k≤ m}<R_s,
max{ R^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{ R^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{ R^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{ R^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains, P_out can be further written as:
P_out= ∑_m=1^M-1 P ( E_m,R^m_I< R_s,R^M_II < R_s, | g | ^2>α _0 )_Q_m
+P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M +P( E_0,R^M_II <R_s, | g | ^2>α _0) _Q_0
+ P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Note that when |g|^2>α _0, R^M_II can be determined according to the value of |h_M |^2 as follows:
R^M_II
=
R^M_II,2 , |h_M |^2< |h |^2
R^M_II,1 , |h_M |^2> |h |^2,
where |h |^2=( | g |^2α_0^-1-1 )(P_0 | g |^2+1)/P_s. Thus, Q_m can be rewritten as follows:
Q_m= ∑_m=1^M-1P (E_m,R^m_I<R_s,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m, R^m_I<R_s,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By noting that regardless of the value of |h_M |^2, R^m_I is always smaller than R^M_II,1 and R^M_II,2,
Q_m can be further simplified as:
Q_m= ∑_m=1^M-1P (E_m,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By applying the results shown in (<ref>), Q_0 can be rewritten as follows:
Q_0= P( E_0, |h_M |^2< |h |^2, R^M_II,2 <R_s, | g | ^2>α _0)_Q_0,1
+ P( E_0, |h_M |^2> |h |^2, R^M_II,1 <R_s, | g | ^2>α _0)_Q_0,2.
Note that, Q_m,1 and Q_0,1 can be combined, so as Q_m,2 and Q_0,2, thus, the sum of Q_m and Q_0 can be simplified as follows:
Q_m+Q_0= Q_m,1+Q_0,1_Q̃_1 +Q_m,2+Q_0,2_Q̃_2
= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2.
Therefore, P_out=Q_m+Q_0+Q_M+Q_M+1=Q̃_1+Q̃_2+Q_M+Q_M+1 and the proof is complete.
§ PROOF FOR THEOREM 1
According to Lemma 1, the evaluation of P_out can be divided into four parts: Q̃_1,
Q̃_2,
Q_M
and Q_M+1.
§.§ Evaluation of Q̃_1
Note that Q̃_1 can be expressed as follows:
Q̃_1= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S̃_1},
where ε{ * } denotes the mathematical expectation.
Note that the users are ordered according to their channel gains, and hence the probability density function (pdf) of |h_M|^2 can be expressed as:
f_|h_M|^2(x)= M!/(M-1)!(1-e^-x)^M-1e^-x
= M(1-e^-x)^M-1e^-x.
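This is the standard density of the largest of M i.i.d. unit-mean exponential variables, whose cdf is (1-e^{-x})^M; a quick empirical check (illustrative Python, not part of the paper) is:

import numpy as np

M, n = 4, 500_000
rng = np.random.default_rng(1)
samples = rng.exponential(size=(n, M)).max(axis=1)     # draws of |h_M|^2
x = np.array([0.5, 1.0, 2.0, 4.0])
print((samples[:, None] <= x).mean(axis=0))            # empirical cdf
print((1.0 - np.exp(-x)) ** M)                         # (1 - e^{-x})^M, should agree closely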
By applying (<ref>), S̃_1 can be evaluated as follows:
S̃_1= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sf_|h_M|^2(x)dx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by noting that |g|^2 is exponentially distributed, Q̃_1 can be calculated as:
Q̃_1= ∫_α_0^α_1S̃_1e^-|g|^2d|g|^2
= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
For notational simplicity,
define u(α_0,α_1,c) as:
u(α_0,α_1,c)△=∫_α_0^α_1e^-(c+1)xdx= 1/c+1[e^-α_0(c+1)-e^-α_1(c+1)],
and v(α_1,α_0,A,B) as:
v(α_1,α_0,A,B)△= ∫_α_0^α_1e^-(Ax^2+Bx)dx
= √(π)e^B^2/4A/2√(A)[erf(√(A)(α_1+B/2A))-erf(√(A)(α_0+B/2A))].
By taking (<ref>) and (<ref>) into (<ref>), Q̃_1 can be expressed as:
Q̃_1=∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)- u(α_0,α_1,i/P_sα_0)].
§.§ Evaluation of Q̃_2
Note that Q_2 can be expressed as follows:
Q̃_2= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2>(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,.
.log(1+P_s|h_M|^2/P_0|g|^2+1)<R_s,|g|^2>α_0)
(a)= α_0<|g|^2<α_1ε{P((|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<|h_M|^2<α_s(P_0|g|^2+1) )_S̃_2},
where step (a) is obtained by noting the hidden condition (|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<α_s(P_0|g|^2+1), which yields |g|^2<α_1.
By using the pdf of |h_M|^2 shown in (<ref>), S̃_2 can be evaluated as follows:
S̃_2= ∫_(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s^α_s(P_0|g|^2+1)M(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-iα_s(P_0|g|^2+1)-e^-i/P_s(|g|^2α_0^-1-1)(P_0|g|^2+1)).
Further, by averaging with respect to |g|^2, Q̃_2 can be expressed as:
Q̃_2= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-iα_s(P_0x+1)-e^-i/P_s(xα_0^-1-1)(P_0x+1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q̃_2 can be further expressed as follows:
Q̃_2=∑_i=0^M([ M; i ])(-1)^i [e^-iα_su (α_0,α_1,iα_sP_0)- e^i/P_sv (α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1- P_0)+1)].
§.§ Evaluation of Q_M
Note that Q_M can be rewritten as follows:
Q_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,log(1+P_s|h_M|^2)<R_s,|g|^2>α_0)
= P(|h_M|^2<min{|g|^2α_0^-1-1/P_s,α_s},|g|^2>α_0)
= α_0<|g|^2<α_1ε{ P(|h_M|^2<α_0^-1|g|^2-1/P_s) _S_M,1} +|g|^2>α_1ε{P(|h_M|^2<α_s)_ S_M,2},
where the last step is obtained by dividing the events into two cases, i.e., |g|^2<α_1 and |g|^2>α_1.
By using the pdf of |h_M|^2 shown in (<ref>), the expression for S_M,1 and S_M,2 can be obtained as:
S_M,1=(1-e^-α_0^-1|g|^2-1/P_s)^M and S_M,2=(1-e^-α_s)^M.
By averaging with respect to |g|^2, Q_M can be further evaluated as follows:
Q_M= ∫_α_0^α_1(1-e^-α_0^-1x-1/P_s)^Me^-xdx+∫_α_1^∞(1-e^-α_s)^Me^-xdx
= ∫_α_0^α_1∑_i=0^M([ M; i ])(-1)^ie^i/P_se^-α_0^-1/P_sixe^-xdx+(1-e^-α_s)^Me^-α_1
= ∑_i=0^M([ M; i ])(-1)^ie^i/P_su (α_0,α_1,i/α_0P_s)+(1-e^-α_s)^Me^-α_1,
where the last step is obtained by applying the results shown in (<ref>).
§.§ Evaluation of Q_M+1
Note that Q_M+1 can be expressed as follows:
Q_M+1=P(R^M_II<R_s ,|g|^2<α_0).
Note that, when |g|^2<α_0, τ(g)=0, yielding R^M_II=log(1+ P_s|h_M|^2/P_0|g|^2+1).
Thus, Q_M+1 can be further expressed as:
Q_M+1 = P(|g|^2<α_0,log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)
= |g|^2<α_0ε{ P( log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)_S_M+1}.
By using the pdf |h_M|^2 shown in (<ref>), S_M+1 can be evaluated as follows:
S_M+1= ∫_0^α_s(P_0|g|^2+1)f_|h_M|^2(x)dx
= (1-e^-α_s(P_0|g|^2+1))^M.
Further, by averaging with respect to |g|^2, Q_M+1 can be expressed as:
Q_M+1 = ∫_0^α_0(1-e^-α_s(P_0x+1))^Me^-xdx
= ∑_i=0^M([ M; i ])(-1)^ie^-α_si1-e^-(α_sP_0i+1)α_0/α_sP_0i+1,
where the last step is obtained by applying the binomial expansion.
Therefore, the expressions for Q̃_1,
Q̃_2,
Q_M,
and Q_M+1 are obtained, and the proof is complete.
§ PROOF FOR COROLLARY 1
In order to facilitate a high SNR approximation, P_out in (<ref>) can be
rewritten as follows:
P_out=∑_i=0^M([ M; i ])(-1)^i∫_0^α_1e^-xe^-iα_s(P_0x+1)dx+(1-e^-α_s)^Me^-α_1.
By using the fact that
∑_i=0^M([ M; i ])(-1)^iA^i=(1-A)^M,
P_out can be further approximated as follows:
P_out= ∫_0^α_1e^-x(1-e^-α_s(P_0x+1))^Mdx+(1-e^-α_s)^Me^-α_1
≈ ∫_0^α_1(1-x)α_s^M(P_0x+1)^Mdx+α_s^M(1-α_1),
where the last step is obtained by applying Taylor seizes 1-e^-x≈ x when x→ 0.
A more simplified form of P_out can be obtained by applying the binomial expansion:
P_out≈ α_s^M∫_0^α_1(1-x)∑_i=0^M([ M; i ])P_0^ix^idx+α_s^M(1-α_1)
= α_s^M∫_0^α_1∑_i=0^M([ M; i ])P_0^i(x^i-x^i+1)dx+α_s^M(1-α_1).
By taking integrations in (<ref>), P_out can be further calculated as follows:
P_out≈ α_s^M∑_i=0^M([ M; i ])P_0^i(α_1^i+1/i+1-α_1^i+2/i+2)+α_s^M-α_s^Mα_1
(a)= ϵ_s^M/P_s^MP_0∑_i=0^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2
+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0
(b)= ϵ_s^M/P_s^MP_0∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2+ϵ_s^M/P_s^M,
where step (b) is obtained by the fact that the first term shown in step (a) is ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0 when i=0, which is exactly the same as the the last term in step (a), and thus can be eliminated .
§ PROOF FOR THEOREM 2
Divide the outage events into two cases, one being |g|^2>α_0 and the other being |g|^2<α_0. Therefore, the outage probability P̂_out shown in (<ref>) can be rewritten as:
P̂_out= ∑_m=1^M-1 P ( E_m,max{R̂^k_I, 1 ≤ k≤ m}<R_s,
max{R̂^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{R̂^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{R̂^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{R̂^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains,
P̂^out can be further written as:
P̂_out= ∑_m=1^M-1P(E_m,R̂_I^m<R_s,R̂_II^M<R_s,|g|^2>α_0)_F_m
+P(E_M,R̂_I^M<R_s,|g|^2>α_0 )_F_M
+ P(E_0,R̂_II^M<R_s,|g|^2>α_0 )_F_0
+P(R̂_II^M<R_s,|g|^2<α_0 )_F_M+1.
By noting that R̂^m_I<R̂^M_II for the first term, F_m and F_0 can be combined as follows:
F_m+F_0=P(|h_M|^2>τ(g)/P_s,R̂^M_II<R_s,
|g|^2>α_0)_F̃.
Therefore, P̂_out can be further simplified as:
P̂_out= P(|h_M|^2<τ(g)/P_s,R̂^M_I<R_s
,|g|^2>α_0)_F_M
+P(R̂_II^M<R_s,|g|^2<α_0)_F_M+1
+P(|h_M|^2>τ(g)/P_s,
R̂^M_II<R_s,|g|^2>α_0)_F̃.
Thus the remaining task is to derive the expressions for F_M, F_M+1 and F̃, respectively.
§.§ Evaluation of F_M
Note that F_M can be expressed as follows:
F_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,
log(1+P_s|h_M|^2)<R_s,|g|^2>α_0 ),
which is the same as the expression for Q_M in (<ref>). Thus, F_M can be expressed as:
F_M=∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,
i/α_0P_s)+(1-e^-α_s)^Me^-α_1.
§.§ Evaluation of F_M+1
Note that F_M+1 can be expressed as follows:
F_M+1= P(log(1+τ(g))<R_s,|g|^2<α_0)
(a)= P(|g|^2<α_0)
= 1-e^-α_0,
where step (a) is obtained by the fact that τ(g)=0 when |g|^2<α_0.
§.§ Evaluation of F̃
Note that F̃ can be expressed as follows:
F̃= P(|h_M|^2>τ(g)/P_s,
log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|h_M|^2>|g|^2α_0^-1-1/P_s)_T̃}.
By using the pdf of |h_M|^2 shown in (<ref>), T̃ can be evaluated as follows:
T̃= ∫_|g|^2α_0^-1-1/P_s^∞
M(1-e^-x)^M-1e^-xdx
= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By taking expectation with respect to |g|^2, F̃ can be further evaluated as follows:
F̃= ∫_α_0^α_1( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-e^-α_1-∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,i/P_sα_0).
Until now, the expressions for F_M, F_M+1 and F̃ are obtained, and the proof is complete.
§ PROOF FOR THEOREM 3
Note that the numerator in (<ref>) can be rewritten as:
P( R̅_2<R_2, U_m^* is type II)_Q_n
= |g|^2>α_0ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2
<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S_n}.
By using the pdf of |h_M|^2 shown in (<ref>), S_n can be evaluated as follows:
S_n= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sM(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by averaging with respect to |g|^2, Q_n can be expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^∞( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q_n can be further expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(∞,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)-u(α_0,∞,i/P_sα_0)]
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)],
where the last step is obtained by noting that the term i=0 can be omitted since ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)
=0 for i=0.
The denominator in (<ref>) can be calculated as follows:
P(U_m^* is type II)_Q_d
= P(|h_M|^2>τ(g)/P_s,|g|^2>α_0)_Q_d1
+P( |g|^2<α_0)_Q_d2
= |g|^2>α_0ε{P(
|h_M|^2>|g|^2α_0^-1-1/P_s)_S_d1}
+Q_d2.
Note that S_d1 is the same as the expression for T̃ in (<ref>). Thus, S_d1 can be obtained by using the
results in (<ref>) as follows:
S_d1= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By averaging with respect to |g|^2, Q_d1 can be further evaluated as follows:
Q_d1= ∫_α_0^∞( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Q_d2 can be expressed as follows:
Q_d2=∫_0^α_0e^-xdx=1-e^-α_0.
Thus, Q_d is the sum of Q_d1 and Q_d2, which can be expressed as follows:
Q_d= 1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Therefore, the expressions for P( R̅_2<R_2, U_m^* is type II) and P(U_m^* is type II) are obtained, and the proof is complete.
|
http://arxiv.org/abs/2307.04715v1 | 20230710172504 | CVPR MultiEarth 2023 Deforestation Estimation Challenge:SpaceVision4Amazon | [
"Sunita Arya",
"S Manthira Moorthi",
"Debajyoti Dhar"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
CVPR MultiEarth 2023 Deforestation Estimation Challenge: SpaceVision4Amazon
Sunita AryaCorresponding author. , S Manthira Moorthi, Debajyoti Dhar
Signal and Image Processing Area
Space Applications Centre, Ahmedabad
[email protected],{smmoorthi,deb}@sac.isro.gov.in
=============================================================================================================================================================================================================
In this paper, we present a deforestation estimation method based on an attention guided UNet architecture using Electro-Optical (EO) and Synthetic Aperture Radar (SAR) satellite imagery. Landsat-8 data have been used for the optical imagery and Sentinel-1 data for the SAR imagery to train and validate the proposed model. Due to the unavailability of temporally and spatially collocated data, an individual model has been trained for each sensor. During training, the Landsat-8 model achieved a training and validation pixel accuracy of 93.45% and the Sentinel-1 model achieved a pixel accuracy of 83.87%. During the test set evaluation, the model achieved a pixel accuracy of 84.70% with an F1-Score of 0.79 and an IoU of 0.69.
§ INTRODUCTION
Estimation of the deforestation level of the Amazon Rainforest is very important for monitoring forest change and for understanding climate change. Degradation of the Amazon forest area can affect the global climate, as the Amazon Rainforest represents 40% <cit.> of the tropical forest on Earth. Observation of the Earth's surface in any weather and lighting conditions using Electro-Optical (EO) and Synthetic Aperture Radar (SAR) sensors can be useful to monitor and analyse change in the Amazon Rainforest.
Deep learning based architectures have shown impressive results in the deforestation estimation task using optical as well as SAR imagery. Various models based on the standard UNet and attention guided UNet architectures have been explored to segment forest and deforested areas. A UNet based model was explored in <cit.> for semantic segmentation of Amazon forest cover, and the authors of <cit.> also used this model for other forest ecosystems using Sentinel-2 imagery. Landsat-8 satellite imagery has also been used to detect deforestation in the Amazon <cit.>. <cit.> used both Sentinel-2 and Landsat-8 images for deforestation detection in the Amazon forest using a fully convolutional network. For SAR data, Sentinel-1 bands have also been used for semantic segmentation <cit.>.
Given the advantages of attention mechanisms for various computer vision tasks, the authors of <cit.> implemented and analyzed the performance of attention UNet for semantic segmentation using Sentinel-2 satellite imagery.
For this challenge, we have used Sentinel-1 and Landsat-8 imagery provided as a part of MultiEarth 2023 Deforestation Estimation Challenge <cit.> for an attention guided UNet model. The proposed model has been tested on the given test set for evaluation.
The remainder of this work is organized as follows. Section <ref> describes the dataset used and Section <ref> introduces the adopted methodology, including the data pre-processing and post-processing steps. The results are presented in Section <ref> and the conclusion is given in Section <ref>.
§ DATASET
Table <ref> describes the given training dataset for the challenge <cit.>. The spatial coverage of the Amazon rainforest for this challenge is [-3.33 to -4.39] in latitude and [-54.48 to -55.2] in longitude. The challenge dataset consists of Sentinel-1, Sentinel-2, Landsat-5, Landsat-8 and the labels for training, as shown in Table <ref>. As the deforestation labels contain data from 2016 to 2021, we took the satellite data which are common to this range. Landsat-5 data has been discarded for this work because its acquisition years do not overlap with this range. Out of the Landsat-8 and Sentinel-2 optical images, we took only the Landsat-8 data, and Sentinel-1 data were used to handle cloudy images. Sentinel-1 has two polarization bands, VV and VH, with a spatial resolution of 10 m, and Landsat-8 has seven surface reflectance bands and one surface temperature band at a spatial resolution of 30 m.
After careful analysis of the provided challenge data, we finally considered Landsat-8 for optical imagery and Sentinel-1 for SAR imagery at a unified resolution. Table <ref> describes the dataset taken for training the model.
§ METHODOLOGY
This section describes the data pre-processing steps, the model architecture with training details, and finally the post-processing steps taken to refine the generated results.
§.§ Data Pre-Processing
Landsat-8: In the electromagnetic spectrum, every spectral channel has its own significance for a specific target. So, rather than considering only the major vegetation-influencing bands, we took all the spectral bands for this work. We considered both the surface reflectance and the surface temperature bands. For training, all the bands have been normalized between 0 and 1. Additionally, we resampled the data to match the spatial resolution of the given label images. After the data pre-processing step, the total number of training samples taken is 6313.
Sentinel-1: For Sentinel-1, we considered both of the given bands, i.e. the VV and VH bands, to train and validate the model. In addition, we derived a third band by taking the ratio of the two given channels. We used 1% band-wise contrast stretching as suggested in <cit.> for normalizing the SAR data and then converted it to [0-1] for training. The number of samples taken for the final training of the model is 18014.
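For illustration, a minimal NumPy sketch of this SAR pre-processing is given below; the percentile values, the VH/VV order of the ratio band, and the function names are our assumptions rather than details taken from the original pipeline.

```python
import numpy as np

def stretch_band(band, low_pct=1, high_pct=99):
    """Band-wise percentile contrast stretch (the 1% stretch mentioned above),
    followed by rescaling to [0, 1]."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    band = np.clip(band, lo, hi)
    return (band - lo) / (hi - lo + 1e-12)

def prepare_s1_sample(vv, vh):
    """Stack VV, VH and their ratio into one 3-channel training sample (H x W x 3)."""
    ratio = vh / (vv + 1e-6)  # third band as a channel ratio; the VH/VV order is assumed
    return np.stack([stretch_band(b) for b in (vv, vh, ratio)], axis=-1)
```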
§.§ Network and Training Details
To segment the forest and deforested areas in this work, we took pairwise training images with their respective labels as described in Section 2. Our model is inspired by the work of <cit.>. Figure <ref> presents the model parameters, including the attention gate. For training, we used a batch size of 16 and the Adam optimizer <cit.> with a learning rate of 0.0001 for 50 epochs. To handle the class imbalance, we combined the Binary Cross-Entropy loss <cit.> with the Dice loss <cit.>, giving equal weight to both losses. Additionally, data augmentation such as rotation and horizontal and vertical flips has been used for the Landsat-8 model only.
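As an illustration of the equally weighted loss described above, a minimal Keras/TensorFlow sketch might look as follows; the smoothing constant and the exact reduction are assumptions, not values reported by the authors.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    y_true = tf.cast(tf.reshape(y_true, [-1]), y_pred.dtype)
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def bce_dice_loss(y_true, y_pred):
    # Equal weight to both terms, as described above.
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(
        tf.cast(y_true, y_pred.dtype), y_pred))
    return bce + dice_loss(y_true, y_pred)

# model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=bce_dice_loss)
```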
§.§ Output Refinement
* Indices Based: For the Landsat-8 generated results, we used the Normalized Difference Vegetation Index (NDVI) to discard the masks of cloudy images. We took 0.1 as the threshold value to generate the NDVI-based cloud mask. Test images for which this cloud mask covers more than 1% are discarded as cloudy images and are not considered in the next step of deforestation mask generation.
* Morphological Operator: After discarding the cloudy images for Landsat-8, we averaged all the remaining masks and took 0.4 as the threshold for deforestation mask generation. Finally, we used the morphological operator erosion followed by dilation to refine the final masks, as explored in <cit.>. For Sentinel-1, we averaged the generated masks for a given test query and applied the morphological operators in the same order. A short code sketch of these two refinement steps is given after this list.
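A rough sketch of the two refinement steps above, assuming NumPy and scikit-image; the direction of the NDVI comparison, the band inputs, and the structuring-element size are our assumptions.

```python
import numpy as np
from skimage.morphology import binary_erosion, binary_dilation, disk

def is_cloudy(red, nir, ndvi_thresh=0.1, max_fraction=0.01):
    """Flag an image as cloudy when the NDVI-based cloud mask exceeds 1% of pixels.
    Pixels below the NDVI threshold are assumed to be cloud-covered."""
    ndvi = (nir - red) / (nir + red + 1e-6)
    return (ndvi < ndvi_thresh).mean() > max_fraction

def refine_masks(prob_masks, avg_thresh=0.4, footprint=disk(2)):
    """Average the remaining per-date masks, binarize at 0.4, then apply
    erosion followed by dilation."""
    mask = np.mean(prob_masks, axis=0) > avg_thresh
    return binary_dilation(binary_erosion(mask, footprint), footprint)
```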
§ RESULTS
For evaluation, additional data from both sensors has been given with the test queries. The test set consists of 1000 queries from August 2016 to August 2021 for the latitude range from -3.87 to -4.39 and the longitude range from -54.8 to -54.88. There are cases where a few locations and dates were not available in the Landsat-8 data but were available in the Sentinel-1 data. Table <ref> presents the results of the test queries on the evaluation server. Figure <ref> presents the final results using the Landsat-8 model and Figure <ref> shows the results of the Sentinel-1 model.
SpaceVision4Amazon: For submission version SpaceVision4Amazon we directly averaged out all results of available images for the given test queries using Landsat-8 and Sentinel-1 imagery.
SpaceVision4Amazon_v2: For submission version SpaceVision4Amazon_v2 we followed the output refinement steps as discussed in Section 3 for both Landsat-8 and Sentinel-1 dataset.
Model is implemented in Python v3.7.4 using Keras <cit.> v2.4.3 with Tensorflow <cit.> v2.3.1 backend, and hardware configuration with CPU Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz with 1TB of memory and Quadro V100 GPU with 32GB memory.
§ CONCLUSION
In this paper, we present a deforestation estimation method for the Amazon rainforest under the MultiEarth 2023 sub-challenge. Our method is based on an attention-guided UNet architecture applied to the provided optical and Synthetic Aperture Radar (SAR) satellite imagery. Landsat-8 data is used for the optical imagery and Sentinel-1 data for the SAR imagery to train and validate the proposed model. The model achieved a pixel accuracy of 84.70% with an F1-Score of 0.79 and an IoU of 0.69 on the test set used for evaluation.
[abadi2016tensorflow] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[bragagnolo2021amazon] Ld Bragagnolo, Roberto Valmir da Silva, and José Mario Vicensi Grzybowski. Amazon forest cover change mapping based on semantic segmentation by U-Nets. Ecological Informatics, 62:101279, 2021.
[cha2023multiearth] Miriam Cha, Gregory Angelides, Mark Hamilton, Andy Soszynski, Brandon Swenson, Nathaniel Maidel, Phillip Isola, Taylor Perron, and Bill Freeman. MultiEarth 2023 – Multimodal Learning for Earth and Environment Workshop and Challenge, 2023.
[chollet2015keras] François Chollet et al. Keras. https://keras.io, 2015.
[de2020change] Pablo Pozzobon De Bem, Osmar Abílio de Carvalho Junior, Renato Fontes Guimarães, and Roberto Arnaldo Trancoso Gomes. Change detection of deforestation in the Brazilian Amazon using Landsat data and convolutional neural networks. Remote Sensing, 12(6):901, 2020.
[hubbell2008many] Stephen P. Hubbell, Fangliang He, Richard Condit, Luís Borda-de-Água, James Kellner, and Hans Ter Steege. How many tree species are there in the Amazon and how many of them will go extinct? Proceedings of the National Academy of Sciences, 105(supplement_1):11498–11504, 2008.
[isaienkov2020deep] Kostiantyn Isaienkov, Mykhailo Yushchuk, Vladyslav Khramtsov, and Oleg Seliverstov. Deep learning for regular change detection in Ukrainian forest ecosystem with Sentinel-2. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14:364–376, 2020.
[john2022attention] David John and Ce Zhang. An attention-based U-Net for detecting deforestation within satellite sensor imagery. International Journal of Applied Earth Observation and Geoinformation, 107:102685, 2022.
[kingma2014adam] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[lee2022multiearth] Dongoo Lee and Yeonju Choi. MultiEarth 2022 Deforestation Challenge – ForestGump. arXiv preprint arXiv:2206.10831, 2022.
[oktay2018attention] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y. Hammerla, Bernhard Kainz, et al. Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999, 2018.
[vscepanovic2021wide] Sanja Šćepanović, Oleg Antropov, Pekka Laurila, Yrjo Rauste, Vladimir Ignatenko, and Jaan Praks. Wide-area land cover mapping with Sentinel-1 imagery using deep learning semantic segmentation models. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14:10357–10374, 2021.
[sudre2017generalised] Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M. Jorge Cardoso. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2017 and ML-CDS 2017, held in conjunction with MICCAI 2017), pages 240–248. Springer, 2017.
[torres2021deforestation] Daliana Lobo Torres, Javier Noa Turnes, Pedro Juan Soto Vega, Raul Queiroz Feitosa, Daniel E. Silva, Jose Marcato Junior, and Claudio Almeida. Deforestation detection with fully convolutional networks in the Amazon forest from Landsat-8 and Sentinel-2 images. Remote Sensing, 13(24):5084, 2021.
[yi2004automated] Ma Yi-de, Liu Qing, and Qian Zhi-Bai. Automated image segmentation using improved PCNN model based on cross-entropy. In Proceedings of the 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, pages 743–746. IEEE, 2004.
|
http://arxiv.org/abs/2307.05412v2 | 20230711162734 | Does conditional entropy squeezing indicate normalized entropic uncertainty relation steering? | [
"A-S. F. Obada",
"M. Y. Abd-Rabbou",
"Saeed Haddadi"
] | quant-ph | [
"quant-ph"
] |
Mathematics Department, Faculty of Science, Al-Azher University, Nassr City 11884, Cairo
Faculty of Physics, Semnan University, P.O. Box 35195-363, Semnan, Iran
Saeed's Quantum Information Group, P.O. Box 19395-0560, Tehran, Iran
A novel approach is introduced to assess one-way Normalized Entropic Uncertainty Relation (NEUR)-steering in a two-qubit system by utilizing an average of conditional entropy squeezing. The mathematical expressions of conditional entropy squeezing and NEUR-steering are derived and presented. To gain a better understanding of the relationship between the two measures, a comparative analysis is conducted on a set of two-qubit states. Our results reveal that the two measures exhibit complete similarity when applied to a maximally entangled state, while they display comparable behavior with minor deviations for partially entangled states. Additionally, it is observed that the two measures are proportionally affected by some quantum processes such as acceleration, noisy channels, and swapping. As a result, the average of conditional entropy squeezing proves to be an effective indicator of NEUR-steering.
Does conditional entropy squeezing indicate normalized entropic uncertainty relation steering?
Saeed Haddadi
August 12, 2023
==============================================================================================
§ INTRODUCTION
In 1935, Schrödinger tried to interpret the Einstein-Podolsky-Rosen (EPR) paradox by establishing correlations between two quantum systems that were too strong to be explained classically; this phenomenon is commonly referred to as EPR-steering <cit.>. The concept of steering involves one remote user using a pair of entangled states to influence or steer their partner's state through local measurements. As per the hierarchy of quantum correlations, steerable states are a strict superset of the states that can demonstrate Bell nonlocality and a strict subset of the entangled states <cit.>.
Quantum steering has recently received significant attention in the field of quantum information research and has been the subject of both experimental and theoretical investigations <cit.>. For example, experimental quantum steering has been studied through the implementation of generalized entropic criteria and dimension-bounded steering inequalities, where two or three measurement setups are used on each side <cit.>. A steering game based on the all-versus-nothing criterion has been experimentally demonstrated <cit.>. The asymmetric property of EPR steering is relevant for experiments and potential applications in quantum information, such as one-sided device-independent quantum key distribution <cit.>, quantum teleportation <cit.>, and optimal prepare-and-measure scenarios <cit.>. Moreover, the possibility of quantum steering has been experimentally demonstrated for different quantum systems, including photon polarizations in a linear-optical setup <cit.>, Bohmian trajectories <cit.>, a family of natural two-qubit states <cit.>, and non-Gaussian states <cit.>.
In the theoretical framework, researchers have developed asymmetric criteria of steering correlation for a pair of arbitrary continuous variables <cit.>. Additionally, Walborn et al. <cit.> have utilized the entropic uncertainty relations to express the steering inequality for arbitrary discrete observables. The violation of the Clauser-Horne-Shimony-Holt inequality has also been employed to discuss the degree of steerability <cit.>. Furthermore, some investigations have been conducted on the violation of steering inequality and its degree for various quantum systems, including a three-mode optomechanical system <cit.>, Heisenberg chain models <cit.>, two-level or three-level detectors <cit.>, and qubit-qubit as well as qubit-qutrit states <cit.>.
On the other hand, the essential concepts of squeezed spin systems were introduced by Kitagawa and Ueda in 1993 <cit.>. The entropy squeezing for a bipartite system has been obtained for three discrete observables in an N-dimensional Hilbert space by employing the discrete Shannon entropy <cit.>. The violation of two quadratures of the entropy squeezing inequality represents a magnificent indicator of entanglement <cit.>. Meanwhile, the entropy squeezing of multiple qubits inside a cavity has been a hot research topic, including two qubits interacting with a two-mode cavity field <cit.>, a qutrit state in a cavity field <cit.>, and the effect of a classical field and a non-linear term on the qubit-field interaction <cit.>.
Our motivation is to show how entropy squeezing can be employed as an indicator of the degree of steerability. Overall, since the discrete conditional Shannon entropy is used as a measure of steerability, can the two quadratures of conditional entropy squeezing likewise express the steering?
This paper is organized as follows. In Section <ref>, we
present the steerability based on conditional entropy squeezing. In Section <ref>, the main results of our paper are discussed in detail. Finally, the conclusion is given in Section <ref>.
§ STEERABILITY BASED ON CONDITIONAL ENTROPY SQUEEZING
In order to gain a better understanding of the relationship between entropy squeezing and normalized entropic uncertainty relation (NEUR)-steering for bipartite subsystems A and B, we can take advantage of the definition provided by Walborn et al. <cit.>. The mathematical framework of NEUR-steering inequality concerning an even N-dimensional Hilbert space along with the local hidden state for a pair of arbitrary discrete observables is expressed as <cit.>
∑_i=1^N+1 H(R^B_i|R^A_i) ≥ (N/2) ln(N/2) + (1+N/2) ln(1+N/2),
where {R^A_i} and {R^B_i} are the eigenvectors of the discrete observables R̂^A and R̂^B, respectively, and N is the total number of different eigenvectors. H(R^B|R^A) ≥∑_λ P(λ) H_Q(R^B|λ) denotes the corresponding local hidden state constraint for discrete observables, which is defined by the conditional information entropy H_Q(R^B|Q) of the probability distribution P_Q(R^B|λ) with the hidden variable λ. In two-dimensional Hilbert space N =2, by employing the Pauli spin operators {σ_x,σ_y,σ_z} as measurements, the NEUR-steering from A to B is realized only if the following condition is violated <cit.>
H(σ_x^B|σ_x^A)+H(σ_y^B|σ_y^A)+H(σ_z^B|σ_z^A)≥ 2ln2,
where
H(σ_i^B|σ_i^A) = H(ρ̂_AB)_i - H(ρ̂_A)_i
= - ∑_n,m=1^2 P_i^n,m ln P_i^n,m + ∑_l=1^2 P_i^l ln P_i^l.
Here, P_i^n,m=⟨ϕ^i_n, ϕ^i_m|ρ_AB|ϕ^i_n, ϕ^i_m ⟩ and P_i^n= ⟨ϕ^i_n |ρ_A|ϕ^i_n ⟩ are the probability distribution of an arbitrary two-qubit state ρ_AB and reduced single qubit state ρ_A, respectively, where |ϕ^i_j ⟩ represent the two possible eigenvectors (j=1,2) of σ_i, and ρ_A=Tr_B[ρ_AB].
In this paper, we assume that the density state ρ̂_AB with real components in the standard basis {|00⟩, |01⟩, |10⟩, |11⟩} can be written as
ρ̂_AB=[ ρ_11 0 0 ρ_14; 0 ρ_22 ρ_23 0; 0 ρ_23 ρ_33 0; ρ_14 0 0 ρ_44 ].
Note that the operator ρ̂_AB satisfies the common conditions ρ̂_AB≥ 0 and Tr[ρ̂_AB]=1.
By applying state (<ref>) in Eq. (<ref>) and violating NEUR-steering inequality, one can obtain
ℐ_AB = ∑_i=1^3 ∑_j=1^4 [(1 + x_ij)/2] ln(1+x_ij)
- ∑_k=1^2 (1 + a_k) ln(1+a_k) ≤ 2 ln 2,
where the summations in the first term are related to the three Pauli spin operators and probability distribution of the two-qubit ρ_AB, respectively, and x_ij are obtained by
x_11=x_12=-x_13=-x_14=2 (ρ_14+ρ_23),
x_21=x_22=-x_23=-x_24=2(ρ_23-ρ_14),
x_31=3ρ_11-(ρ_22+ρ_33+ρ_44),
x_32=3ρ_22-(ρ_11+ρ_33+ρ_44),
x_33=3ρ_33-(ρ_11+ρ_22+ρ_44),
x_34=3ρ_44-(ρ_11+ρ_22+ρ_33).
Likewise, the summation in the second term is related to the probability distribution of the reduced state ρ_A, and a_k is given by
a_k=(-1)^k (ρ_11+ρ_22-ρ_33-ρ_44).
However, the one-way NEUR-steering is quantified based on observable A measurements as follows <cit.>
S^A⟶ B = max{0, (ℐ_AB - 2ln2)/(ℐ_max - 2ln2)},
where ℐ_max= 6 ln2 when the system is prepared in Bell states.
On the other hand, if we define the function Ξ(σ_i^B|σ_i^A) = e^H(σ_i^B|σ_i^A), then the inequality (<ref>) can be reformulated as
Ξ(σ_x^B|σ_x^A) Ξ(σ_y^B|σ_y^A) ≥4/Ξ(σ_z^B|σ_z^A),
where
Ξ(σ_i^B|σ_i^A)= ∑_n,m=1^2(P_i^n,m)^P_i^n,m×∑_l=1^2(P_i^l)^P_i^l.
According to Ref. <cit.>, the fluctuations in component Ξ(σ_i^B|σ_i^A) (i=x,y) are
said to be “squeezed in entropy" if the squeezing factor E(σ_i^B|σ_i^A) satisfies the condition
E(σ^B_i|σ^A_i)=max{ 0,2/√(Ξ(σ_z^B|σ_z^A))-e^Ξ(σ_i^B|σ_i^A)},
with i=x,y. From the previous condition, we can obtain the upper and lower bounds of the NEUR-steering degree. If the state is a maximally entangled state, then the upper and lower bounds in condition (<ref>) are identical. Hence, bidirectional steerability and the average of the conditional entropy squeezing quadratures have similar behavior. In partially entangled states, the NEUR steerability is restricted between the upper and lower bounds. Therefore, the average of the two components of conditional entropy squeezing E(σ_i^B|σ_i^A) represents an indicator of quantum steerability. In any case, we can define the quantum steerability based on the average of entropy squeezing as
𝒵^A⟶ B = max{0, [E(σ_x^B|σ_x^A) + E(σ_y^B|σ_y^A)]/2},
where E(σ_x^B|σ_x^A) and E(σ_y^B|σ_y^A) are defined in Eq. (<ref>). Hereinafter, we provide a comparative study between the average of conditional entropy squeezing and one-way quantum steering for some different quantum systems.
§ SOME RESULTS AND DISCUSSION
Here, we study the relationship between one-way steering and the average conditional entropy squeezing for a class of two-qubit state, which reads
ρ̂_AB=ν |ϕ⟩⟨ϕ |+ (1-ν) |ψ⟩⟨ψ |,
where |ϕ⟩=(|01⟩+|10⟩)/√(2), |ψ⟩=(|00⟩+|11⟩)/√(2), and ν is the setting state parameter. The state (<ref>) is maximally entangled for ν=1 and ν=0, while partially entangled for ν∈(0,0.5) ∪ (0.5,1).
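To make the comparison concrete, the following minimal NumPy sketch (our illustration, not code from this work) builds the state above and evaluates the left-hand side of the entropic steering inequality, H(σ_x^B|σ_x^A)+H(σ_y^B|σ_y^A)+H(σ_z^B|σ_z^A); a value below 2 ln 2 corresponds to a violation of that inequality.

```python
import numpy as np

def rho_nu(nu):
    """nu*|phi><phi| + (1-nu)*|psi><psi| in the basis |00>, |01>, |10>, |11>."""
    phi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    return nu * np.outer(phi, phi.conj()) + (1 - nu) * np.outer(psi, psi.conj())

def steering_sum(rho):
    """Sum of H(sigma_i^B | sigma_i^A) over i = x, y, z (natural logarithm)."""
    paulis = [np.array([[0, 1], [1, 0]]),
              np.array([[0, -1j], [1j, 0]]),
              np.array([[1, 0], [0, -1]])]
    rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # reduced state of qubit A
    total = 0.0
    for s in paulis:
        _, v = np.linalg.eigh(s)          # eigenvectors of the chosen Pauli operator
        h_ab = h_a = 0.0
        for n in range(2):
            pa = np.real(v[:, n].conj() @ rho_a @ v[:, n])
            h_a -= pa * np.log(pa) if pa > 1e-12 else 0.0
            for m in range(2):
                w = np.kron(v[:, n], v[:, m])
                p = np.real(w.conj() @ rho @ w)
                h_ab -= p * np.log(p) if p > 1e-12 else 0.0
        total += h_ab - h_a                # conditional entropy H(AB) - H(A)
    return total

for nu in (0.0, 0.25, 0.5, 1.0):
    print(nu, steering_sum(rho_nu(nu)), "bound:", 2 * np.log(2))
```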
In Figure <ref>, we have performed a comparative analysis of the NEUR-steering and entropy squeezing for a composite system consisting of a two-qubit state represented by Eq. (<ref>). Through our analysis, we have observed interesting relationships between these two measures. Figure <ref>(a) clearly demonstrates that the extent of NEUR-steering is bounded by the two quadratures of entropy squeezing. Specifically, when the NEUR-steering is maximized, we observe that the two quadratures of entropy squeezing become identical. Conversely, when the NEUR-steering is minimized, the two quadratures of entropy squeezing are separated. This finding indicates that the values of E(σ_x^B|σ_x^A) and E(σ_y^B|σ_y^A) can be considered as upper and lower bounds of NEUR-steering, respectively. Furthermore, we have investigated the average of the two quadratures of entropy squeezing and its relation with the NEUR-steering. Figure <ref>(b) illustrates that at maximally entanglement ν=0 and ν=1, the average of entropy squeezing (𝒵^A⟶ B) aligns closely with the NEUR-steering S^A⟶ B. However, at a lower degree of steering, corresponding to a partially entangled state, we observe deviations between the behaviors of NEUR-steering and the average of entropy squeezing. Nevertheless, even in these cases, 𝒵^A⟶ B remains a reliable indicator for expressing the presence of steerability in the system.
§.§ Some Quantum Processes
In this subsection, we will compare in detail the effect of some quantum processes on the functions 𝒵^A⟶ B and S^A⟶ B, namely the acceleration process, decoherence via noisy channels (amplitude damping and dephasing), and the swapping process.
§.§.§ Acceleration Process
Let two qubits be simultaneously or separately accelerated in Rindler space. The computational basis {0,1} in this space for regions I and II can be defined as <cit.>
|0_k⟩= cos r_k |0_k⟩_I |0_k⟩_II+ sin r_k |1_k⟩_I |1_k⟩_II,
|1_k⟩= |1_k⟩_I |0_k⟩_II,
where r_k∈ [0,π/4] is the acceleration parameter of the qubit k=A,B. By substituting in the state (<ref>) and tracing over the degrees of the region II, one can get the accelerated state as
ρ̂_AB^acc = 𝒜_11 |00⟩⟨ 00| + 𝒜_22|01⟩⟨ 01|+ 𝒜_33|10⟩⟨ 10|
+ 𝒜_44 |11⟩⟨ 11| +( 𝒜_14 |00⟩⟨ 11|+𝒜_23 |10⟩⟨ 01|+h.c.),
where
𝒜_11= ν/2cos^2 r_a cos^2 r_b , 𝒜_22= cos^2 r_a (ν/2sin^2r_b +1-ν/2),
𝒜_33=cos^2 r_b (ν/2sin^2r_a + 1-ν/2),
𝒜_44= sin^2 r_a (ν/2sin^2r_b + 1-ν/2)+ 1-ν/2sin^2r_b +ν/2,
𝒜_14=ν/2cos r_a cos r_b, 𝒜_23=1-ν/2cos r_a cos r_b.
In Figure <ref>, we present an investigation into the impact of the accelerated process on a two-qubit state. Specifically, we aim to explore the relationship between the NEUR-steering and average entropy-squeezing measures under this process. We assumed that the two-qubit state is maximally entangled with ν=1 as a fixed parameter. In Figure <ref>(a), we observe that when only one qubit is accelerated with r_a=r and r_b=0, the NEUR-steering is maximized at lower values of the acceleration parameter. As the acceleration parameter increases, we see a decrease in the degree of steering. Interestingly, we note that the NEUR-steering and the entropy squeezing are identical across different values of the acceleration parameter r. This indicates a consistent relationship between these measures regardless of the acceleration applied to the system. On the other hand, when both qubits are accelerated simultaneously with r_a= r_b=r, Figure <ref>(b) reveals an intriguing trend. As the acceleration parameter increases, the rate of decrease in steering becomes more pronounced. This finding suggests that accelerating two qubits simultaneously increases the suppression of steering. However, it is important to note that despite this trend, the two measures, namely NEUR-steering and entropy squeezing, exhibit little variations with respect to the acceleration parameter.
§.§.§ Noisy Channel Process
To examine the two functions (<ref>) and (<ref>) under noisy channel models, we can express the temporal density operator in terms of Kraus operators as
ρ̂_AB(t)= ∑_i,j K_i^A(t) K_j^B(t) ρ̂_AB(0) (K_i^A (t) K_j^B (t))^†,
here ρ̂_AB(0) is defined in Eq. (<ref>), while K_i^k(t) and K_j^k(t) with k=A, B are the time-dependent Kraus operators for different noise channels. For example, we use the Kraus operators of amplitude-damping noise, which are defined by <cit.>
K_1 (t)=|0⟩⟨ 0|+√(1-P(t)) |1⟩⟨ 1|, K_2 (t)=√(P(t)) |0⟩⟨ 1|,
where P(t) = e^{-g t} [cos(λ t/2) + (g/λ) sin(λ t/2)]^2 with λ = √(g(2γ - g)). Herein, g is a decay rate that depends on the reservoir correlation time, and γ is the coupling strength related to the qubit relaxation time.
Likewise, the Kraus operators for purely dephasing noise channels can be defined as <cit.>
K_1 (t)=|0⟩⟨ 0|+P(t) |1⟩⟨ 1|, K_2(t)=√(1-P^2(t)) |1⟩⟨ 1|,
where
P(t) = exp{-(γ/2)(t + g^{-1}[exp(-g t) - 1])}.
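For illustration, a small NumPy helper (ours, not taken from this work) that applies the product Kraus map above to a two-qubit state is sketched below; the decay function P(t) is left as an input, to be evaluated from the amplitude-damping or dephasing expressions given above.

```python
import numpy as np

def amplitude_damping_kraus(P):
    """Kraus operators given above, for a supplied value of the decay function P(t)."""
    K1 = np.array([[1, 0], [0, np.sqrt(1 - P)]], dtype=complex)  # |0><0| + sqrt(1-P)|1><1|
    K2 = np.array([[0, np.sqrt(P)], [0, 0]], dtype=complex)      # sqrt(P)|0><1|
    return [K1, K2]

def apply_product_channel(rho_ab, kraus):
    """rho_AB(t) = sum_{i,j} (K_i x K_j) rho_AB(0) (K_i x K_j)^dagger."""
    out = np.zeros_like(rho_ab, dtype=complex)
    for Ki in kraus:
        for Kj in kraus:
            K = np.kron(Ki, Kj)
            out += K @ rho_ab @ K.conj().T
    return out
```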
Figure <ref> presents a comprehensive examination of the impact of amplitude-damping noise on the degree of steerability, with the NEUR-steering and entropy squeezing serving as the quantifying measurements. In Figure <ref>(a), we observe that, by selecting a small value for the damping rate (g=0.01), the NEUR-steering oscillates between its maximum and lower bounds. This oscillatory behaviour is consistent with the properties of steerability under the influence of amplitude-damping noise. It is noteworthy that the two measures, NEUR-steering and entropy squeezing, coincide perfectly as functions of the scaled time parameter. This convergence of the two measurements further reinforces their equivalence in capturing the system's dynamics. Moving to Figure <ref>(b), we examine the effect of an increased decay rate (specifically g=0.1 and a maximally entangled state with ν=1). As the scaled time γ t increases, we gradually observe the NEUR-steering oscillating with increasing upper bounds. On the other hand, in the case of a partially entangled state with a parameter value of ν=0.1, Figure <ref>(c) shows a clear separation between the two measures over time. Furthermore, as the scaled time increases, the upper bounds of the steering degree exhibit a decrease in value. Remarkably, the maximum bounds of steering continue to expand as time progresses, indicating a growing influence of the amplitude-damping noise on the steerability of the system. Just like in the previous case, the NEUR-steering and average entropy-squeezing measurements remain parallel in this scenario, underscoring their identical nature. This consistent agreement can be attributed to the initial state being maximally entangled.
Figure <ref> provides a detailed analysis of the effect of dephasing noise on the degree of steering, utilizing the NEUR-steering and entropy squeezing as the measurement criteria. Moreover, the comparison between these two measures under the influence of the dephasing noise channel is examined. In the first, we consider an initial state that is maximally entangled with ν=1. Initially, we note that while the entropy squeezing and NEUR-steering measures display similarities in their general behavior, they are not entirely identical as the scaled time progresses. At the onset, both measures exhibit their maximum bounds. However, as time increases, we observe a notable distinction between the maximum bounds of entropy squeezing and NEUR-steering. Interestingly, the maximum bounds of entropy squeezing surpass those of NEUR-steering. This disparity suggests that the influence of dephasing noise imposes a more pronounced impact on the entropy squeezing measure compared to NEUR-steering. In the context of partial entanglement ν=0.1 with g=0.1, our observations indicate a rapid decay of steering and a decrease in the upper bounds of steering during the initial stages of the interaction. This suggests that partial entanglement has a significant impact on the dynamics of steering. Furthermore, it is important to note that the time evolution of the steered system is influenced by the degree of entanglement. As the entanglement decreases, the decay of steering becomes more pronounced, implying a decreasing ability to remotely control and influence the entangled particles.
Notably, despite the difference in the maximum bounds between the two measures, they still portray parallel trends. As the scaled time continues to grow, we witness a decrease in the degrees of steering for both measures. This observation implies that the detrimental effects of dephasing noise manifest as a reduction in the correlation between the entangled subsystems. Besides, it is evident that increasing the damping rate of the dephasing channel exacerbates this decreasing effect on the steering degrees.
§.§.§ Swapping Process
Let us consider two different sources, S_12 and S_34, which generate the pairs of two-qubit states ρ_12 and ρ_34, respectively. Qubits 1 and 4 are far apart, while qubits 3 and 2 remain close. The swapping process aims to measure the amount of quantum NEUR-steering between qubits 1 and 4 by performing a joint Bell measurement on qubits 2 and 3. The post-measurement state ρ_14 is calculated by <cit.>
ρ_14 = Tr_23[ M_i ρ_1234 M_i^† / Tr[M_i ρ_1234 M_i^†] ],
where ρ_1234=ρ_AB⊗ρ_AB, such that the two sources generate the state ρ_AB defined in Eq. (<ref>). Moreover, M_i = I_2 ⊗|Φ_i⟩⟨Φ_i|⊗ I_2 and |Φ_i⟩ stand for the usual four Bell states.
Finally, Figure <ref> focuses on analyzing the effects of the swapping process on the behavior of NEUR-steering and conditional entropy squeezing concerning the state parameter ν. The post-measurement state ρ_14 at |Φ_i⟩=|ψ⟩ is studied to understand how the swapping process influences the behavior of the quantum system. The findings indicate that the steerability degree significantly decreases when the two-qubit state is initially in a partially entangled state. Furthermore, when compared to Figure <ref>, it is noticeable that the unsteerable region expands during the swapping process. However, when the two-qubit state is maximally entangled, the two measures are equal. Hence, conditional entropy squeezing serves as an excellent indicator of the level of NEUR-steering present under this process.
§ CONCLUSION
We have proposed a new method for quantifying one-way quantum NEUR-steering in an arbitrary two-qubit system using the average of conditional entropy squeezing. We derived the explicit analytical expressions of NEUR-steering and conditional entropy squeezing. A comparative analysis of the two measures was conducted on a free maximally mixed two-qubit state as well as on two-qubit states restricted by acceleration, noisy channels, or swapping processes.
For the free maximally mixed two-qubit state, our results highlight the interconnectedness between NEUR-steering and entropy squeezing. We demonstrated that the quadratures of entropy squeezing serve as bounds for NEUR-steering, which represented the upper and lower limits. Additionally, we established the average of entropy squeezing as a valuable indicator of steerability, particularly at a maximally entangled state.
The effects of the accelerated process on a two-qubit state have been demonstrated. We observed that the behavior of NEUR-steering and entropy squeezing depends on whether one or both qubits are accelerated. In the former case, the degree of NEUR-steering decreases with increasing acceleration, while in the latter case, the rate of decrease in steering is amplified. Nonetheless, despite these variations, the measures of NEUR-steering and entropy squeezing remain stable and invariant with respect to the acceleration parameter.
Under the amplitude-damping noise, our results showed that under a specific small damping rate, the NEUR-steering experiences oscillatory behavior. By allowing the decay rate to increase, the NEUR-steering experiences oscillatory behavior with expanding boundaries. Notably, the NEUR-steering and entropy-squeezing measurements remained indistinguishable throughout these processes, further validating their correlation. On the other hand, the effect of dephasing noise on the degree of steering has been evaluated. While the two measures differ in terms of their maximum bounds, they exhibit similar overall trends. Specifically, as the scaled time increases, both measures demonstrate a decrease in the degree of steering. This decreasing effect is amplified by enhancing the damping rate of the dephasing channel.
Finally, the swapping process significantly diminishes the steerability degree when the initial state of the two-qubit is partially entangled. For maximally entangled states, the two measures coincide.
In conclusion, it is evident that entropy squeezing serves as a measure of steering primarily in the context of a maximally entangled system. However, when considering partially entangled states or situations where entanglement is constrained by external factors, entropy squeezing remains a highly reliable indicator of steering. By assessing the degree of entropy squeezing in such scenarios, valuable insights can be gained regarding the presence and extent of quantum steering. Entropy squeezing is a tool for indicating and understanding quantum steering, even in cases where maximal entanglement may not be achieved.
Data availability
All data generated during this study are included in this paper.
Competing interests
The authors declare no competing interests.
ORCID iDs
A-S. F. Obada: https://orcid.org/0000-0001-5862-7365
M. Y. Abd-Rabbou: https://orcid.org/0000-0003-3197-4724
Saeed Haddadi: https://orcid.org/0000-0002-1596-0763
|
http://arxiv.org/abs/2307.04308v1 | 20230710022738 | CT-BERT: Learning Better Tabular Representations Through Cross-Table Pre-training | [
"Chao Ye",
"Guoshan Lu",
"Haobo Wang",
"Liyao Li",
"Sai Wu",
"Gang Chen",
"Junbo Zhao"
] | cs.LG | [
"cs.LG"
] |
Chao Ye, Guoshan Lu, Haobo Wang, Liyao Li, Sai Wu, Gang Chen, and Junbo Zhao
Zhejiang University, Hangzhou, China
Chao Ye and Guoshan Lu are co-first authors of the article.
Junbo Zhao is the corresponding author.
Tabular data — also known as structured data — is one of the most common data forms in existence, thanks to the stable development and scaled deployment of database systems in the last few decades.
At present, however, despite the breakthroughs brought by large pre-trained models in other domains such as ChatGPT <cit.> or SAM <cit.>, how to extract common knowledge across tables at a scale that may eventually lead to a generalizable representation for tabular data remains an entirely open question.
Indeed, there have been a few works around this topic. Most (if not all) of them are limited to the scope of a single table or a fixed form of schema.
In this work, we first identify the crucial research challenges behind tabular data pre-training, particularly towards the cross-table scenario.
We position the contribution of this work in two folds:
(i)-we collect and curate nearly 2k high-quality tabular datasets, each of which is guaranteed to possess clear semantics, clean labels, and other necessary meta information.
(ii)-we propose a novel framework that allows cross-table pre-training, dubbed as CT-BERT.
Noticeably, in light of pioneering the scaled cross-table training, CT-BERT is fully compatible with both supervised and self-supervised schemes, where the specific instantiation of CT-BERT is very much dependent on the downstream tasks.
We further propose and implement contrastive-learning-based and masked table modeling (MTM) objectives in CT-BERT, inspired by the computer vision and natural language processing communities but carefully tailored to tables.
The extensive empirical results on 15 datasets demonstrate CT-BERT's state-of-the-art performance, where both its supervised and self-supervised setups significantly outperform the prior approaches.
CT-BERT: Learning Better Tabular Representations Through Cross-Table Pre-training
Junbo Zhao
=================================================================================
§ INTRODUCTION
With the extensive application of database management systems and the vigorous development of the internet industry, tabular data — also known as structured data — truly abounds.
Indeed, the accumulation of scaled tables stored in databases has brought significant value to the industry or individuals, through tech stacks like data mining or the development of OLAP databases.
Notably, over the past decade, various large-scale collections of tabular datasets have been proposed <cit.>, and they were used for tasks like tableQA <cit.>, table interpretation <cit.>, table expansion <cit.>, etc.
Despite that, how to enable a large-scale, distributed, and cross-table pre-training very much remains untapped.
This, unfortunately, is in stark contrast to the other communities such as computer vision and natural language processing.
In both of these domains, techniques like pre-training followed by fine-tuning have long established a dominant methodological status, such as BERT <cit.>, CLip <cit.>, ChatGPT <cit.>, GPT4 <cit.>, SAM <cit.>, etc.
In hindsight, the successes of these large-scale models lie in their ability to extract common semantic structure from the seen/unseen input and condense this knowledge/common sense into a vectorial representation.
The emergence of this capacity stems from a scaled pre-training process on a gigantic amount of text or vision data across the domains.
Recently, a few works have attempted to learn contextualized representation from tabular data through neural networks, or more specifically the transformer model <cit.>, such as TabTransformer <cit.>, VIME <cit.>, TabNet <cit.>, SAINT <cit.>, etc.
While the concept is truly promising, these approaches are limited to single-table training with a fixed form of a schema.
Most closely related to our work are TransTab <cit.> and PTab <cit.>. Both approaches note the importance of cross-table learning. However, they process the table into a form close to text data, for instance by converting a sample row in the table into a sentence, without much adaptation specific to structured data.
This weakened coupling between the data values in the tables and the schema/meta/column names has arguably obstructed these approaches from scaling and absorbing common knowledge.
§.§ Challenges
In what follows, we identify the core challenges that remained in scaled and cross-table pre-training.
C1. How can pre-training models accept inputs from heterogeneous tables as there are significant differences between different tables? For instance, the feature value "apple" appears under the column names "fruit" and "My_Laptop" in two different tables, conveying completely different meanings.
C2.
Unlike image or text data where the pixels and word/character tokens are ordered, arbitrarily permuting any tables' rows or columns does not change its semantic meaning. We dub this property as permutation invariance uniquely to tabular data.
Thus, how can the pre-training mechanism be compatible with this nature of tabular data?
C3. Still driven by the difference against common vision or text data, how to design a suitable cross-table pre-training task objective because there is no obvious context or spatial structure in the tabular data?
§.§ Key Idea behind CT-BERT
Ideally, in order for the pre-trained model to properly acquire the common knowledge from multiple heterogeneous tables, the model should be encouraged to learn the innate similarities or dissimilarities among the tabular data distribution.
However, as we posited in the challenges, directly utilizing the original form of the data (or its corresponding embedding) may cause unentangleable confusion.
Let us give a concrete example; given two tables with similar schemas, the two entries "10 meters" and "10 kg" look superficially identical in their numeric part.
Despite that, directly converting them to embeddings may inherently confuse the model and adversely impact convergence or training difficulty.
Abstracting away from this example, to cope with this challenge, the pre-training methodology must be capable of reconciling different metric systems or different notations.
It is true that we could write heuristic rules to tackle this problem, but the number of rules required would surely be insurmountable.
In a nutshell, provided with any table, it can always be decomposed to feature that denotes the data curated column-wise, together with token drawn from the schema information such as the column name or other textual meta-information.
Instead of following a normal embedding-based encoding approach, we proactively combine the feature with the token information, by casting them into a form of textual representation.
For example, we convert the feature value "apple" combined with the schema information to "fruit is apple", which we dub as a phrase, as the atomic representation of the cell value in tabular data. This allows to distinguish the same feature value "apple" in column "fruit" and "My_Laptop" respectively.
We postulate that this manifests several merits. In particular, the challenge C1 can be both theoretically and empirically solved, and this formation is rid of many heuristic rules, except the template for sticking the feature and token together.
§.§ Our Methodology:
Essentially, CT-BERT bases itself upon the phrase as the atomic representation of each unit in any provided table, in combination of the feature (column name/meta) with the feature value.
We then process each atomic element similarly to word embedding in NLP.
Towards the challenge C2 of the permutation invariance property, we propose a novel transformer <cit.> encoding architecture that is adapted to cater to this nature of tabular data.
As a pioneer work to enable cross-table pre-training, we devise CT-BERT to be compatible with both supervised and self-supervised scenarios.
In that regard, we categorize the available tables drawn from databases according to whether there exists a clear label column or not, and direct them to the supervised and self-supervised learning paradigms, respectively.
On one hand, for supervised learning, we propose a supervised contrastive learning-based objective to better cluster samples with the same label while allowing different labels to be uniformly distributed over the hypersphere of tabular representations.
On the other hand, in order to take advantage of large-scale unsupervised data, we propose another pre-training method, masked table modeling (which we call MTM), adapted from the MLM objective in the NLP community <cit.>, which masks some features in the atomic phrases and then lets the model predict their recovery (for challenge C3).
We believe that if the model can predict the masked features from the retained features, then the model can learn the underlying relationship between the features.
Similar to CV or NLP, this relationship serves as the foundation to manifest the shareable knowledge that is migrated across tables.
§.§ Contributions
To wrap up, the contribution of this article is deemed two-fold.
For one thing, we collect and curate nearly 2,000 tabular datasets, each of which is guaranteed to possess clear semantics, clean labels, and other necessary meta information. We treat these high-quality and labeled datasets as the foundation to launch large-scale pre-training.
For another, we propose a generic and efficient cross-table pre-training solution, dubbed the Cross-Table pre-Training framework (CT-BERT).
CT-BERT promotes several novel developments including but not limited to: (i)-a novel paradigm compatible with both supervised and self-supervised objectives, (ii)-contrastive learning and masked table modeling (MTM) objectives for pre-training tables, and (iii)-a novel transformer architecture tailored to the permutation invariance nature of tabular data. Our pre-trained tabular model can support fine-tuning or few-shot learning for prediction on tables of any shape.
The remainder of the paper is organized as follows. In Section <ref>, we detail the table pre-training dataset we contribute. In Section <ref>, we present the proposed cross-table pre-training framework. In Section <ref>, we construct extensive experiments to evaluate the effectiveness and superiority of CT-BERT.
§ RELATED WORKS
We provide a brief background on representation learning, models for tabular data, and self-supervised pre-training methods.
§.§ Representation Learning
In recent years, with the development of pre-trained large language models ("LLMs") like GPT-3 <cit.>, the pre-training then fine-tuning and prompting paradigms have attracted attention. These methods typically train models with self-supervised representation learning on large-scale unstructured text and structured knowledge bases, and then fine-tune them or use them for various downstream tasks. In early work in natural language, including Word2Vec <cit.> and GloVe <cit.>, pre-training distributed representations of words provided significant improvements over randomly initialized parameters. However, these methods cannot model the use of words in different linguistic contexts. This dilemma has prompted the development of word representations that can learn context and contextual relationships <cit.>, and these pre-trained language models have achieved tremendous success and produced state-of-the-art results in various NLP tasks <cit.>. Similarly, self-supervised representation learning can also be used for tabular data, such as knowledge bases (KB) and databases, where entities and relationships in the KB can be embedded into continuous vector spaces and then utilized for various downstream tasks, such as KB completion <cit.>, relation extraction <cit.>, entity resolution <cit.>, etc. Although representation learning on text and KBs has been successful, few works have explored directly learning self-supervised representations on large-scale tabular data for tabular modeling. In this work, we introduce CT-BERT, which is the first method for self-supervised pre-training on large-scale tabular data, and the pre-trained model can be fine-tuned for various downstream tabular prediction tasks.
§.§ Models for Tabular Data
For a long time, traditional machine learning (ML) methods such as tree-based methods <cit.> have dominated this field and have been the preferred choice for most practitioners and data mining competitions (e.g., Kaggle) <cit.>. Recently, many researchers have proposed new neural network-based architectures <cit.> to model tabular data, attempting to challenge the dominance of tree-based models in this field. For example, TabNet <cit.> uses sequential attention to simulate the process of tree decision-making, TabTransformer <cit.> leverages transformers <cit.> to learn categorical features in tables, and AutoInt <cit.> utilizes an attention mechanism <cit.> to model the relationship between user and item features in click-through rate prediction tasks. However, only very few of these neural network-based works <cit.> attempt to investigate how to handle heterogeneous tabular inputs. As a consequence, the advantage that deep learning methods can be pre-trained on large-scale datasets cannot be fully exploited. As described in Sections <ref> and <ref>, our proposed CT-BERT not only accepts inputs from heterogeneous tables but also achieves permutation invariance over feature columns and leverages semantic knowledge from table headers and textual features. These advancements pave the way for CT-BERT to be pre-trained on large-scale datasets for cross-table prediction.
§.§ Self-supervised pre-training
One of the key reasons for the great success of deep learning in computer vision and natural language processing is that knowledge on a large amount of unlabeled datasets is learned through a self-supervised pre-training task and then generalized to downstream tasks through fine-tuning. For instance, masked language modeling (MLM) self-supervised pre-text task <cit.> is employed to learn contextual relationships in natural language processing. In computer vision, masked image modeling (MIM) <cit.> and contrastive learning <cit.> have been used to train powerful image representations. Some studies have attempted to extend the success of self-supervised learning to tabular data. These approaches can be roughly categorized into three types: 1) reconstruction of masked inputs <cit.>; 2) contrastive learning similar to that in SimCLR <cit.>; 3) a combination of the first two. For example, VIME <cit.> utilizes autoencoders to reconstruct corrupted table inputs. SCARF <cit.> randomly selects and replaces certain features with corresponding empirical marginal distributions to construct different views of the same sample. We argue that contrastive learning methods similar to that in SCARF <cit.> are not applicable to large-scale unlabeled cross-table pre-training tasks. Assuming the existence of a priori true labels for these unlabeled samples, such contrastive learning methods are highly likely to distance samples with the same labels, especially for tables with unrich sample labels. We are more inclined to believe that methods like masked language modeling (MLM) and masked image modeling (MIM) have greater potential. Therefore, in this work, for the first time, we formalize this series of approaches as masked table modeling (MTM) tasks. Additionally, we propose a novel masked table modeling method that combines semantic cues from table headers, which is more suitable for learning cross-table knowledge.
§ PRELIMINARY
§.§ Problem Formulation
Consider a given tabular dataset D=(𝐱_i,y_i)_i=1^n, where n refers to the number of samples. Here 𝐱_i={𝐱_i^cat, 𝐱_i^num}, where 𝐱_i^cat={x_i^1, x_i^2, … ,x_i^a} denotes all a categorical features, and 𝐱_i^num∈ℝ^b denotes all b numerical features. y_i∈{1, 2, … , T}, where T refers to the total number of label classes. All samples share the same table header descriptions (column names) 𝐂={c^1, c^2, …, c^a+b}. Our goal is to find the best possible prediction function f_θ to model the mapping between features and labels:
f_θ(𝐱_i; 𝐂) = y_i,
where θ refers to all trainable parameters of the function f.
§.§ Pre-training then Fine-tuning Paradigm in Tabular Domain
Given a generic architecture, often called a backbone such as Transformer, and projection head for mapping to specific tasks, the model is first pre-trained on a large dataset by self-supervised or unsupervised tasks (e.g., Contrastive Learning or MLM). The individual feature columns of the dataset {𝐱^cat, 𝐱^num} are converted to the input format 𝐱_i={𝐞^CLS, 𝐞^1, 𝐞^2 ...,𝐞^a+b}, which is sent to the Transformer model, and the model is further optimized using self-supervised or unsupervised objectives.
Then, in the downstream task-specific fine-tuning stage, the pre-trained backbone module is retained, the pre-trained projection head is discarded and the classification head for the new task is constructed, and the output 𝐞^CLS is used for multi-classification and optimized via cross-entropy loss <cit.>, etc.
§ A LARGE-SCALE SEMANTIC TABULAR DATABASE
In recent years, the field of cross-table pre-training has been relatively underexplored. One major challenge lies in the lack of a clean and high-quality tabular dataset. Just as the proposal of ImageNet <cit.> greatly propelled the advancement of computer vision representation learning and influenced various other domains, such as self-supervised learning and transfer learning, a similar catalyst is needed for the domain of tabular representation learning. Therefore, in this work we contribute a large-scale, high-quality semantic tabular database, built from various public tabular dataset websites through our strict data cleaning, to better train CT-BERT. These tabular datasets are collected from OpenML[https://www.openml.org/], UCI[https://archive.ics.uci.edu/datasets], CATALOG[https://catalog.data.gov/dataset], and Kaggle[https://www.kaggle.com/]. We have open-sourced the database[https://drive.google.com/file/d/1-2m1tyejUV5_bZduqZw1ZXS1BUSkhzVl/view?usp=drive_link] and hope to facilitate future research in the field of tabular representation learning.
With the advent of the Big Data era, the proliferation of database technologies has led to an explosion of tabular data on the Internet. These numerous tabular datasets can help more complex and powerful models and algorithms learn more general tabular representations. Representations are the standard signal linking many machine learning applications in this day and age, which means that more novel AI techniques can be made accessible to databases, such as allowing large language models (e.g., ChatGPT <cit.>) to understand databases. However, the quality of tables in Internet databases varies greatly, which can significantly impact the learning performance of models. For example, column name information in some tabular datasets is usually anonymized or unclear to avoid compromising privacy (e.g., columns named f1, f2, etc.), which loses semantic knowledge that is important for better understanding the tabular data. In addition, some tabular datasets also suffer from too many missing values, redundant feature columns, a lack of consistent formatting, etc. Therefore, in this work, we spent a lot of time filtering and cleaning the tabular data from Internet databases. Specifically, for each table, our data cleaning includes the following steps (a rough code sketch is given after the list):
(1) Check the degree of semantic meaning in the column names of each feature. For example, the column names {user_age, weight, monthly_income} carry rich semantic information, while column names such as {f1, f2, xyz} carry almost none. We compute a cumulative semantic relevance score for each table. In our cleaning protocol, we discard tables in which fewer than 50% of the features have column names with actual semantic information.
(2) Check the missing values. Datasets with more than 40% missing values are discarded, because too many missing values can easily lead to biased or inaccurate results. For the retained tables, we fill the missing values with the mode of the corresponding column.
(3) For categorical features in the tables, we aim to restore them to their original textual values. As for numerical features, we employ min-max normalization. This is done to mitigate the impact of inconsistent measurement units across different tables (e.g., kilograms vs. grams).
(4) For the table with labels and more than 100 features, feature filtering based on Random Forest importance <cit.> is performed, and the features with lower importance ranking are discarded.
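As referenced above, a rough pandas/scikit-learn sketch of steps (2)-(4) follows; the thresholds mirror the text, while the function names, the categorical encoding used for the importance ranking, and the omission of the semantic check of step (1) are our assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def clean_table(df: pd.DataFrame, label_col=None, max_missing=0.4, max_features=100):
    # Step (2): discard tables with too many missing cells, otherwise impute with the column mode.
    if df.isna().mean().mean() > max_missing:
        return None
    df = df.fillna(df.mode().iloc[0])

    # Step (3): min-max normalize numerical columns; categorical columns keep their text values.
    num_cols = [c for c in df.select_dtypes("number").columns if c != label_col]
    df[num_cols] = (df[num_cols] - df[num_cols].min()) / (
        df[num_cols].max() - df[num_cols].min() + 1e-12)

    # Step (4): for labeled tables with more than 100 features, keep the most
    # important ones by random-forest importance (classification label assumed).
    feat_cols = [c for c in df.columns if c != label_col]
    if label_col is not None and len(feat_cols) > max_features:
        X = df[feat_cols].apply(lambda s: s if s.dtype.kind in "iuf" else pd.factorize(s)[0])
        imp = pd.Series(RandomForestClassifier(n_estimators=100).fit(X, df[label_col])
                        .feature_importances_, index=feat_cols)
        df = df[list(imp.nlargest(max_features).index) + [label_col]]
    return df
```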
At present, the database contains about 17 GB of datasets, including approximately 1000 labeled datasets and 1000 unlabeled datasets. High-quality and semantically rich labeled datasets are usually more difficult to obtain, while unlabeled tabular datasets are easier to collect.
Therefore, in supervised pre-training, the theoretical upper bound of model performance is expected to be influenced, at the data level, by the quantity of available labeled tabular datasets. In contrast, self-supervised pre-training has the potential for a higher performance upper bound. As suggested in previous research <cit.>, contrastive learning is not well suited to tables whose labels are not rich in classes, because randomly paired samples are likely to share the same label and the chance of sampling true negative samples is low; this is why we propose a novel self-supervised masked table modeling (MTM) pre-training approach.
We believe that the contrastive learning-based pre-training approach will be more suitable for lightweight labeled scenarios, and the upper limit of the model will be determined by the number of its available tabular datasets. On the other hand, the self-supervised pre-training approach may require a large amount of data for model training and would also theoretically have more room for improvement.
§ METHODS
Previously proposed table pre-training methods <cit.> have all been pre-trained on an individual tabular task dataset. As a result, these pre-trained models exhibit notably poor generalization performance on downstream tasks involving other tables. In this section, we detail our proposed novel cross-table pre-training framework CT-BERT, which improves the generalization ability of pre-trained models by learning shareable knowledge across different tables. The overall architecture is provided in Figure <ref>.
As we have discussed before, cross-table pre-training needs to address three key challenges C1-C3.
For C1, in Section <ref> we propose to use a natural language-like approach to process the input of heterogeneous tables and enhance cross-table transfer learning by leveraging semantic knowledge in the schema. For C2, in Section <ref> we use an adapted transformer encoder <cit.> without positional encoding to model feature-level interactions. For C3, in Section <ref> we propose a novel masked table modeling (MTM) self-supervised pre-training task for large-scale unlabeled dataset scenarios and a contrastive learning-based supervised pre-training task for lightweight labeled dataset scenarios, respectively. At last, in Section <ref> we introduce fine-tuning the pre-trained model on downstream tasks.
§.§ Input Processor on Heterogeneous Tables
Feature columns among tables from diverse domains often exhibit significant variations. Therefore, previous works <cit.> often use a table-specific feature extractor, also called a "feature tokenizer" in their literature. This greatly hinders the model from performing cross-table learning. In CT-BERT, we observe that a table is essentially multimodal structured data, which contains both text (e.g., column names and discrete categorical values) and continuous values. Based on this observation, we use a natural language-like approach and combine the column name schema information to convert all features into a uniformly formatted feature phrase, e.g. [column name] is [value]. This design has two advantages. First, our model can accept inputs from heterogeneous tables without any table-specific operation, which is a necessary condition for enabling cross-table pre-training. Second, the knowledge learned from pre-training can be maximally transferred between similar features across different tables via the semantic information in the schema. For example, suppose gender features are recorded in two tables: in one table, the column name is "gender" and the value is "male", and in the other table, the column name is "sex" and the value is "man". Our model can encode the two feature phrases "gender is male" and "sex is man" into two embeddings that are close to each other (e.g., with high cosine similarity) based on their semantic information.
For each feature phrase, we convert it into a low-dimensional embedding and employ it to model the feature interaction in the subsequent phase. The right part of Figure <ref> illustrates the details about how we handle the categorical and numerical features separately to get the feature embedding.
Categorical Feature. For each sample x_i, each discrete category will have a corresponding text description (e.g., 1 for a man, 2 for a woman). We concatenate the column name and the original categorical description to form a feature phrase. Then, we use a pre-trained BERT <cit.> model to tokenize the phrase and generate the corresponding embedding for each token, where the pre-trained BERT model contains generic semantic knowledge. Further, we pool these token embeddings of the j-th feature into one feature embedding 𝐞_i^j∈𝐑^d. In our experiments, we tried average, self-attention <cit.> and other pooling methods. See Section <ref> for ablation experiments on these pooling strategies. Among them, the average pooling strategy performs the best. Therefore, without a special explanation, average pooling is used by default.
Numerical Feature. We know that, at least for now, pre-training token embeddings of continuous values is ineffective <cit.>. For numerical features, we process their column names in the same way as for categorical features to obtain a header embedding 𝐜^j∈𝐑^d. Then we multiply the normalized numerical value with the corresponding header embedding to get the feature embedding 𝐞_i^j=x_i^j×𝐜^j∈𝐑^d. Note that normalizing the numerical values is important here, because the same numerical feature may be recorded in different measurement units across tables (for example, height may be measured in meters in one table and in centimeters in another), and normalization helps the knowledge transfer better across tables.
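The following is a minimal sketch of this input processor, assuming a Hugging Face BERT backbone and average pooling; the function names (phrase_embedding, embed_row) and the omitted projection from BERT's hidden size to the model width d are illustrative assumptions, not the exact implementation.

```python
# Sketch of the input processor: one pooled embedding per column.
# Assumes a Hugging Face BERT backbone; all names here are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def phrase_embedding(phrase: str) -> torch.Tensor:
    """Average-pool the BERT token embeddings of a feature phrase into one vector."""
    tokens = tokenizer(phrase, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**tokens).last_hidden_state      # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)               # (768,)

def embed_row(row: dict, numeric_cols: set) -> torch.Tensor:
    """Turn one table row into a (num_features, d) matrix of feature embeddings."""
    embeddings = []
    for col, value in row.items():
        if col in numeric_cols:
            header = phrase_embedding(col)              # header embedding c^j
            embeddings.append(float(value) * header)    # e^j = x^j * c^j (value already min-max normalized)
        else:
            embeddings.append(phrase_embedding(f"{col} is {value}"))  # "[column name] is [value]"
    return torch.stack(embeddings)

# Example: embed_row({"gender": "male", "age": 0.42}, numeric_cols={"age"})
```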
We note that previous works <cit.> have also tried to combine column names to convert each sample into a sequence of text tokens and the subsequent learning is built on the token-level. We think that such token-level interactions are more suitable for extracting textual semantic information from tables (e.g., TableQA task <cit.>), but are not well-suited for our target column prediction task. For example, in a "work" column with the value "associate professor" in a table, this feature will first be converted into three token embeddings: [work], [associate] and [professor]. The subsequent model will learn the relationship between [associate] token and [professor] token in the same column, which is unreasonable. The experimental results in Section <ref> also validate this observation. However, in our design, one column corresponds to one feature embedding, and the subsequent model learns at the feature-level. This is a straightforward but effective enhancement. At the same time, for tables with a large number of features, such a design can optimize computational efficiency and memory space usage.
§.§ Feature Interaction
There is no inherent order relationship among different columns in a table. In other words, tables possess permutation invariance in the column dimension. Previous tabular modeling works <cit.> often overlooked this aspect by directly employing the transformer architecture <cit.>. Therefore, we have made certain modifications to the standard transformer encoder to adapt it to tabular data. Specifically, we 1) discard positional encoding and 2) use a shared-parameter fully connected feed-forward network at each transformer encoder block. Finally, our adapted transformer encoder block contains two sub-layers: a multi-head self-attention layer, and a shared-parameter fully connected feed-forward layer. In addition, a residual connection <cit.> is done for each sub-layer, followed by layer normalization <cit.>. The multi-headed self-attentive mechanism is the key to modeling feature interactions. It learns the relationship between features through Query, Key, and Value matrices. It is calculated as follows:
MultiHead(𝐇^l) = Concat(head_1, …, head_i, …, head_h)𝐖^O,
head_i = Attention(𝐇^l𝐖_i^Q,𝐇^l𝐖_i^K,𝐇^l𝐖_i^V),
Attention(𝐐,𝐊,𝐕)=Softmax(𝐐𝐊^T/√(d))𝐕,
where 𝐇^l∈ℝ^n × d is the input of the l-th layer; 𝐖^O∈ℝ^d × d is parameter matrix; 𝐖_i^Q, 𝐖_i^K and 𝐖_i^V ∈ℝ^d × d_head. d_head=d/h is the dimension of each attention head. Inspired by BERT <cit.>, we add a special classification token (𝐞^CLS∈ℝ^d) to the first position of the input sequence in each layer. This special token is used as the aggregate sample representation and is then served for the subsequent pre-training and downstream tasks. As described in Section <ref>, we can obtain the processed feature embeddings 𝐄={𝐞^1, 𝐞^2, …, 𝐞^a+b} from the raw tabular data . So we have the first layer of input 𝐇^0=[𝐞^CLS, 𝐄]. Finally, we can model the higher-order feature interactions step by step through the following calculation:
𝐇^l+1=LayerNorm(𝐇̂+linear(𝐇̂)),
𝐇̂=LayerNorm(𝐇^l+MultiHead(𝐇^l)).
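As a concrete illustration, a minimal PyTorch sketch of this adapted encoder (no positional encoding, a learnable [CLS] embedding prepended to the feature embeddings) might look as follows; the hyperparameter values and class names are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class TabularEncoderBlock(nn.Module):
    """Multi-head self-attention + position-wise feed-forward network, each followed
    by a residual connection and LayerNorm. No positional encoding is used, so the
    block is permutation-equivariant over the feature (column) dimension."""
    def __init__(self, d: int = 128, heads: int = 8, d_ff: int = 256, dropout: float = 0.3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(), nn.Linear(d_ff, d))
        self.norm_attn, self.norm_ffn = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, h: torch.Tensor) -> torch.Tensor:     # h: (batch, n_features + 1, d)
        h_hat = self.norm_attn(h + self.attn(h, h, h)[0])    # H_hat = LayerNorm(H + MultiHead(H))
        return self.norm_ffn(h_hat + self.ffn(h_hat))        # H^{l+1} = LayerNorm(H_hat + FFN(H_hat))

class TabularEncoder(nn.Module):
    """Stack of blocks with a learnable [CLS] embedding prepended to the feature embeddings."""
    def __init__(self, d: int = 128, layers: int = 4):
        super().__init__()
        self.cls = nn.Parameter(torch.randn(1, 1, d))
        self.blocks = nn.ModuleList([TabularEncoderBlock(d) for _ in range(layers)])

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (batch, n_features, d)
        h = torch.cat([self.cls.expand(feats.size(0), -1, -1), feats], dim=1)
        for block in self.blocks:
            h = block(h)
        return h                                              # h[:, 0] is the sample representation z^CLS
```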
§.§ Pre-training Across the Tables
Our work is the first to explore large-scale cross-table pre-training. Supervised and self-supervised pre-training are the two major approaches in the field of deep learning. As described in Section <ref>, we contributed a cross-table pre-training dataset, TabPretNet, which is collected from various domains and includes approximately 1000 labeled tables and 1000 unlabeled tables. In this work, based on the nature of TabPretNet, we explore both supervised and self-supervised cross-table pre-training approaches. Firstly, for the relatively more easily learnable labeled tabular datasets, we propose a randomly subsampled supervised contrastive learning approach adapted to the cross-table pre-training task. Secondly, for large-scale unlabeled tabular datasets, some studies have discussed the limitations of contrastive learning-based methods in unlabeled tabular scenarios <cit.>; so, in order to fully leverage the shareable knowledge within unlabeled tabular data, CT-BERT proposes a novel masked table modeling (MTM) self-supervised cross-table pre-training method.
Details of the two cross-table pre-training approaches are as follows:
Supervised contrastive learning. In the labeled tabular scenario, we observe that samples with the same labels tend to have similar feature sets. Based on this observation we make a bold hypothesis: powerful representation should model the invariant factors of feature sets with the same label. We, therefore, propose a random overlapping subsampling method to construct positive and negative samples in contrastive learning.
Figure <ref> illustrates how we randomly sample subsets and divide positive and negative pairs. Specifically, for each row (𝐱_i,y_i) we randomly sample k feature subsets {𝐬_i^1, 𝐬_i^2, …, 𝐬_i^k} and set all their labels to y_i. There will be a partial overlap of features between subsets. In this way, feature subsets with the same label form positive pairs, and subsets with different labels form negative pairs. Overall contrastive loss is:
ℒ_pretrain^CL(𝐗,𝐲)=1/| B |∑_i∈ B1/| P(i) |∑_p ∈ P(i)Ψ(𝐳_i^CLS,𝐳_p^CLS),
Ψ(𝐳_i^CLS,𝐳_p^CLS)=-log(exp(sim(𝐳_i^CLS,𝐳_p^CLS)/τ)/∑_i'∈ Bexp(sim(𝐳_i^CLS,𝐳_i'^CLS)/τ)),
where B is the set of samples in a batch; P(i)={p|p∈ B, p≠ i, y_i=y_p}. The previous tabular contrastive learning work SCARF <cit.> focused only on constructing different views of the same samples, simply treating all different samples as negative pairs. This only applies when the sample label classes are very rich such that the sample labels in a batch are almost all different. Compared to the tabular vertical fixed-partitioned contrastive learning method <cit.>, our method can learn more robust sample representations in richer feature subsets by random sampling.
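A minimal PyTorch sketch of this supervised contrastive objective on the [CLS] representations of the sampled feature subsets is given below; it excludes the anchor itself from the denominator, and the function name and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z_cls: torch.Tensor, labels: torch.Tensor, tau: float = 0.07):
    """z_cls: (B, d) [CLS] embeddings of randomly subsampled feature subsets;
    labels: (B,) label of the row each subset was drawn from.
    Subsets sharing a label are positives; all other subsets are negatives."""
    z = F.normalize(z_cls, dim=1)
    sim = z @ z.t() / tau                                         # pairwise similarities / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))               # drop i = i' from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = positives.sum(dim=1).clamp(min=1)
    loss_per_anchor = -log_prob.masked_fill(~positives, 0.0).sum(dim=1) / pos_counts
    return loss_per_anchor[positives.any(dim=1)].mean()           # average over anchors with >= 1 positive
```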
Self-supervised MTM. For large-scale unlabeled scenarios, we propose a novel masked table modeling (MTM) self-supervised cross-table pre-training task. On each sample row in all tables, we mask some percentage of features, and then reconstruct them based on the retained features.
We argue that if the model is able to successfully reconstruct the masked features from the retained features, then the model is able to learn the underlying relationships between features that can be transferred as shareable knowledge between different tables with similar feature columns, which will eventually indirectly bring closer the representations of samples with similar feature relationships.
The middle part of Figure <ref> shows the overview of our self-supervised MTM pre-training method, which can be divided into three steps.
In the first step, we select the features to be masked. Given an input table, we first convert all features of each sample into feature embeddings, as described in Section <ref>. Then we mask approximately p^mask of the features in each row (p^mask is set to 35% in our experiments; further ablation results are shown in Section <ref>). Specifically, we generate a binary mask vector 𝐦 = [m^1, m^2, …, m^a+b]∈{0, 1}^a+b, where each m^j is randomly sampled from a Bernoulli distribution with probability p^mask. A "1" in 𝐦 indicates a masked feature and a "0" indicates keeping the original feature.
In the second step, we replace the masked features with a shared, learnable vector 𝐞^mask∈ℝ^d, also called the mask token. Note that we add an additional header embedding, obtained by pooling the text token embeddings of the corresponding column name, to each mask token, because there is no order relationship between the columns of a table. The header embeddings here play the same role as the position embeddings in masked language modeling (MLM) <cit.> and masked image modeling (MIM) <cit.> tasks.
In the third step, we reconstruct the masked features. We feed the masked sample row 𝐱={𝐞^j|m^j=0}∪{𝐞^mask+𝐜^j|m^j=1} into the L-layer transformer encoder to get the encoded representations 𝐇={𝐡^𝐣}_j=1^a+b. For the masked numerical features, we pass them through a numerical projection matrix 𝐌_pro^num∈ℝ^d× 1 and then calculate the mean square error loss with the original feature values. For the masked categorical features, we pass them through a categorical projection matrix 𝐌_pro^cat∈ℝ^d× d and then compute the cosine similarity with the original feature embedding 𝐞^j. Here the feature embedding 𝐞^j is calculated in the same way as in Section <ref> but with the column names removed. We formulate the masked table modeling pre-training loss as follows:
ℒ_pretrain^mask(𝐗)= 1/| B |∑_i∈ BΦ(𝐱_𝐢,𝐞_𝐢,𝐳_𝐢),
Φ(𝐱_𝐢,𝐞_𝐢,𝐳_𝐢) = 1/N^num∑_j=1^N^num(x_i^j-z_i^j)^2 + 1/N^cat∑_j'=1^N^cat(1-sim(𝐞_i^j', 𝐳_i^j'))
where B is the set of samples in a batch; z_i^j=𝐡_i^j𝐌_pro^num; 𝐳_𝐢^𝐣'=𝐡_i^j'𝐌_pro^cat; N^num refers to the number of numerical features; N^cat refers to the number of categorical features. We do not compute the traditional cross-entropy loss for categorical features because the same category in the same feature column may be inconsistently labeled in different tables, which can lead to confusion when cross-table pre-training. For example, for the "gender" column, one table may have "man" corresponding to label "1" and "woman" corresponding to label "2", while another table might be the exact opposite, with "man" corresponding to label "2" and "woman" corresponding to label "1".
Rather than using a completely random mask strategy, we think that the proportion of numerical and categorical features being masked can be adjusted according to the downstream task. When the downstream scenario is a regression task, the model needs to predict a continuous value, so a pre-training task that predicts masked numerical features is more helpful. Similarly, for classification downstream tasks, masking is biased toward categorical features. The downstream tasks in our experiments are mainly classification prediction, so we set the mask ratio of categorical features to numerical features to 7:3 during pre-training, with an overall mask rate of 35%.
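A minimal sketch of the masking step and of the per-row reconstruction loss is given below; the exact split of the mask budget between feature types and the restriction of the loss to masked positions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mtm_mask(n_num: int, n_cat: int, p_total: float = 0.35, cat_share: float = 0.7):
    """Sample per-feature mask indicators with ~p_total of all features masked,
    the masked budget split roughly 7:3 between categorical and numerical columns."""
    p_cat = min(p_total * cat_share * (n_num + n_cat) / max(n_cat, 1), 1.0)
    p_num = min(p_total * (1 - cat_share) * (n_num + n_cat) / max(n_num, 1), 1.0)
    m_cat = torch.bernoulli(torch.full((n_cat,), p_cat))     # 1 = masked, 0 = kept
    m_num = torch.bernoulli(torch.full((n_num,), p_num))
    return m_num, m_cat

def mtm_loss(z_num, x_num, m_num, z_cat, e_cat, m_cat):
    """Per-row loss: MSE on masked numerical features (z_num vs. original values x_num)
    plus (1 - cosine similarity) on masked categorical features (z_cat vs. original
    feature embeddings e_cat); z_* are the projected encoder outputs."""
    num_loss = ((z_num - x_num) ** 2 * m_num).sum() / m_num.sum().clamp(min=1)
    cos = F.cosine_similarity(z_cat, e_cat, dim=-1)          # (n_cat,)
    cat_loss = ((1.0 - cos) * m_cat).sum() / m_cat.sum().clamp(min=1)
    return num_loss + cat_loss
```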
§.§ Fine-Tuning on Downstream Tabular Tasks
After cross-table pre-training, we discard the original projection heads and add a new task layer on top of the transformer encoder. We then fine-tune the parameters on the downstream task datasets. The downstream scenario in our experiments is mainly classification prediction, so we employ a simple linear classifier as the task layer. We use softmax <cit.> to calculate the probability of each label category and use cross-entropy as our supervised loss.
ℒ_task(𝐗,𝐲)=-1/N∑_i=1^N∑_j=1^T y_ij log(f_θ(𝐱_i)_j),
where label y_i uses one-hot encoding, f_θ(𝐱_i)_j is the predicted probability of class j, and T is the total number of label categories.
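A minimal sketch of the fine-tuning head, reusing the TabularEncoder sketch above (names are illustrative assumptions), is:

```python
import torch.nn as nn

class FineTuneClassifier(nn.Module):
    """Linear task layer on top of the pre-trained encoder's [CLS] representation."""
    def __init__(self, encoder: nn.Module, d: int = 128, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder                  # pre-trained, then fine-tuned end-to-end
        self.classifier = nn.Linear(d, n_classes)

    def forward(self, feats):                   # feats: (batch, n_features, d)
        h = self.encoder(feats)
        return self.classifier(h[:, 0])         # logits from z^CLS

# Training uses nn.CrossEntropyLoss(), i.e., softmax plus the cross-entropy objective above.
```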
§ EXPERIMENTS
In this section, we evaluate the effectiveness and superiority of on several benchmark tabular datasets. Specifically, we conducted extensive experiments to demonstrate the following two points:
* How does our backbone, which can accept heterogeneous table inputs, compare with the current state-of-the-art tabular neural network framework when faced with a fixed single table downstream task without pre-training?
* (key) Our large-scale cross-table pre-training can help improve the effectiveness of downstream tasks by self-supervised masked table modeling pre-training in large-scale unlabeled scenarios and supervised contrastive learning pre-training in lightweight labeled scenarios, respectively.
§.§ Experimental Setup
§.§.§ Datasets
The experimental dataset consists of two parts: upstream large-scale cross-table pre-training datasets and downstream tabular tasks for evaluating the effectiveness of our model and pre-training.
Large-scale cross-table pre-training dataset: We collected more than 2000 high-quality datasets with semantic column name information and performed some data cleaning, obtaining 1000 labeled datasets and 1000 unlabeled datasets. We call this dataset TabPretNet and describe it in detail in Section <ref>.
Public downstream tabular tasks: We selected 15 common and high-quality tabular datasets from OpenML-CC18 <cit.> to evaluate the effectiveness of our model and pre-training method. These downstream datasets contain both binary and multi-class classification tasks. We included the details and source of each dataset in Table <ref> & <ref> in the Appendix <ref>.
§.§.§ Competing Methods
We conduct experiments on the following "shallow" (e.g., tree-based) and neural network-based methods to show the efficacy and efficiency of CT-BERT on tabular learning.
Shallow baselines:
* Logistic Regression <cit.> is a linear classification algorithm that models the relationship between input variables and a binary outcome using a logistic function. It is widely used due to its simplicity, interpretability, and ability to handle large datasets efficiently.
* Xgboost <cit.> is an advanced implementation of gradient boosting algorithms. It has gained great popularity in machine learning competitions (e.g., Kaggle) and has been considered the dominant approach to modeling tabular data for a long time.
* LightGBM <cit.> is another gradient boosting tree framework. It employs a novel approach called "Gradient-based One-Side Sampling" (GOSS) to achieve faster training speeds and lower memory usage.
Neural network-based baselines:
* MLP (Multilayer Perceptron) <cit.> is a basic feed-forward fully connected artificial neural network architecture, but is considered a competitive neural network approach on tabular data.
* TransTab <cit.> is a newly proposed tabular framework that combines column description and table cells as the raw input to a transformer and is the current state-of-the-art tabular model.
* FT-Transformer <cit.> is an adaptation of the Transformer architecture <cit.> for tabular data (Feature Tokenizer + Transformer).
* TabNet <cit.> uses sequential attention to simulate the process of tree decision-making, enabling interpretability and more efficient learning on tabular data.
* VIME <cit.> is a self- and semi-supervised learning framework specifically designed for tabular data.
* SAINT <cit.> is a newly proposed hybrid deep learning approach to solving tabular data problems and performs attention over both rows and columns.
* DCN-v2 <cit.> is an improved version of Deep & Cross Network (DCN), and claimed to be able to automatically and efficiently captures feature interactions in tabular data.
* AutoInt <cit.> is a click-through rate prediction model, i.e., a model for a type of structured data task. It uses a multi-head self-attentive neural network to learn high-order interactions of the input features.
§.§.§ Metrics
We follow previous work <cit.> in using AUC <cit.> as the main evaluation metric, and we report the average over 5-fold cross-validation <cit.> as the final result. Note that within each fold's training set, we partitioned 20% as a validation set, which was utilized for hyperparameter selection and early stopping. For the sake of fairness, we employed the identical dataset splitting setting for all baseline algorithms and for CT-BERT on all downstream task datasets.
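A sketch of this evaluation protocol is shown below; the model_factory interface and its fit/eval_set signature are assumptions for illustration, not the API of any specific baseline.

```python
# 5-fold cross-validation with AUC; 20% of each training fold is held out for
# early stopping and hyperparameter selection.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.metrics import roc_auc_score

def evaluate(model_factory, X, y, seed=0):
    scores = []
    for train_idx, test_idx in StratifiedKFold(5, shuffle=True, random_state=seed).split(X, y):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X[train_idx], y[train_idx], test_size=0.2, stratify=y[train_idx], random_state=seed)
        model = model_factory()
        model.fit(X_tr, y_tr, eval_set=(X_val, y_val))        # assumed early-stopping interface
        prob = model.predict_proba(X[test_idx])
        if prob.shape[1] == 2:                                 # binary task
            scores.append(roc_auc_score(y[test_idx], prob[:, 1]))
        else:                                                  # multi-class: one-vs-rest AUC
            scores.append(roc_auc_score(y[test_idx], prob, multi_class="ovr"))
    return float(np.mean(scores))
```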
§.§.§ Implementation Details
For details of all baseline implementations see Appendix <ref>; the settings for all baselines remain consistent across all experiments unless otherwise specified. In the data pre-processing phase, we scale numerical features to [0, 1] by min-max normalization in all methods. For categorical features, we use ordinal codes to represent them in all baselines; note, however, that in CT-BERT we use the raw textual values of the categorical features in order to better exploit their semantic information. CT-BERT uses a 4-layer transformer, where the token embedding dimension is 128, the hidden dimension of the middle dense layer is 256, and the self-attention module has 8 heads. We use a dropout of 0.3 in all attention layers and feed-forward layers, and ReLU for all activation functions. The supervised pre-training method is trained on the 1000 labeled datasets, and the self-supervised pre-training method is trained on all 2000 datasets. We train using the Adam <cit.> optimizer with a learning rate in {5e-5, 1e-4, 3e-4}, where the learning rate in the fine-tuning phase is smaller than that in the pre-training phase. The batch size is in {64, 128, 256}. We use a pre-trained BERT-base-uncased <cit.> model from Hugging Face[https://github.com/huggingface] to obtain token embeddings that are rich in semantic information. In the pre-training phase, we set the maximum number of training epochs to 500 for both the supervised contrastive learning and the self-supervised masked table modeling tasks. In the fine-tuning phase, the maximum number of training epochs is 200 and the patience value is set to 20 for early stopping. Experiments were conducted with 8 NVIDIA V100 GPUs, an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz, and 128GB RAM. We use the DeepSpeed <cit.> framework for parallel computation acceleration. DeepSpeed offers a range of optimization techniques, including model parallelism, data parallelism, and mixed-precision training, which improves the efficiency of our large-scale cross-table pre-training, the stage that occupies most of the computational resources in our experiments.
§.§ Overall Performance
In this section, we report the overall performance of CT-BERT. The results are shown in Table <ref>.
§.§.§ Supervised Learning from Scratch
As can be seen in Table <ref>, CT-BERT_NoPT outperforms all the existing works on the standardized benchmarking datasets on average. Although TransTab <cit.> greatly outperforms the other baseline methods, CT-BERT_NoPT is still higher than TransTab by 0.8% on average. We attribute this to the fact that CT-BERT_NoPT models at the feature level, while TransTab <cit.> models at the token level, which may not be reasonable on tabular data. The experimental results also show that TransTab's performance drops abruptly on some datasets, such as car and phishingweb. In addition, we found that CT-BERT_NoPT is also comparable to FT-Transformer <cit.> and SAINT <cit.>.
We believe that, on a single table, CT-BERT_NoPT is essentially similar to these methods, which extract features from the table and then model feature interactions using a similar transformer encoder.
The difference, however, is that CT-BERT_NoPT can receive input from heterogeneous tables. This gives our approach a natural advantage in cross-table pre-training, which is detailed in Section <ref>.
§.§.§ Cross-table Pre-training.
We mainly compare the pre-trained models with the supervised learning-from-scratch variant of CT-BERT.
Supervised: In labeled scenarios, our supervised contrastive learning cross-table pre-training model CT-BERT_P_S achieves state-of-the-art average performance. As evident from the results in Table <ref>, CT-BERT_P_S outperforms supervised training from scratch (CT-BERT_NoPT) by 1.29% on average and achieves better performance on 10 out of 15 diverse downstream tabular tasks.
Moreover, we observed that CT-BERT_P_S achieves comparatively competitive average performance relative to the masked table modeling self-supervised cross-table pre-training method. We attribute this to CT-BERT_P_S's ability to fully leverage the label information, enabling the model to learn more powerful sample representations, whereas self-supervised methods may require a larger amount of training data to achieve significant advancements.
Self-supervised: In large-scale unlabeled scenarios, as can be seen in Table <ref>, our masked table modeling self-supervised cross-table pre-training model CT-BERT_P_M outperforms supervised training from scratch (CT-BERT_NoPT) by 1.2% on average, and CT-BERT_P_M achieves better performance on 13 out of 15 diverse downstream tabular tasks. It is noteworthy that our cross-table pre-trained model exhibits significant improvements on the cylinder-bands, higgs, and Amazon datasets. We hypothesize that this result can be attributed to the presence of certain tables in the pre-training data that bear close relevance to these downstream tasks. Therefore, we have reason to believe that masked table modeling cross-table pre-training on ultra-large-scale datasets is a highly promising approach on the path toward a comprehensive universal table model.
CT-BERT is the first attempt at such large-scale cross-table pre-training. Our experimental results demonstrate the feasibility of learning shareable knowledge across different tables through cross-table pre-training, which helps the model achieve better generalization on diverse downstream tasks. Both the supervised and the self-supervised pre-training methods achieved good performance. We believe that supervised pre-training places higher requirements on the dataset but may be better suited for specific scenarios, while self-supervised pre-training has the potential for greater scalability through larger pre-training datasets in the future.
§.§ Few-shot Learning
As widely recognized, a significant advantage of pre-trained models is that they still work well when the downstream task dataset is relatively scarce, a setting commonly referred to as few-shot learning <cit.>. This capability stems from the fact that the model can learn rich shareable knowledge from large-scale upstream datasets. In the domain of tabular tasks, there are numerous practical application scenarios characterized by limited data resources, such as medical diagnosis <cit.>. In such contexts, the exceptional few-shot learning ability of pre-trained models becomes invaluable. Therefore, we conducted extensive experiments to explore the practical effectiveness of CT-BERT in few-shot learning settings.
Specifically, for each downstream classification tabular dataset, we randomly sampled 5/10/20 samples from each class to construct three new 5-shot/10-shot/20-shot tabular datasets, and then performed both supervised training from scratch and pre-training-then-fine-tuning on these few-shot datasets; a sampling sketch is given below. The experimental results are presented in Table <ref>. Both the self-supervised and supervised pre-trained models significantly outperform the baseline of learning from scratch in the few-shot setting. In the 5-shot case, CT-BERT_P_S outperforms training from scratch (CT-BERT_NoPT) by 8.4% on average, and CT-BERT_P_M also surpasses it by 3.58% on average. Furthermore, we observe that the pre-trained models exhibit a greater improvement when fewer samples are available: the improvement is most significant in the 5-shot case and relatively weaker in the 20-shot case. We consider this a reasonable phenomenon, since the shareable knowledge learned through cross-table pre-training is relatively more valuable when the training data is scarce. In conclusion, all these experimental results strongly demonstrate the tremendous potential of cross-table pre-training in the context of few-shot learning.
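For illustration, the k-shot subsets can be constructed as follows (a hedged sketch; the helper name and seeding are assumptions):

```python
import numpy as np

def k_shot_subset(X: np.ndarray, y: np.ndarray, k: int, seed: int = 0):
    """Build a k-shot training set by sampling at most k rows per class."""
    rng = np.random.default_rng(seed)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=min(k, int(np.sum(y == c))), replace=False)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

# Example: X5, y5 = k_shot_subset(X_train, y_train, k=5)
```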
§.§ Ablation Studies
In order to demonstrate that modeling at the feature level is more effective than the previously used word-token-level modeling on tabular data, we conducted ablation experiments. Specifically, we do not pool the word token embeddings into one feature embedding but feed them directly into the transformer layers. The experimental result is presented in Table <ref> and shows that feature-level modeling is significantly better than word-token-level modeling. Additionally, we further evaluated different pooling strategies: average pooling, max pooling, and self-attention <cit.> pooling. The results are shown in Table <ref>. Among these strategies, average pooling performs best. We believe the reason is that max pooling may fail to distinguish between different feature values in some cases; for example, the maximum value may come from a word token embedding in the column name, which is identical for all sample rows. The self-attention mechanism, in turn, may be overly complex for this simple information-extraction step, whereas average pooling handles it simply and efficiently.
§.§ Further Analysis
§.§.§ Convergence Curves
Figure <ref> compares the convergence curves of two paradigms: "training from scratch" and "pre-training then fine-tuning". We observed that pre-training and then fine-tuning leads to faster convergence and better results. This demonstrates that has learned beneficial shareable knowledge for downstream tasks through cross-table pre-training. Furthermore, pre-training and then fine-tuning can achieve reasonable results within a short period of time. This significantly improves the efficiency of executing downstream tasks that do not require high precision. It also partially alleviates the longer training time issue associated with neural network training compared to traditional tree-based machine learning methods <cit.>.
§.§.§ Masking Ratio
Previous research <cit.> has suggested that a higher mask rate is required to achieve better performance in masked image modeling tasks, whereas a lower mask rate is sufficient for masked language modeling tasks. In this experiment, we further investigate the impact of mask rates on masked table modeling tasks, as shown in Figure <ref>. We found that the model performs well with mask rates between 30% and 50%; an excessively high mask rate leads to a steep performance drop, while an excessively low mask rate leads to a more moderate decline. We attribute this to the high information density of tabular data, where a change in a single feature value can significantly alter the meaning of a sample, so too high a mask rate makes it difficult for the model to learn the correct feature relationships.
§.§.§ Hyperparametric Sensitivity Analysis.
We analyzed the sensitivity to the number of randomly sampled feature subsets and to the learning rate. We randomly selected some datasets to experiment with the CT-BERT_P_S method. The experimental results are shown in Fig. <ref>. The settings are consistent with Section <ref> except for the corresponding hyperparameters. It can be seen that CT-BERT is robust to these hyperparameters.
§ CONCLUSION
With CT-BERT and TabPretNet, we hope to initiate scaled cross-table pre-training for the database and data mining community.
Speaking humbly, we deem CT-BERT a pioneering work in scaling tabular data pre-training, in that it works in either a supervised or a self-supervised manner.
We empirically demonstrate that facilitating the pre-training procedure across large-scale tabular datasets indeed offers decent efficacy benefits.
Perceiving it through the lens of the development of current LLMs, our model is still small (50M parameters), roughly the same size as BERT-base <cit.>, even though CT-BERT is the largest-scale pre-trained model in tabular modeling thus far.
We think that, for tabular data pre-training, we are still at the stage that the BERT model occupied in NLP a few years ago. That is to say, the model size and the data volume still fall far behind the development of the LLMs, such as ChatGPT or its rivals <cit.>.
On the bright side, the volume of available tabular data is truly gigantic — wherever a database system is deployed there will be tabular data — but perhaps much more decentralized than the text and vision data.
In the future, we hope to explore even further scaling CT-BERT and adapting it to more diversified data domains.
§ APPENDIX
§.§ Baseline architecture and implementation
The setup of our baseline follows the previous work <cit.> and includes the following methods:
* Logistic Regression: Use the default setting of the package Scikit-Learn. The maximum number of estimators is set to 1000.
* XGBoost: Implemented based on the XGBoost package. We set the maximum number of estimators in {50, 100, 300} and the max depth in {5, 8, 10}.
* LightGBM: Implemented based on the LightGBM. We set the maximum number of estimators in {50, 100, 300} and the max depth in {5, 8, 10}.
* MLP: Dense layers with hidden dimensions {256, 256}. Dropout with a rate of 0.1 is used. They are trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 5 with 100 maximum epochs.
* TabNet: Use the official implementation with the default recommended parameters[https://github.com/dreamquark-ai/tabnet]. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈{1e-4, 1e-3, 2e-2}, n_a,n_b ∈ {8, 16, 64, 128}, γ ∈ {1.3, 1.5, 1.8}, categorical embedding dimension ∈ {1, 8, 16} and early stopping patience of 5 with 100 maximum epochs.
* DCN-v2: Use the implementation by paper <cit.>[https://github.com/Yura52/tabular-dl-revisiting-models]. The number of cross is 2. The dropout rate for the feedforward component is 0.1. MLP part has two dense layers of dimension {256, 128}. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 in 100 maximum epochs.
* AutoInt: Use the implementation by paper <cit.><ref>. The attention layer number is set to 2. The attention head number is set to 2. MLP part has two dense layers of dimension 256, 128; dropout deactivated; trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 in 100 maximum epochs.
* SAINT: Use the official implementation[https://github.com/somepago/saint]. The embedding size is 32 dimensions. 6 transformer layers are used. The number of heads of attention is ∈ {4, 8}. The dropout rate is 0.1 in all attention layers and feed-forward layers. Inside the self-attention layer, the q, k, and v vectors are of dimension 16, and in the intersample attention layer, they are of size 64.
* FT-Transformer: Use the official implementation[https://github.com/Yura52/rtdl]. Feed-forward component has 128 dimensions. 2 transformer layers are used. The number of heads of attention is ∈ {2, 4, 8}. The dropout rate is 0.1.
* VIME: We reproduce it by PyTorch <cit.> based on the original official implementation[https://github.com/jsyoon0823/VIME]. We train the model on all training data taking mask rate 0.3, batch size 128, learning rate 1e-4, and 10 epochs. During the fine-tuning phase, we add a classifier after the encoder with three dense layers of 100 dimensions and ReLU activations. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5,1e-4,1e-3}, and early stopping patience of 10 in 100 maximum epochs.
* TransTab: Use the official implementation[https://github.com/RyanWangZf/transtab]. Token embedding has 128 dimensions. 2 transformer layers are used. The number of heads of attention is 8. We train the model on all downstream task data taking batch size 64, learning rate 1e-4, dropout rate 0, and early stopping patience of 10 in 100 maximum epochs. We run the pre-training, transfer learning, and vanilla supervised training methods in the paper, and take the highest score.
§.§ Details of the downstream task datasets
The downstream task datasets are mainly from the OpenML-CC18 benchmark <cit.>.
|
http://arxiv.org/abs/2307.04581v1 | 20230710141926 | Galerkin-Bernstein Approximations of the System of Time Dependent Nonlinear Parabolic PDEs | [
"Hazrat Ali",
"Nilormy Gupta Trisha",
"Md. Shafiqul Islam"
] | math.NA | [
"math.NA",
"cs.NA"
] |
§ ABSTRACT
The purpose of the research is to find the numerical solutions to the system of time dependent nonlinear parabolic partial differential equations (PDEs) utilizing the Modified Galerkin Weighted Residual Method (MGWRM) with the help of modified Bernstein polynomials. An approximate solution of the system has been assumed in accordance with the modified Bernstein polynomials. Thereafter, the modified Galerkin method has been applied to the system of nonlinear parabolic PDEs and has transformed the model into a time dependent system of ordinary differential equations. Then the system has been converted into recurrence equations by employing the backward difference approximation. The iterative calculation is performed using the Picard iterative method. A few renowned problems are then solved to test the applicability and efficiency of our proposed scheme. The numerical solutions at different time levels are then displayed numerically in tabular form and graphically by figures. A comparative study is presented along with the L_2 and L_∞ norms.
Keywords: Parabolic PDE System, Modified Galerkin Method, Modified Bernstein Polynomial, Backward Difference Method, Gray-Scott Model
§ INTRODUCTION
Reaction-diffusion systems have been extensively studied during the 20^th century. The study of the reaction-diffusion system reveals that different species have interactions with one another and that after these interactions, new species are created via chemical reactions. The solution of the reaction-diffusion system shows the chemical reaction's underlying mechanism and the various spatial patterns of the chemicals involved.
Animal coats and skin coloration have been linked to reaction-diffusion processes, which have been considered to constitute a fundamental basis for processes associated with morphogenesis in biology.
There are numerous notable examples of coupled reaction-diffusion systems such as the Brusselator model, Glycolysis model, Schnackenberg model, Gray-Scott model, etc. With the help of the system size expansion, a stochastic Brusselator model has been suggested and investigated in the study cited in <cit.>. The reaction-diffusion Brusselator model has been addressed by Wazwaz et al. through the decomposition technique <cit.>. Because of its potential to provide a close analytical solution, the fractional-order Brusselator model was studied by Faiz et al <cit.>. The Brusselator system stability of a reaction-diffusion cell as well as the Hopf bifurcation analysis of the system have been detailed by Alfifi <cit.>. Qamar has analyzed the dynamics of the discrete-time Brusselator model with the help of the Euler forward and nonstandard difference schemes <cit.>. The research article cited in <cit.> has been prepared by investigating the numerical analysis of the Glycolysis model using a well-known finite difference scheme.
Adel et al <cit.> have examined the synchronization problem of the Glycolysis reaction-diffusion model and designed a novel convenient control law. David et al <cit.> have analyzed the stability of turing patterns of the Schnackenberg model. Liu et al <cit.> have developed the bifurcation analysis of the aforementioned model. Khan et al. <cit.> have established a scheme for the solution of the fractional order Schnackenberg reaction-diffusion system. Numerical explorations have been applied to analyze the pattern formations of the model in the research article cited in <cit.>. Gray and Scott <cit.> were the first to introduce the Gray-Scott model. They have proposed this model as an alternative to the autocatalytic model of Glycolysis <cit.>. For this model, Pearson <cit.> has employed experimental studies to depict several sophisticated spot-type structures. Mazin et al. <cit.> have conducted an experiment using a computer simulation to investigate a range of far-from-equilibrium occurrences that emerge in a bistable Gray-Scott model. Many renowned authors <cit.> have evaluated the preceding model in which self-replicating structures have been noticed. McGough et al. <cit.> have conducted research on the bifurcation analysis of the patterns that are depicted in the model. In the research cited in <cit.>, the linear stability and periodic stationary solutions of this model have been investigated. Some analytical results of this model have also been explored <cit.>. Several prominent authors have studied the spatiotemporal chaos of the model in the research studies cited in <cit.> and <cit.>. Furthermore, Wei <cit.> has analyzed the pattern formation of the two-dimensional Gray-Scott model. The model has also been explored by Kai et al. <cit.> using an innovative technique known as the second-order explicit implicit methodology. In recent years, the nonlinear Galerkin finite element approach has become increasingly prevalent as a means to investigate the model <cit.>. Mach <cit.> has performed an in-depth examination of the quantitative evaluation of the model's numerical solution. In references <cit.> and <cit.>, the Gray-Scott reaction-diffusion system has been the subject of extensive wave modeling studies by eminent scholars. The simulation of the coupled model has been carried out by Owolabi et al. <cit.> using the higher-order Runge-Kutta method. The well-known Gray-Scott model's numerical findings have been calculated using the help of the hyperbolic B-spline <cit.>. In order to analyze the ionic version of the model while it is being affected by an electric field, the Galerkin method has been deployed <cit.>. With the use of the hybrid-asymptotic numerical method, Chen et al. <cit.> have investigated the model's dynamic behavior and stability. In the research study cited in <cit.>, special polynomials have been employed to numerically solve the Gray-Scott model. Han et al. <cit.> have conducted an exhaustive investigation on the three-dimensional Gray-Scott model. In the process of assessing the model, the cubic B-spline has proven to be of considerable use by Mittal et al <cit.>.
In the disciplines of engineering and mathematical physics, the Weighted Residual Method is an approximation method that can be leveraged to resolve problems. Analysis of structures, thermal expansion, stream of fluids, movement of masses, and the electromagnetic potential, etc. are examples of prominent problem fields of concern. Several distinct Weighted Residual Method variations are within our reach. The Galerkin Weighted Residual Method (also known as GWRM) has been put into practice for over a century, since long before the invention of computers. It is generally agreed that this strategy is one of the best and most often used approaches available. Lewis and Ward have provided a comprehensive overview of the process in the article that is referenced in <cit.>. This methodology has been effectively implemented in the well-known Black-Scholes model by Hossan et al. <cit.>. Shirin et al. <cit.> have employed the Galerkin method in conjunction with other special polynomials to analyze the Fredholm equations. In the research referred to in <cit.>, the approach was utilized to solve boundary value problems. In addition, this method has been used to perform a numerical calculation of the eigenvalues associated with the Sturm-Liouville problem <cit.>. There have been several successful uses of this method for problems involving metal beams and polygonal ducts with rounded edges <cit.>.
The objective of this study is to employ the modified Galerkin Weighted Residual Method in conjunction with the appropriate special polynomials to numerically evaluate the one-dimensional reaction-diffusion systems. Based on our best information, this study is presently unavailable. In addition to that, the study has provided the validation necessary to use the approach in one-dimensional reaction-diffusion systems. The main merit and advantage of the study are that by solving this type of system of equations, we will be able to analyze the behavior of the ecological system and forecast its future.
The article is split up into four sections. Section 2 provides a detailed explanation of the formulation of our proposed method to solve the system of nonlinear parabolic partial differential equations. In the third section, the approach's implications are shown while analyzing the aforementioned system. Numerical and graphical representations are included here as well. The fourth section contains some concluding remarks and a general discussion.
§ MATHEMATICAL FORMULATION
Let us commence with the following system over the domain [-L, L]
∂ M/∂ t = ε_1 ∂^2 M/∂ x^2 - f(M,N) + p(1-M)
∂ N/∂ t = ε_2 ∂^2 N/∂ x^2 + f(M,N) - (p+q)N
The boundary and initial conditions are as follows:
M(-L,t) = M(L,t) = θ_0,   N(-L,t) = N(L,t) = γ_0
and
M(x,0) = M_0(x),   N(x,0) = N_0(x)
Let us assume the approximate solutions of System (<ref>) to be of the form
M(x,t) = θ_0 + ∑_j=0^n c_j(t) B_j(x)
N(x,t) = γ_0 + ∑_j=0^n d_j(t) B_j(x)
where B_j's are the modified Bernstein polynomials and c_j and d_j are the coefficients dependent on time.
The first terms of the approximate solutions (<ref>) have come from the boundary conditions of the system. The modified Bernstein polynomials are defined as follows:
B_n,m(x) = C(m,n) (x-L)^n (U-x)^(m-n) (x-L)(U-x)/(U-L)^m,   n = 0, 1, 2, ..., m, with C(m,n) the binomial coefficient,
where U & L are the upper and lower limits of x.
The last terms of Solution (<ref>) will vanish at the boundary points.
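As a quick numerical check of this vanishing property, a small Python sketch (assuming the normalization by (U-L)^m reconstructed above) is:

```python
import numpy as np
from math import comb

def modified_bernstein(n, m, x, L, U):
    """Modified Bernstein polynomial B_{n,m}(x) on [L, U]; vanishes at both endpoints."""
    return comb(m, n) * (x - L)**n * (U - x)**(m - n) * (x - L) * (U - x) / (U - L)**m

L, U, m = -1.0, 1.0, 5
x = np.linspace(L, U, 201)
B = np.array([modified_bernstein(n, m, x, L, U) for n in range(m + 1)])
print(B[:, 0], B[:, -1])   # all basis functions are zero at x = L and x = U
```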
Therefore, the residual functions are
R_1(x,t) = ∂M/∂ t - ε_1 ∂^2 M/∂ x^2 + f(M,N) - p(1-M)
R_2(x,t) = ∂N/∂ t - ε_2 ∂^2 N/∂ x^2 - f(M,N) + (p+q)N
Now we form the residual equations as:
∫_-L^LR_1(x,t) B_i(x)dx=0
∫_-L^LR_2(x,t) B_i(x)dx=0
From the first residual equation, we can write
∫_-L^L[∂M/∂ t-ε_1∂^2 M/∂ x^2 + f(M, N) - p(1-M)]B_i(x)dx=0
Now we apply integration by parts in the above equation
∫_-L^L ∂M/∂ t B_i dx + ∫_-L^L ε_1 ∂M/∂ x ∂ B_i/∂ x dx + ∫_-L^L f(M, N) B_i dx - ∫_-L^L p(1-M) B_i dx = ε_1[∂M/∂ x B_i]_-L^L
Then we substitute solution (<ref>) in Equation (<ref>). Therefore, the equation becomes,
∫_-L^L∂/∂ t(θ_0+∑_j=0^nc_jB_j)B_idx+∫_-L^Lε_1∂/∂ x(θ_0+∑_j=0^nc_jB_j)∂ B_i/∂ xdx +∫_-L^Lf(θ_0+∑_j=0^nc_jB_j, γ_0+∑_j=0^nd_jB_j)B_i dx
-∫_-L^Lp(1-(θ_0+∑_j=0^nc_jB_j))B_idx=ε_1[∂/∂ x(θ_0+∑_j=0^nc_jB_j)B_i]_-L^L
or,∫_-L^L∂θ_0/∂ tB_idx+∫_-L^L∑_j=0^n∂ c_j/∂ tB_j B_idx+∫_-L^Lε_1∂θ_0/∂ x∂ B_i/∂ xdx+∑_j=0^nc_j∫_-L^Lε_1∂ B_j/∂ x∂ B_i/∂ xdx
+∫_-L^Lf(θ_0+∑_j=0^nc_jB_j, γ_0+∑_j=0^nd_jB_j)B_i dx-∫_-L^LpB_idx+∫_-L^Lpθ_0 B_idx+∑_j=0^nc_j∫_-L^LpB_jB_idx
=ε_1[∂θ_0/∂ xB_i]_-L^L+ε_1[∑_j=0^nc_j∂ B_j/∂ xB_i]_-L^L
This finally becomes
∫_-L^L∂θ_0/∂ tB_idx+∫_-L^L∑_j=0^n∂ c_j/∂ tB_j B_idx+∫_-L^Lε_1∂θ_0/∂ x∂ B_i/∂ xdx+∑_j=0^nc_j∫_-L^Lε_1∂ B_j/∂ x∂ B_i/∂ xdx
+∫_-L^LΓ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_i dx+∑_j=0^nd_j∫_-L^LΩ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_j B_idx
-∫_-L^LpB_idx+∫_-L^Lpθ_0 B_idx +∑_j=0^nc_j∫_-L^LpB_jB_idx =ε_1[∂θ_0/∂ xB_i]_-L^L+ε_1[∑_j=0^nc_j∂ B_j/∂ xB_i]_-L^L
The first terms on both sides and the third term on the left-hand side of Equation (<ref>) become zero because of the boundary conditions. Therefore, the equation reduces to,
∑_j=0^nd c_j/dt∫_-L^LB_j B_idx+∑_j=0^nc_j(∫_-L^Lε_1dB_j/dxd B_i/dxdx+∫_-L^LpB_jB_idx-ε_1[d B_j/dxB_i]_-L^L)
+∑_j=0^nd_j∫_-L^LΩ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_j B_idx=-∫_-L^LΓ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_i dx
+∫_-L^LpB_idx -∫_-L^Lpθ_0 B_idx
The derivative and non-derivative terms of Equation (<ref>) can be summarized via standard matrix notation as follows:
[C_1]{dc_j/dt}+[K_1]{c_j}+[K_2]{d_j}=[F_1]
where
C_1_ij= ∫_-L^LB_j B_idx
K_1_ij= ∫_-L^Lε_1dB_j/dxd B_i/dxdx+∫_-L^LpB_jB_idx-ε_1[d B_j/dxB_i]_-L^L
K_2_ij= ∫_-L^LΩ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_j B_idx
F_1_i= -∫_-L^LΓ(θ_0, γ_0, ∑_k=0^nc_kB_k, ∑_l=0^nd_lB_l)B_i dx+∫_-L^LpB_idx -∫_-L^Lpθ_0 B_idx
Here, K_1 and K_2 are n × n matrices, C_1 is an n × n matrix, and F_1 is an n × 1 vector. The first two matrices K_1 and K_2 are called stiffness matrices. The other two, C_1 and F_1, are called the forced matrix and the load vector, respectively.
Therefore, we apply the backward difference method on the first term of Equation (<ref>) and rearrange the resulting terms as follows:
[C_1]{(c_j - c_{j-1})/Δ t} + [K_1]{c_j} + [K_2]{d_j} = [F_1]
or, (1/Δ t[C_1] + [K_1]){c_j} + [K_2]{d_j} = 1/Δ t[C_1]{c_{j-1}} + [F_1]
The second residual equation can be written as,
∫_-L^L[∂N/∂ t-ε_2∂^2 N/∂ x^2- f(M,N) +(p+q)N]B_i(x)dx=0
After employing integration by parts and then substitution of (<ref>) reduces the above equation,
∫_-L^L ∂/∂ t(γ_0+∑_j=0^n d_j B_j) B_i dx + ∫_-L^L ε_2 ∂/∂ x(γ_0+∑_j=0^n d_j B_j) ∂ B_i/∂ x dx + ∫_-L^L (p+q)(γ_0+∑_j=0^n d_j B_j) B_i dx
-∫_-L^Lf(θ_0+∑_j=0^nc_jB_j, γ_0+∑_j=0^nd_jB_j)B_i dx=ε_2[∂/∂ x(γ_0+∑_j=0^nd_jB_j)B_i]_-L^L
or, ∫_-L^L∂γ_0/∂ tB_idx+∫_-L^L∑_j=0^n∂ d_j/∂ tB_j B_idx+∫_-L^Lε_2∂γ_0/∂ x∂ B_i/∂ xdx+∑_j=0^nd_j∫_-L^Lε_2∂ B_j/∂ x∂ B_i/∂ xdx
-∫_-L^LΠ(θ_0, γ_0, ∑_l=0^nd_lB_l)B_i dx-∑_j=0^nc_j∫_-L^LΦ( γ_0, ∑_l=0^nd_lB_l)B_j B_idx+∑_j=0^nd_j∫_-L^L(p+q)B_jB_idx
=-∫_-L^L(p+q)γ_0B_idx+ε_2[∂γ_0/∂ xB_i]_-L^L+ε_2[∑_j=0^nd_j∂ B_j/∂ xB_i]_-L^L
Since the first, and third terms on the left-hand side and the first term on the right-hand side of Equation (<ref>) become zero, the equation reduces to,
∑_j=0^nd d_j/dt∫_-L^LB_j B_idx+∑_j=0^nd_j(∫_-L^Lε_2dB_j/dxd B_i/dxdx+∫_-L^L(p+q)B_jB_idx-ε_2[d B_j/dxB_i]_-L^L)
-∑_j=0^nc_j∫_-L^LΦ(γ_0, ∑_l=0^nd_lB_l)B_j B_idx =∫_-L^LΠ(θ_0, γ_0, ∑_l=0^nd_lB_l)B_i dx -∫_-L^L(p+q)γ_0B_idx
The derivative and non-derivative terms of Equation (<ref>) can be summarized via standard matrix notation as follows:
[C_2]{dd_j/dt}+[K_3]{c_j}+[K_4]{d_j}=[F_2]
where
C_2_ij= ∫_-L^LB_j B_idx
K_3_ij= -∫_-L^LΦ(γ_0, ∑_l=0^nd_lB_l)B_j B_idx
K_4_ij= ∫_-L^Lε_2dB_j/dxd B_i/dxdx+∫_-L^L(p+q)B_jB_idx-ε_2[d B_j/dxB_i]_-L^L
F_2_i= ∫_-L^LΠ(θ_0, γ_0, ∑_l=0^nd_lB_l)B_i dx -∫_-L^L(p+q)γ_0 B_idx
Here, K_3 and K_4 are n × n matrices, C_2 is an n × n matrix, and F_2 is an n × 1 vector. They are called the stiffness matrices, the forced matrix, and the load vector, respectively.
The application of the backward difference method on the first term of Equation (<ref>) results in the following equation,
(1/Δ t[C_2]+[K_4]){d_j}+[K_3]{c_j}=1/Δ t[C_2]{d_j-1}+[F_2]
By assembling Equations (<ref>) and (<ref>), we get the following recurrent system,
(1/Δ t[C_1] + [K_1]){c_j} + [K_2]{d_j} = 1/Δ t[C_1]{c_{j-1}} + [F_1]
[K_3]{c_j} + (1/Δ t[C_2] + [K_4]){d_j} = 1/Δ t[C_2]{d_{j-1}} + [F_2]
To calculate the initial values of c_j and d_j, the initial conditions are set in Galerkin sense as follows,
∫_-L^LM(x,0)B_idx=∫_-L^LM_0(x)B_idx
or, ∫_-L^L(θ_0+∑_j=1^nc_j(0)B_j(x))B_idx=∫_-L^LM_0(x)B_idx
equivalently, ∑_j=0^nc_j(0) ∫_-L^LB_jB_idx=∫_-L^LM_0(x)B_idx-∫_-L^Lθ_0 B_idx
and
∫_-L^LN(x,0)B_idx=∫_-L^LN_0(x)B_idx
equivalently, ∫_-L^Lγ_0 B_idx+∫_-L^L∑_j=0^nd_j(0) B_jB_idx=∫_-L^LN_0(x)B_idx
or, ∑_j=0^nd_j(0) ∫_-L^LB_jB_idx=∫_-L^LN_0(x)B_idx-∫_-L^Lγ_0 B_idx
This process will help us to evaluate the numerical solutions of the nonlinear reaction-diffusion systems.
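To make the whole pipeline concrete, the following is a runnable but deliberately simplified sketch of the scheme for the Gray-Scott-type nonlinearity f(M,N)=MN^2, using, for concreteness, the parameter values of Test Problem 2 in the next section. The Galerkin matrices are assembled by simple trapezoidal quadrature, and the entire nonlinear term is lagged to the previous Picard iterate; this is one common variant of the linearization described above, not necessarily the exact splitting used in our derivation.

```python
import numpy as np
from math import comb

L_, U_, eps1, eps2, p, q = -50.0, 50.0, 1.0, 0.01, 0.01, 0.12
n_basis, dt, n_steps = 7, 0.1, 10
x = np.linspace(L_, U_, 2001)
dx = x[1] - x[0]
w = np.full_like(x, dx); w[[0, -1]] = dx / 2                # trapezoidal quadrature weights

def basis(n, m=n_basis - 1):
    return comb(m, n) * (x - L_)**n * (U_ - x)**(m - n) * (x - L_) * (U_ - x) / (U_ - L_)**m

B = np.array([basis(n) for n in range(n_basis)])            # (n_basis, n_x)
dB = np.gradient(B, x, axis=1)

mass = (B * w) @ B.T                                        # C_ij = integral of B_j B_i dx
def stiff(eps, react):                                      # integral of (eps B_j' B_i' + react B_j B_i) dx
    return eps * (dB * w) @ dB.T + react * mass

def project(g):                                             # Galerkin projection of g onto the basis
    return np.linalg.solve(mass, (B * w) @ g)

theta0, gamma0 = 1.0, 0.0                                   # boundary values of M and N
M0 = 1 - 0.5 * np.sin(np.pi * (x - 50) / 100)**100
N0 = 0.25 * np.sin(np.pi * (x - 50) / 100)**100
c, d = project(M0 - theta0), project(N0 - gamma0)           # initial coefficients c_j(0), d_j(0)

A_M = mass / dt + stiff(eps1, p)                            # constant left-hand-side matrices
A_N = mass / dt + stiff(eps2, p + q)
for step in range(n_steps):                                 # backward-difference time stepping
    c_new, d_new = c.copy(), d.copy()
    for _ in range(20):                                     # Picard iteration within the time step
        M = theta0 + c_new @ B
        N = gamma0 + d_new @ B
        f = M * N**2                                        # nonlinear term at the previous iterate
        F_M = (B * w) @ (-f + p * (1 - theta0)) + mass @ c / dt
        F_N = (B * w) @ (f - (p + q) * gamma0) + mass @ d / dt
        c_new, d_new = np.linalg.solve(A_M, F_M), np.linalg.solve(A_N, F_N)
    c, d = c_new, d_new

M_final = theta0 + c @ B
print("M(x, t=%.1f) ranges over [%.4f, %.4f]" % (n_steps * dt, M_final.min(), M_final.max()))
```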
§ NUMERICAL EXAMPLES AND APPLICATIONS
In this section, the previously described approach is put into practice by solving a few practical example problems. Our methodology is first validated on the first test problem, and the procedure is then used, with a variety of parameters, to assess the subsequent test problems. The L_2 norm and L_∞ norm have been determined by the following expressions,
L_2 Norm=||M_Δ t-M_Δ t/2||_2
L_∞ Norm=||M_Δ t-M_Δ t/2||_∞
where Δ t is the time increment and M_Δ t is the approximate solution obtained using time increment Δ t.
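In discrete form, these error measures can be evaluated as in the small sketch below (assuming both solutions are sampled on the same spatial grid at the same final time):

```python
import numpy as np

def error_norms(M_dt: np.ndarray, M_dt_half: np.ndarray):
    """Discrete L2 and L-infinity norms of the difference between the solutions
    computed with time increments dt and dt/2."""
    diff = M_dt - M_dt_half
    return float(np.sqrt(np.sum(diff**2))), float(np.max(np.abs(diff)))
```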
Test Problem 1: Let us consider the system of the parabolic equations from the study of Manaa et. al.<cit.>
∂ M/∂ t = ε_1 ∂^2 M/∂ x^2 + f(M,N) - (p+q)M
∂ N/∂ t = ε_2 ∂^2 N/∂ x^2 - f(M,N) + p(1-N)
where f(M,N)=M^2N and x∈ [a,b], t≥ 0. The boundary conditions and the initial conditions are considered as:
M(a,t) = M(b,t) = 0,   N(a,t) = N(b,t) = 1
and
M(x,0) = 0.01 sin(π (x-b)/(b-a)),   N(x,0) = 1 - 0.12 sin(π (x-b)/(b-a))
The domain of the model is [a, b]. The values of the parameters are taken as a=0, b=2, ε_1=ε_2=0.01, p=0.09, and q=-0.004.
Here, in obtaining the numerical approximation, the effect of the boundary conditions is insignificant because all the B_j(x) are zero at the boundary points. We have applied the modified Galerkin method to the system of nonlinear partial differential equations (<ref>) and thereby obtained a system of ordinary differential equations with respect to t. At this stage, we have used the α family of approximation to convert the system into recurrence relations and then applied the Picard iterative procedure. To find the initial guess for the given system, we have applied the weighted residual procedure to the initial conditions (<ref>).
Tables (<ref>) and (<ref>) provide the numerical results of concentrations M (x,t) and N(x,t) for various values of x. For computation, we have taken Δ t=0.1. The numerical approximations are derived at time levels t=1 and t=2.
Throughout these tables, we have compared the results which we have obtained with the numerical approximations that have already been published in other well-known literature. The table demonstrates that our outcomes are reasonably comparable to those that have been published. It validates the accuracy of our approach to approximating the reaction-diffusion system numerically.
The approximate results M(x, t) and N(x, t) of Equation (<ref>) are presented in the following figure (<ref>).
In Figure (<ref>), we provide a three-dimensional graphical depiction of the approximate solutions M(x,t) and N(x,t) at different time levels for better understanding. The graphical representations agree with the results obtained in the tables. This clearly indicates that the method is well suited to solving such nonlinear parabolic PDE systems.
In Figure (<ref>), we have presented the error graph of M(x,t) and N(x,t) at time t=10, where the absolute errors are computed between two different time increments, say Δ t=0.2, Δ t=0.4 and Δ t=0.1, Δ t=0.2.
The L_2 norm and L_∞ norm, are presented in Table (<ref>), which shows that the comparative errors are reduced significantly according to the reduction of the size of the time increments.
Test Problem 2: The Gray-Scott model is one of the most important models; its wave formations resemble many patterns observed in real life, such as butterfly wings, gesticulation, damping, Turing patterns, embryos, multiple spots, and so on <cit.>. Let us consider the following model,
∂ M/∂ t = ε_1 ∂^2 M/∂ x^2 - f(M,N) + p(1-M)
∂ N/∂ t = ε_2 ∂^2 N/∂ x^2 + f(M,N) - (p+q)N
where f(M,N)=MN^2. The boundary conditions and the initial conditions are considered as follows:
M(-50,t) = M(50,t) = 1,   N(-50,t) = N(50,t) = 0
and
M(x,0) = 1 - 0.5 sin^100(π (x-50)/100),   N(x,0) = 0.25 sin^100(π (x-50)/100)
The domain of the model is [-50, 50]. The values of the parameters are taken as
ε_1=1, ε_2=0.01, p=0.01, q=0.12
Here, for computational purposes, we have used 7 modified Bernstein polynomials. After applying the modified Galerkin method, we have used the backward difference method to transform the system of ordinary differential equations into recurrence relations, which are then solved by the Picard iterative procedure. Numerical data for M(x, t) and N(x, t) of (<ref>) are also presented in tabulated form in the following table at different time steps.
The table shows that the numerical values of the concentrations M and N change very slowly with varying values of x, and this holds at every time step.
The results obtained by applying our proposed scheme are presented in Figure (<ref>).
Figure (<ref>) is deployed to provide pictorial representations of the numerical concentrations M and N at different time levels. The results that are obtained in the table are shown graphically. The graphs are obtained for different time levels. The graphical presentation shows that the changes in concentrations are sufficiently small for different time levels.
The L_2 and L_∞ norms are presented in Table (<ref>), which shows that the comparative errors are reduced significantly as the size of the time increment is reduced. Moreover, the order of convergence increases noticeably as the time-step length is reduced.
In Figure (<ref>), we have presented the error graph of M(x,t) and N(x,t) at time t=10, where the absolute errors are computed between two different time increments say Δ t=0.2, Δ t=0.4 and Δ t=0.1, Δ t=0.2.
§ CONCLUSION
This research study has provided numerical approximations of nonlinear reaction-diffusion systems with specified boundary and initial conditions through the modified Galerkin method. To generate the trial solution, modified Bernstein polynomials have been used. Simplification of the weighted residual leads to a system of ordinary differential equations, which is then transformed into recurrence relations by applying the backward difference formula. At this stage, we have used Picard's iterative procedure to approximate the trial solution. After the derivation, we applied our proposed method to several models in order to test its applicability and effectiveness, and we have presented the results both numerically and graphically. From these figures and numerical results, it is evident that our proposed method is an unconditionally stable, efficient, highly modular, and easily expandable method that can be applied to any system of nonlinear parabolic partial differential equations regardless of the type of boundary conditions, the type of nonlinearity of the functions, or whether the coefficients are constants or functions of the independent variables.
§ ACKNOWLEDGEMENT
The authors acknowledge that the research was supported and funded by Dhaka University research grant under UGC, Bangladesh.
r39 Biancalani, T., Fanelli, D., & Di Patti, F., (2010). Stochastic Turing patterns in the Brusselator model. Physical Review E, 81(4), 046215.
r40 Wazwaz, A.-M.(2000). The decomposition method applied to systems of partial differential equations and to the reaction-diffusion Brusselator model. Applied mathematics and computation, 110(2-3).,251-264.
r41 Muhammad Khan, F., Ali, A., Shah, K., Khan, A., Mahariq, I., et al. (2022). Analytical Approximation of Brusselator Model via LADM. Mathematical Problems in Engineering,2022, 01-14.
r42 Alfifi, H. Y., Feedback control for a diffusive and delayed Brusselator model: Semi-analytical solutions. Symmetry, 13(4), 725.
r43 Din, Q. (2018). A novel chaos control strategy for discrete-time Brusselator models. Journal of Mathematical Chemistry, 56(10), 3045-3075.
r36 Ahmed, N., SS, T., Imran, M., Rafiq, M., Rehman, M., & Younis, M. (2019). Numerical analysis of auto-catalytic glycolysis model. AIP Advances, 9(8), 085213.
r45 Ouannas, A., Batiha, I. M., Bekiros, S., Liu, J., Jahanshahi, H., Aly, A. A. & Alghtani, A. H. (2021). Synchronization of the glycolysis reaction-diffusion model via linear control law. Entropy, 23(11), 1516.
r47 Iron, D., Wei, J., & Winter, M. (2004). Stability analysis of Turing patterns generated by the Schnakenberg model. Journal of mathematical biology, 49(4), 358-390.
r48 Liu, P., Shi, J., Wang, Y., and Feng, X. (2013). Bifurcation analysis of reaction-diffusion Schnakenberg model. Journal of Mathematical Chemistry, 51(8),2001-2019.
r49 Khan, F. M., Ali, A., Hamadneh, N., Abdullah & Alam, M. N. (2021). Numerical Investigation of Chemical Schnakenberg Mathematical Model. Journal of Nanomaterials, 2021, 1-8.
r50 Beentjes, C. H. (2015). Pattern formation analysis in the Schnakenberg model (tech. rep.). Technical Report, University of Oxford, UK.
r7 Gray, P. & Scott, S.(1983). Autocatalytic reactions in the isothermal, continuous stirred tank reactor: isolas and other forms of multistability. Chemical Engineering Science, 38(1), 29-43.
r8 Sel'Kov, E. (1968). Self-Oscillations in Glycolysis 1. A Simple Kinetic Model. European Journal of Biochemistry, 4(1), 79-86.
r9 Pearson, J. E. (1993). Complex patterns in a simple system. Science, 261(5118), 189-192.
r10 Mazin, W., Rasmussen, K., Mosekilde, E., Borckmans, P. & Dewel, G. (1996). Pattern formation in the bistable Gray-Scott model. Mathematics and Computers in Simulation, 40(3-4), 371-396.
r11 Doelman, A., Kaper, T. J., & Zegeling, P. A.
(1997). Pattern formation in the one-dimensional Gray-Scott model. Nonlinearity, 10(2), 523.
r15 Ueyama, D. (1999). Dynamics of self-replicating patterns in the one-dimensional Gray-Scott model. Hokkaido mathematical journal, 28(1), 175-210.
r13 McGough, J. S. & Riley, K. (2004). Pattern formation in the Gray–Scott model. Nonlinear analysis: real world applications, 5(1), 105-121.
r21 Doelman, A., Gardner, R., A., & Kaper, T., J. (1998). Stability analysis of singular patterns in the 1D Gray-Scott model: a matched asymptotics approach. Physica D: Nonlinear Phenomena, 122(1-4), 1-36.
r17 Dkhil, F., Logak, E., & Nishiura, Y. (2004). Some analytical results on the Gray–Scott model. Asymptotic Analysis, 39(3-4), 225-261.
r18 Nishiura, Y., & Ueyama, D. (2001). Spatio-temporal chaos for the Gray–Scott model. Physica D: Nonlinear Phenomena, 150(3-4), 137-162.
r19 Nishiura, Y., & Ueyama, D. (2000). Self-replication, self-destruction, and spatio-temporal chaos in the Gray-Scott model. Physical Review Letters, 15(3), 281-289.
r20 Wei, J. (2001). Pattern formations in two-dimensional Gray–Scott model: existence of single-spot solutions and their stability. Physica D: Nonlinear Phenomena, 148(1-2), 20-48.
r22 Zhang, K., Wong, J. C.-F. & Zhang, R., (2008). Second-order implicit–explicit scheme for the Gray–Scott model. Journal of Computational and Applied Mathematics, 213(2), 559-581.
r2 Mach, J. (2012). Application of the nonlinear Galerkin FEM method to the solution of the reaction diffusion equations.
r1 Zhang, R., Zhu, J., Loula, A. F. & Yu, X. (2016). A new nonlinear Galerkin finite element method for the computation of reaction diffusion equations. Journal of Mathematical Analysis and Applications, 434(1), 136-148.
r5 Mach, J. (2010). Quantitative analysis of numerical solution for the Gray-Scott model. SNA’10, 110.
r3 Singh, S. (2023). Numerical investigation of wave pattern evolution in Gray–Scott model using discontinuous Galerkin finite element method. Advances in Mathematical and Computational Modeling of Engineering Systems, 47-58.
r12 Tok-Onarcan, A., Adar, N., & Dag, I. (2019). Wave simulations of Gray-Scott reaction-diffusion system, 42(16), 5566-5581.
r14 Owolabi, K. M. & Patidar, K. C. (2014). Numerical solution of singular patterns in one-dimensional Gray-Scott-like models. International Journal of Nonlinear Sciences and Numerical Simulation, 15(7-8), 437-462.
r4 Kaur, N. & Joshi, V. (2022). Numerical solution to the Gray-Scott Reaction-Diffusion equation using Hyperbolic B-spline. Journal of Physics: Conference Series, 2267(1), 012072.
r6 Thornton, A. & Marchant, T. R. (2008). Semi-analytical solutions for a Gray–Scott reaction–diffusion cell with an applied electric field. Chemical engineering science, 63(2), 495-502.
r24 Chen, W., & Ward, M. J. (2011). The stability and dynamics of localized spot patterns in the two-dimensional Gray–Scott model. SIAM Journal on Applied Dynamical Systems, 10(2), 582-666.
r23 Joshi, V. & Kaur, N. (2020). Numerical Solution of Gray Scott Reaction-Diffusion Equation using Lagrange Polynomial. Journal of Physics: Conference Series, 1531(1), 012058.
r25 Che, H., Wang, Y.-L., & Li, Z.-Y. (2022). Novel patterns in a class of fractional reaction–diffusion models with the Riesz fractional derivative. Mathematics and Computers in Simulation, 202, 149-163.
r26 Mittal, R., Kumar, S. & Jiwari, R. (2022). A cubic B-spline quasi-interpolation algorithm to capture the pattern formation of coupled reaction-diffusion models. Engineering with Computers, 38(2), 1375-1391.
r27 Lewis, P. E. & Ward, J. P. (1991). The finite element method: principles and applications, Addison-Wesley Wokingham.
r28 Hossan, M. S., Hossain, A. S. & Islam, M. S. (2020). Numerical Solutions of Black-Scholes Model by Du Fort-Frankel FDM and Galerkin WRM. International Journal of Mathematical Research, 9(1), 1-10.
r29 Shirin, A., Islam, M., et al. (2013). Numerical solutions of Fredholm integral equations using Bernstein polynomials. arXiv preprint arXiv:1309.6311.
r30 Cicelia, J. E. (2014). Solution of weighted residual problems by using Galerkin’s method. Indian Journal of Science and Technology, 7(3), 52-54.
r31 Farzana, H., Islam, M. S., & Bhowmik, S. K. (2015). Computation of eigenvalues of the fourth order Sturm-Liouville BVP by Galerkin weighted residual method. British Journal of Mathematics and Computer Science, 9, 73-85.
r32 Kang, Z., Wang, Z., Zhou, B. & Xue, S. (2020). Galerkin weighted residual method for axially functionally graded shape memory alloy beams. Journal of Mechanics, 36(3), 331-345.
r33 Arani, A. A. A., Arefmanesh, A., & Niroumand, A. (2018). Investigation of fully developed flow and heat transfer through n-sided polygonal ducts with round corners using the Galerkin weighted residual method. Int. J. Nonlinear Anal. Appl, 9(1), 175-193.
r51 Temam, R. (2012). Infinite-dimensional dynamical systems in mechanics and physics (Vol. 68). Springer Science & Business Media.
r37 Manaa, S. A., Rasheed, J. (2013). Successive and finite difference method for Gray Scott model. Science Journal of University of Zakho, 1(2), 862-873.
r38 Jiwari, R., Singh, S., & Kumar, A. (2017). Numerical simulation to capture the pattern formation of coupled reaction-diffusion models. Chaos, Solitons & Fractals, 103, 422-439.
|
http://arxiv.org/abs/2307.06075v1 | 20230712105302 | Integrating Enzyme-generated functions into CoDiPack | [
"M. Sagebaum",
"M. Aehle",
"N. R. Gauger"
] | cs.MS | [
"cs.MS",
"65D25 (Primary), 68N30 (Secondary)",
"G.1.4; G.4; D.2.2"
] |
In operator overloading algorithmic differentiation, it can be beneficial to create custom derivative functions for some parts of the code base. For manual implementations of the derivative functions, it can be quite cumbersome to derive, implement, test, and maintain these. The process can be automated with source transformation algorithmic differentiation tools like Tapenade or compiler-based algorithmic differentiation tools like Enzyme. This eliminates most of the work required from a manual implementation but usually has the same efficiency with respect to timing and memory. We present a new helper in CoDiPack that allows Enzyme-generated derivative functions to be automatically added during the recording process of CoDiPack. The validity of the approach is demonstrated on a synthetic benchmark, which shows promising results.
§ INTRODUCTION
The creation of the LLVM infrastructure <cit.> facilitates the development of source analysis and transformation tools. The Enzyme project <cit.> uses this infrastructure to apply algorithmic differentiation (AD) <cit.> to source code at the compiler level. AD describes how computer programs can be modified to automatically compute derivatives alongside the original (also known as primal) computation. For a function y = F(x) with x ∈ ℝ^m and y ∈ ℝ^n, the forward mode of AD computes
ẏ = (dF/dx)(x) ẋ
where ẋ ∈ ℝ^m is the tangent seeding and ẏ ∈ ℝ^n is the derivative result. The reverse mode of AD computes
x̅ = (dF/dx)^⊤(x) y̅
where y̅ ∈ ℝ^n is the adjoint seeding and x̅ ∈ ℝ^m is the derivative result. Neither mode sets up the Jacobian matrix explicitly. Rather, both compute the derivatives by applying the chain rule and directional derivatives on a statement-by-statement level.
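To make the two modes concrete, the following self-contained sketch (illustrative only, neither CoDiPack nor Enzyme code) differentiates the two-statement program y = sin(x0 * x1) by hand, once in forward mode and once in reverse mode:

#include <cmath>
#include <cstdio>

// Hand-written forward (tangent) and reverse (adjoint) differentiation of
// y = F(x0, x1) = sin(x0 * x1), applied statement by statement.
int main() {
  double x0 = 2.0, x1 = 3.0;

  // Primal computation.
  double w = x0 * x1;
  double y = std::sin(w);

  // Forward mode: seed the tangents of the inputs and propagate them
  // alongside every primal statement.
  double x0_d = 1.0, x1_d = 0.0;       // tangent seeding (x dot)
  double w_d = x0_d * x1 + x0 * x1_d;  // product rule
  double y_d = std::cos(w) * w_d;      // resulting derivative (y dot)

  // Reverse mode: seed the adjoint of the output and propagate it through
  // the statements in reverse order.
  double y_b = 1.0;                    // adjoint seeding (y bar)
  double w_b = std::cos(w) * y_b;
  double x0_b = x1 * w_b;              // adjoint results (x bar)
  double x1_b = x0 * w_b;

  std::printf("y = %f, y_d = %f, x0_b = %f, x1_b = %f\n", y, y_d, x0_b, x1_b);
  return 0;
}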
Traditionally, AD is applied to a computer program either via operator overloading or source transformation. The operator overloading approach exchanges the computational type like double with a so-called active type of the AD tool implementation like adouble for ADOL-C <cit.> or codi::RealReverse for CoDiPack <cit.>.
In the reverse mode of AD, operator overloading tools record a tape storing all the necessary information for evaluating Equation (<ref>). The data on the tape can be thought of as the computational graph of the program. The tape is then interpreted in a reverse manner to compute the reverse mode of AD. Source transformation tools, like Tapenade <cit.> or OpenAD <cit.>, apply AD by parsing the source code of the program and generating a new code extending it by the additional AD computations.
Operator overloading AD is usually applied to legacy software projects where the original software design did not include AD. Source transformation AD on legacy software usually has problems analyzing the computational dependencies on a global level, which makes it quite cumbersome to apply. On the other hand, source transformation is usually used for new software projects where the software design includes AD. Here, the derivative code generated by source transformation is usually more efficient in terms of memory and computational time than the derivative computation done by operator overloading AD tools.
The approach of Enzyme is similar to source transformation. Enzyme works on the intermediate representation (IR) of LLVM, a language-independent high-level assembly language. It contains enough information such that AD can still be applied without any drawbacks. By using the IR, Enzyme becomes a multi-language AD tool and does not have the problem that it needs to parse and understand the code constructs of the original language. Because Enzyme is similar to source transformation, it inherits the usual drawbacks. Applying Enzyme to a large code base can be cumbersome and a full dependency analysis might not be possible.
A combination of both approaches can be beneficial for the overall runtime and memory consumption. First, a code base can be differentiated by using an operator overloading AD tool. Afterward, the computational hot spots can be handled by Enzyme and integrated into the tape of the operator overloading tool. The derivative functions for the computational hot spots could also be implemented by hand. In doing so, an extra development effort for the implementation, testing, and debugging is required which needs to be repeated every time the original function is changed. Applying Enzyme or a source transformation tool on those hot spots is therefore much more efficient regarding development time.
In this report, we present an extension to CoDiPack which provides a very simple way to add the generated derivative functions from Enzyme to the CoDiPack tapes. The next section will introduce the approach and afterward in Section <ref> the memory footprint of external functions is explained in more detail. Section <ref> describes the Burgers' test case for the performance measurements. The results are discussed in Section <ref>.
§ ADDING ENZYME TO CODIPACK
An integration of Enzyme into a CoDiPack tape is done by so-called external functions. They are called by the tape during the reverse interpretation and are usually used to handle libraries for which the AD tool can not be applied. In the case of Enzyme, they are used to improve the memory and evaluation time of certain code parts.
Adding an external function to CoDiPack can either be done by using the tape interface or by using the ExternalFunctionHelper structure. The latter assumes that the function has the layout
Listing [code:funcDef]: Function definition for external functions.
void func(double const* x, size_t m, double* y, size_t n,
          ExternalFunctionUserData* d);
where x ∈ ℝ^m is the input, y ∈ ℝ^n is the output, and d is additional data for the function.
Based on the definition in Listing <ref>, the user has to implement the forward and reverse mode implementation of the function. Following the regular naming conventions of AD, the layout for the functions are:
Listing [code:funcDerivDef]: Forward and reverse mode function definitions for external functions.
void func_d(double const* x, double const* x_d, size_t m,
double* y, double* y_d, size_t n,
ExternalFunctionUserData* d);
void func_b(double const* x, double* x_b, size_t m,
double const* y, double const* y_b, size_t n,
ExternalFunctionUserData* d);
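For comparison with the Enzyme-based workflow below, registering such manually written derivative functions with the generic ExternalFunctionHelper looks roughly as follows. This is a hedged sketch based on the CoDiPack tutorials; the exact method signatures may differ between CoDiPack versions.

// Hypothetical registration of the manual derivative pair from the listing
// above; x and y are arrays of codi::RealReverse values of sizes m and n.
codi::ExternalFunctionHelper<codi::RealReverse> eh;
for (size_t i = 0; i < m; i += 1) { eh.addInput(x[i]); }
for (size_t i = 0; i < n; i += 1) { eh.addOutput(y[i]); }
eh.callPrimalFunc(func);  // evaluate y = func(x) and record the outputs
eh.addToTape(func_b);     // register the manually written reverse function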
Instead of implementing func_d and func_b manually, with the disadvantages mentioned above, Enzyme can be used to automate the generation of the two functions. The generator conventions for Enzyme <cit.> will actually generate the definitions in Listing <ref> when applied to Listing <ref>. See Appendix <ref> for the Enzyme generation code.
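For reference, the generated code shown in the appendix on Enzyme function generation relies on the usual Enzyme calling convention, where activity markers and differentiation entry points are declared in the user code and resolved by the compiler plugin. A typical set of declarations (following the Enzyme documentation; the exact form may vary between Enzyme versions) is:

// Activity markers: an argument preceded by enzyme_dup is passed together
// with a shadow (derivative) argument, an argument preceded by enzyme_const
// is treated as constant.
extern int enzyme_dup;
extern int enzyme_const;

// Differentiation entry points, resolved by the Enzyme compiler plugin.
void __enzyme_autodiff(void*, ...);  // reverse mode
void __enzyme_fwddiff(void*, ...);   // forward mode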
This observation allows for a specialized implementation of the CoDiPack ExternalFunctionHelper for Enzyme. The EnzymeExternalFunctionHelper adds an external function just by providing the function as a template argument. Listing <ref> shows an example usage of the new helper. Here, the function vecPow2 is added to the tape. It raises each element in the vector to the power of two and writes it into the output vector. See Appendix <ref> for a full example.
Listing [code:enzymeSmallExample]: Example of adding an external function with Enzyme-generated derivatives.
codi::EnzymeExternalFunctionHelper<codi::RealReverse> eh;
eh.template addToTape<vecPow2>(x, size, y, size);
§ DATA MANAGEMENT OF EXTERNAL FUNCTIONS
In this section we want to take a closer look at the data management of external functions in CoDiPack. For optimal performance, it is critical to look at the actual memory size an external function requires on the tape. Functions that are too small will require more memory than the taped version and therefore decrease performance.
The memory required by an external function in CoDiPack can be calculated as
mem(ext_func(func)) = mem(overhead)
+ m · (sizeof(double) + sizeof(int))
+ n · sizeof(int)
where m ∈ ℕ is the number of inputs and n ∈ ℕ is the number of outputs. CoDiPack stores the primal value and the identifier of input AD values. For outputs only the identifier is stored. mem(overhead) varies for the different tape implementations in CoDiPack but is around 256 bytes. It consists of position information (varies) and the data structure for the external function helper (fixed).
There are some special cases where the above equation can be modified:
* Result y: It can be beneficial to store the primal values of the outputs. This will add another n · sizeof(double) bytes to the memory.
* Primal value tapes: Here, the primal values are available in the primal value vector of the tape and do not need to be stored. This removes m · sizeof(double) from the above equation.
The overhead of around 256 bytes prohibits the use of external functions for smaller functions. An additional large overhead can occur when the same inputs are stored for multiple external functions. It is therefore advisable to use external functions where a large amount of data is handled.
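As a concrete illustration of the formula above (assuming 8-byte doubles, 4-byte integer identifiers, and the roughly 256 bytes of overhead quoted in the text), the following small sketch evaluates the estimate for a small and a large external function:

#include <cstddef>
#include <cstdio>

// Memory estimate for an external function with m inputs and n outputs,
// following the formula from this section. The overhead constant is an
// assumption based on the numbers quoted in the text.
size_t extFuncMemory(size_t m, size_t n) {
  const size_t overhead = 256;
  return overhead + m * (sizeof(double) + sizeof(int)) + n * sizeof(int);
}

int main() {
  // For m = n = 10 the fixed overhead dominates (416 bytes in total), while
  // for m = n = 100000 it is negligible (about 1.6 MB). This is why external
  // functions only pay off for sufficiently large code sections.
  std::printf("m = n = 10:     %zu bytes\n", extFuncMemory(10, 10));
  std::printf("m = n = 100000: %zu bytes\n", extFuncMemory(100000, 100000));
  return 0;
}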
§ BURGERS' TEST CASE
The coupled Burgers' equation is an established test case for the performance comparison of CoDiPack implementations and is described in <cit.> and <cit.>. We want to use the same test case for the performance evaluations in this report.
For completeness, we recapitulate the problem formulation here.
The coupled Burgers' equation <cit.>
u_t + uu_x + vu_y = 1/R(u_xx + u_yy),
v_t + uv_x + vv_y = 1/R(v_xx + v_yy)
is discretized with an upwind finite difference scheme.
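For readers who want to relate these equations to the update kernel shown in the appendix on the updateField handling, one standard first-order upwind discretization (written under the assumption that dTbyDX and dTbyDX2 in that appendix denote Δ t/Δ x and Δ t/Δ x^2; the exact stencil used in the benchmark may differ) reads
u^{n+1}_{i,j} = u^n_{i,j} - (Δ t/Δ x) u^n_{i,j} (u^n_{i,j} - u^n_{i-1,j}) - (Δ t/Δ y) v^n_{i,j} (u^n_{i,j} - u^n_{i,j-1}) + (Δ t/R) [ (u^n_{i+1,j} - 2u^n_{i,j} + u^n_{i-1,j})/Δ x^2 + (u^n_{i,j+1} - 2u^n_{i,j} + u^n_{i,j-1})/Δ y^2 ],
with the analogous update for v^{n+1}_{i,j}.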
The initial and boundary conditions are taken from the exact solution
u(x, y, t) = (x + y - 2xt)/(1 - 2t^2), (x,y,t) ∈ D × ℝ,
v(x, y, t) = (x - y - 2yt)/(1 - 2t^2), (x,y,t) ∈ D × ℝ
given in <cit.>.
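A quick sanity check (a verification added here for convenience, not taken from the cited reference): the exact solution is linear in x and y, so the diffusive terms vanish, and the convective terms cancel the time derivative. For u,
u_xx + u_yy = 0,
u_t + u u_x + v u_y = [ -2x(1 - 2t^2) + 4t(x + y - 2xt) + (x + y - 2xt)(1 - 2t) + (x - y - 2yt) ] / (1 - 2t^2)^2 = 0,
so both sides of the momentum equation vanish; the same holds for v.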
The computational domain D is the unit square D = [0,1] × [0,1] ⊂ ℝ × ℝ.
As far as the differentiation is concerned, we choose the initial solution of the time stepping scheme as input parameters, and as output parameter we take the norm of the final solution.
The test case is extended for this report such that Enzyme is applied to one time update step. The differentiated function consists of the 2D loop over the domain for the state update of u and v. A code example is given in Appendix <ref>. To put the Enzyme results into context, Tapenade is also applied to the same loop. The example is chosen such that no data is written to the stack by Tapenade or Enzyme, so that no additional load is placed on the memory bandwidth.
The node for the test case consists of two Intel Xeon 6126 CPUs with a total of 24 cores and 384 GB of main memory.
The computational grid contains 601× 601 elements and is solved for 16 time iterations.
The code is compiled with clang version 14. We remark that similar results are obtained on nodes with Epyc and Haswell CPUs.
All time measurements are averaged over 20 evaluations.
For the time measurements two different configurations are tested:
* The multi test configuration runs the same process on each of the 24 cores, which simulates a use case where the full memory bandwidth of the socket is used.
* The single test configuration runs just one process on the whole node, which eliminates the memory bandwidth limitations and provides a better view on the computational performance.
§ RESULTS
We compare the runtime and memory for the four major CoDiPack tape implementations (the corresponding CoDiPack type names are sketched after the list):
* Jacobian linear - Jacobian taping approach <cit.> with a linear index management <cit.>
* Jacobian reuse - Jacobian taping approach with a reuse index management <cit.> including copy optimizations
* primal linear - Primal value taping approach <cit.> with a linear index management
* primal reuse - Primal value taping approach with a reuse index management including copy optimizations
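For orientation, these four configurations correspond to predefined CoDiPack types (names as used in the CoDiPack documentation; listed here only as a convenience, the benchmark code may configure its types differently):

// Jacobian taping with linear index management.
using JacobianLinear = codi::RealReverse;
// Jacobian taping with reuse index management.
using JacobianReuse = codi::RealReverseIndex;
// Primal value taping with linear index management.
using PrimalLinear = codi::RealReversePrimal;
// Primal value taping with reuse index management.
using PrimalReuse = codi::RealReversePrimalIndex;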
For the primal value taping approaches, we enable the specialized handling mentioned in Section <ref> (handling=on). Here, the primal values are recovered from the primal value vector of the tape and not stored in the external function data.
The memory results in Figures <ref> and <ref> show a significant reduction in the tape and overall memory when the loop is handled either by Enzyme or Tapenade. The tape memory in Figure <ref> does not include the data stored by the external functions; therefore, the high water mark is better suited to judge the general savings in memory. The Jacobian tapes have a lower memory footprint after the loop handling, because it is more expensive for the primal value tapes to manage external function outputs. Memory for the primal value tapes can be improved by recovering the primal values from the tape. With this option the memory of the Jacobian tapes and primal value tapes is nearly the same.
The timing results in Figure <ref> show the recording time for the different configurations. Handling the 2D loop with Enzyme or Tapenade reduces the time by around 45 % which is mostly due to the reduction in tape memory. The memory bandwidth for writing the data to the tape is usually the limiting factor. The reversal times in Figure <ref> show the same picture and an improvement by 35 % can be seen. The degradation in case of the primal value tapes for handling=on is due to the recovery of the primal values from the tape. This is a random access on the primal value vector which can be quite slow.
The timing results for the multi-configuration in Figure <ref> and Figure <ref> show the same tendencies as the single results. For the recording, the timing reduction is around 32 %. All 24 cores are using the memory bandwidth for creating the external function, which makes the bandwidth limitation more pronounced in this configuration. The reversal now shows a reduction of about 45 % in the timing, which is actually better than in the single configuration. The memory bandwidth limitations are more pronounced for the large tapes with no external functions. Also, the structured data access of the generated derivative functions helps to recover more of the peak performance from the CPU.
§ CONCLUSION
In this study, we demonstrated how Enzyme-generated derivative functions can be included in CoDiPack. The validity of the approach was demonstrated on a numerical benchmark where the memory was reduced from 2.8 GB to 1.0 GB. The recording time was improved by 32 % and the reversal time by 45 % in the memory bandwidth-limited case. These results confirm the validity of the approach and further studies need to be conducted on real-world problems.
§ ACKNOWLEDGMENTS
We would like to thank William Moses for helping us to apply Enzyme to our test problem.
§ ENZYME FUNCTION GENERATION
Forward and reverse AD mode generation for an external function with Enzyme.
// PrimalFunc is the function pointer type of the primal function layout
// from Listing [code:funcDef].
template<PrimalFunc func>
void enzymeDiff_d(double const* x, double const* x_d, size_t m,
                  double* y, double* y_d, size_t n,
                  ExternalFunctionUserData* d) {
  // Forward (tangent) mode: the shadow arguments x_d and y_d carry the tangents.
  __enzyme_fwddiff(
      (void*) func,
      enzyme_dup, x, x_d,
      enzyme_const, m,
      enzyme_dup, y, y_d,
      enzyme_const, n,
      enzyme_const, d);
}

template<PrimalFunc func>
void enzymeDiff_b(double const* x, double* x_b, size_t m,
                  double const* y, double const* y_b, size_t n,
                  ExternalFunctionUserData* d) {
  // Reverse (adjoint) mode: the shadow arguments x_b and y_b carry the adjoints.
  __enzyme_autodiff(
      (void*) func,
      enzyme_dup, x, x_b,
      enzyme_const, m,
      enzyme_dup, y, y_b,
      enzyme_const, n,
      enzyme_const, d);
}
§ FULL ENZYMEEXTERNALFUNCTIONHELPER EXAMPLE
#include <codi.hpp>
#include <iostream>

using Real = codi::RealReverse;
using Tape = typename Real::Tape;
using BaseReal = typename Real::Real;

// Primal function on the passive type; Enzyme generates its derivatives.
void vecPow2(const BaseReal* x, size_t m, BaseReal* y,
             size_t n, codi::ExternalFunctionUserData* d) {
  for(size_t i = 0; i < m; i += 1) {
    y[i] = x[i] * x[i];
  }
}

int main(int nargs, char** args) {
  Tape& tape = Real::getTape();

  size_t size = 10;
  Real* x = new Real[size];
  Real* y = new Real[size];
  for(size_t i = 0; i < size; i += 1) {
    x[i] = BaseReal(i + 1);
  }

  tape.setActive();
  for(size_t i = 0; i < size; i += 1) { tape.registerInput(x[i]); }

  codi::EnzymeExternalFunctionHelper<codi::RealReverse> eh;
  eh.template addToTape<vecPow2>(x, size, y, size);

  for(size_t i = 0; i < size; i += 1) { tape.registerOutput(y[i]); }
  tape.setPassive();

  for(size_t i = 0; i < size; i += 1) { y[i].setGradient(1.0); }
  tape.evaluate();

  std::cout << "Solution y:" << std::endl;
  for(size_t i = 0; i < size; i += 1) {
    std::cout << " " << i << " : " << y[i] << std::endl;
  }
  std::cout << "Adjoint x:" << std::endl;
  for(size_t i = 0; i < size; i += 1) {
    std::cout << " " << i << " : " << x[i].gradient() << std::endl;
  }

  tape.reset();
  delete[] x;
  delete[] y;
  return 0;
}
§ ENZYME HANDLING OF UPDATEFIELD
void updateFieldExtFunc(double const* x, size_t m, double* y, size_t n,
                        codi::ExternalFunctionUserData* d) {
  // Unpack the data that was stored alongside the external function.
  size_t xSize = d->getData<size_t>();
  size_t ySize = d->getData<size_t>();
  double dTbyDX = d->getData<double>();
  double dTbyDX2 = d->getData<double>();
  double oneOverR = d->getData<double>();

  size_t inputGridSize = (xSize + 2) * (ySize + 2);
  size_t outputGridSize = xSize * ySize;

  // The flat input/output buffers hold the u field followed by the v field.
  double const* u_t = &x[0];
  double const* v_t = x + inputGridSize;
  double* u_tp = y;
  double* v_tp = y + outputGridSize;

  for (int yPos = 0; yPos < ySize; yPos += 1) {
    for (int xPos = 0; xPos < xSize; xPos += 1) {
      size_t index_out = getArrayPos(xPos, yPos, xSize);
      size_t index = getArrayPos(xPos + 1, yPos + 1, xSize + 2);
      size_t index_xp = index + 1;
      size_t index_xm = index - 1;
      size_t index_yp = index + xSize + 2;
      size_t index_ym = index - xSize - 2;

      updateElement(u_tp, u_t, u_t, v_t, index_out, index, index_xp,
                    index_xm, index_yp, index_ym, dTbyDX, dTbyDX2,
                    oneOverR);
      updateElement(v_tp, v_t, u_t, v_t, index_out, index, index_xp,
                    index_xm, index_yp, index_ym, dTbyDX, dTbyDX2,
                    oneOverR);
    }
  }
}

using Number = codi::RealReverse;

inline void updateFieldWithEnzyme(int xStart, int xEnd,
                                  int yStart, int yEnd,
                                  Number* u_tp, Number const* u_t,
                                  Number* v_tp, Number const* v_t) {
  codi::EnzymeExternalFunctionHelper<Number> helper;

  size_t xSize = xEnd - xStart + 1;
  size_t ySize = yEnd - yStart + 1;

  // Register the input fields including the halo of one cell on each side.
  for (int yPos = yStart - 1; yPos <= yEnd + 1; yPos += 1) {
    for (int xPos = xStart - 1; xPos <= xEnd + 1; xPos += 1) {
      size_t index = getArrayPos(xPos, yPos, xSize);
      helper.addInput(u_t[index]);
    }
  }
  for (int yPos = yStart - 1; yPos <= yEnd + 1; yPos += 1) {
    for (int xPos = xStart - 1; xPos <= xEnd + 1; xPos += 1) {
      size_t index = getArrayPos(xPos, yPos, xSize);
      helper.addInput(v_t[index]);
    }
  }

  // Register the output fields on the interior of the domain.
  for (int yPos = yStart; yPos <= yEnd; yPos += 1) {
    for (int xPos = xStart; xPos <= xEnd; xPos += 1) {
      size_t index = getArrayPos(xPos, yPos, xSize);
      helper.addOutput(u_tp[index]);
    }
  }
  for (int yPos = yStart; yPos <= yEnd; yPos += 1) {
    for (int xPos = xStart; xPos <= xEnd; xPos += 1) {
      size_t index = getArrayPos(xPos, yPos, xSize);
      helper.addOutput(v_tp[index]);
    }
  }

  // Pass the problem parameters to the external function.
  codi::ExternalFunctionUserData& userData =
      helper.getExternalFunctionUserData();
  userData.addData(xSize);
  userData.addData(ySize);
  userData.addData(props.dTbyDX);
  userData.addData(props.dTbyDX2);
  userData.addData(props.oneOverR);

  helper.template addToTape<updateFieldExtFunc>();
}
|
http://arxiv.org/abs/2307.04518v2 | 20230710123127 | On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion | [
"Casey Kennington"
] | cs.CL | [
"cs.CL"
] |
This document chronicles this author's attempt to explore how words come to mean what they do, with a particular focus on child language acquisition and what that means for models of language understanding.[I say historical because I synthesize the ideas based on when I discovered them and how those ideas influenced my later thinking.] I explain the setting for child language learning, how embodiment—being able to perceive and enact in the world, including knowledge of concrete and abstract concepts—is crucial, and how emotion and cognition relate to each other and the language learning process. I end with what I think are some of the requirements for a language-learning agent that learns language in a setting similar to that of children. This paper can act as a potential guide for ongoing and future work in modeling language.
§ INTRODUCTION
How can machines understand language? is a question that many have asked, and represents an important facet of artificial intelligence. Large language models like ChatGPT seem to understand language, but as has been pointed out <cit.>, even large, powerful language models trained on huge amounts of data are likely missing key information to allow them to reach the depth of understanding that humans have. What information are they missing, and, perhaps more importantly, what information do they have that enables them to understand, to the degree that they do? Current computational models of semantic meaning can be broken down into three paradigms:
* distributional paradigms where meaning is derived from how words are used in text (i.e., the notion that the meaning of a word depends on the “company it keeps," following <cit.>)
* grounded paradigms where aspects of the physical world are linked to language, on the view that the meaningfulness of language lies in the fact that it is about the world <cit.> (i.e., the symbol grounding problem following <cit.>)
* formal paradigms where meaning is a logical form (e.g., first order logic as in <cit.>)
Figure <ref> shows examples of the three paradigms to computational semantics and the kinds of language phenomena that they model well. These paradigms to computational semantics have been applied in various models that represent remarkable progress in recent years. However, now that large language models and other AI models are more widely used, it is clear that there are limits to their `understanding` (if they fully understand, then why is prompt engineering necessary?) which has prompted some to claim that a full, unified model of computational semantics is only possible if it goes through the same language acquisition process that children do.
Even if computational models of language meaning do not need to learn in the same settings and progression that children do, it is useful to make an appeal to what is known about how children do learn language in order to guide current and future modeling efforts to enable models to have a more holistic understanding of language.[Why? Because understanding and misunderstanding is a vexing societal problem and a scientific understanding of how to acquire, represent, and apply language in a computational model will tell us something about what language is, which may help us overcome those vexing misunderstandings.]
This paper represents such an appeal. At least it represents this author's attempt in the past decade to synthesize what is known about child language acquisition and model semantics more holistically.
§ WHY DO CHILDREN SPEAK?
One goal of my research is to determine the setting and requirements for language learning; specifically I have been searching for environmental reinforcement signals that a child could use to know that they were aligning with language speakers, and what the parameters of the pre-linguistic (i.e., before the first real words—not just babbling—are uttered with some kind of intent) language learner might be.
In “Child's Talk: Learning to Learn Language" <cit.>, we get at some important basics (the following are quotes from Bruner): it is banal to say that infants (and, generally, humans) are social; they are geared to respond to the human face, voice, action, and gesture. Children seem to want to coordinate their actions with, or at the very least mimic the behavior that they see in conspecific entities: i.e., with their mothers. <cit.> noted that children have basic needs that might contribute to spoken interaction, namely that children aspire to affection and intimacy with their caregivers. Mothers are able to track a child's progress and act accordingly.
Mothers seem to follow the Gricean maxims of quantity, quality, relation, and manner <cit.>. The initial cognitive endowment appears to be that it is goal-driven activity, is social and communicative—self-propelled and self-rewarding—constrained, ordered, systematic, familiar, often referring, and surprisingly abstract (as opposed to concrete, which is usually what is assumed when considering that children first refer to physical objects).
A book more directly related to what I was looking for was How Children Learn to Learn Language by Lorraine McCune, who claims that the basis of meaning is grounded in embodiment, something I had not really considered before. This way of thinking was quite different from the general NLP thesis that meaning can be derived from text alone, with word embeddings dominating the “semantic" side of the field. Text isn't embodied, and children don't learn language via the medium of text. In fact, if words are to be learned, then children must attend to physical objects (including self and others), and one thing that makes objects salient to children is the fact that they move <cit.>.
McCune makes the case (synthesizing other work) that linguistic patterns and order emerge not necessarily with explicit instruction—language learning doesn't require a curriculum, though caregivers use simple words and phrases at first. Children are interested in novelty, which gives them sensitivities to information coming through all of the sensory inputs and internally (e.g., proprioception) from their own bodies. McCune noted four stages that children go through as they learn language (p.27):
* Sensitivity to expressiveness (i.e., movement and sound)
* Transcendence of expressive qualities and knowing attitude (i.e., the child recognizes that actions and sounds are communicative)
* Denotative reference and semantics correspondence (i.e., children begin to refer to objects and learn their designations)
* Shared perceptual and representational settings (i.e., children learn language in a shared space where they and caregivers directly perceive objects and each other)
To learn language, eyes are important; children can follow the parental line of regard (i.e., triangulate what another person is looking at) by just seven months. Children somehow know that the eyes are an important modality of attention and they wonder why the eyes of others aren't pointed at themselves because the attention of others is something that children seem to innately seek. Reference to visually present objects (the subject of my PhD dissertation) is an important step in the developmental process, which coincides with the development of theory-of-mind (i.e., the child comes to the realization that they are an individual separate and distinct from others, and they allow others the same endowment of distinct individual identity and frame of reference to the world). But reference doesn't come first; there are other pre-linguistic parameters that must be in place before reference to visual objects can begin.
My primary takeaway from McCune's work was that children are motivated, intrinsically, to interact with other human beings and that language learning likely would not take place without interaction, nor without the motivation to interact. Other work reinforces spoken interaction <cit.>, and another reinforces that there is no overt supervision signal; children just need to explore and observe to find patterns and regularities, and once a regularity is learned, exploit it to learn more abstract regularities <cit.>.
§.§ Nature or Nurture?
Part of my quest to understand the settings and parameters for language learning has meant taking a stance on the nature (à la Chomsky) vs. nurture debate (spoiler alert: both are required, but with some nuance). It is clear that there is some degree of cognitive scaffolding that uniquely affords humans the ability to think and talk about abstract ideas using speech and other communicative mediums. Furthermore, it is known that pre-linguistic infants possess “highly developed perceptual mechanisms for the perception of speech" <cit.>. It is also clear that a fairly substantial degree of linguistic exposure is necessary for children to learn language, and by language I don't just mean syntax. Important to this debate was an observation made in <cit.> that learning is not the antithesis of innateness, but one of its important products. <cit.> makes a strong case that language requires experience and that languages are socially constructed even between the child language learner and the parent.
It is known that comprehension of speech occurs ahead of production of speech, and that visual, physical context is critical to learning language. An important takeaway from that book is that adults are not simply performing random behaviors, they are performing intentful behaviors, and children pick up on those intents. Understanding intent first seems to be a precursor to language.
For example, a child who is old enough to make use of hands to reach objects understands the intent to make an effort to reach for an object, so when that child sees another person reaching for an object there is an understanding of intent behind that other person (an important part of theory-of-mind); i.e., the person wants to grab the object. Sounds that come from human mouths that accompany those kinds of actions form the basis for language because language builds on understanding of intent. That, once again, simply means that the child learner requires a body to make utterances, to enact intentful behaviors, and to experience them personally in order to recognize them in others.
§ MODELING CHALLENGES
If embodiment is necessary, does it matter what kind of body? Searching for an embodied setting led me to explore robots as a body for my computational model, but I had no experience with robots. I did not want to build one. I was, however, an interested consumer that wanted to put my incremental (i.e., processes at the word level) dialogue system onto a robot platform so the physical interaction could take place. I opted to purchase several Anki Cozmo robots because they were affordable, small, and had a Python SDK that gave me access to sensors and control over actions.
Learning how to use the robot took longer than it should have, requiring branching into the field of human-robot interaction (HRI) because we had to establish that the Cozmo robot was the right one for the job of first language learning, and that people would actually treat it like a child that did not have adult-level cognitive capabilities <cit.>. There were technical challenges in this regard; getting our spoken dialogue systems to play nicely with the robot SDK took a lot of effort and we knew that the model of semantics that we had espoused wasn't quite right to work well with the robot's sensors. The importance of good technical infrastructure cannot be overstated, yet it takes up a lot of time because without it we cannot do productive research.
§.§ Objects in the World
How Children Learn the Meanings of Words by Paul Bloom <cit.> focuses somewhat more on the child who is ready to learn words and what those words might mean. Some words refer to objects, so what do children assume about those objects? Bloom mentioned Spelke objects with four important principles:
* cohesion — objects hold together
* continuity — objects remain even if they disappear from view
* solidity — objects are solid
* contact — objects can interact with each other including people
This concept is important because children interact with objects and people before they begin referring to objects using words, which requires that they have some kind of understanding of those objects; at least how they feel, their potential affordances (e.g., a ball is roll-able, a box can hold objects) which is what is potentially grounded into when the children begin to learn reference words. This puts reference (and affordances) in a central position to meaning, at least when children are learning words that refer to concrete, physical objects. Bloom states that if reference is central to meaning, then meaning is not determined by mental representations. This is an important point because that affects the model (whatever a mental representation is).
Similar to Bloom, <cit.> looked at the literature on early word learning in children. Children learn words slowly; if children could learn words quickly, we would not see a strong correlation between how much parents talk to their young children and a child's vocabulary scores, but we do (nod to Tomasello). There is such a thing as fast-mapping in older (though still young) children, but the initial words are not so easy.
Production (i.e., how a word is used / uttered) is the ultimate demonstration that a child has learned a word. Interaction requires speech, and speech unfolds phonetically over time, so listeners must interpret words incrementally, one syllable at a time. <cit.> is an entire book on this subject with the thesis that timing posits (rather, builds upon) the notion that the give-and-take and timing of that give-and-take are foundational before other cognitive development can take place. Not just spoken language; two objects can't occupy the same space so there is a give-and-take in the use of spaces and a give-and-take in the use of things like speech as a communication medium because we can only attend to one voice at a time, and because speech is manifested as compressed air within a specific space, so only one person can talk at a time if anyone is to be understood. This is either a handy thing because human attention is limited to one thing at a time, or it might actually be one of the things that limits human attention to one thing at a time.
Relatedly, one important point that is the main thesis of <cit.> is that our cognitive functions are housed in a body that lives in time and a kind of space with specific degrees of freedom (e.g., three axes of movement are allowed in that space) and our cognitive machinery builds its understanding (and as Johnson and Lakoff observed, metaphors <cit.>) upon the spatial foundation of language and cognition.
Imagine communicating without prepositions, or the information encoded in action verbs that connote activities that occur in time and space. This is where mothers shine: mothers take note of the space-time constraints then have simulated dialogues with their babies; when a child doesn't respond to a turn, the mother still allows the duration of what might have been a turn to elapse before taking the dialogue floor again. Mothers' high responsivity facilitates their infants' cognitive development. This is the literal cradle of social adaptation; cognition and cognitive development are inseparable from social adaptation—but it must be interactive in that cognitive beings are participating in the social interaction.
Digging deeper into child psychology, <cit.> explain a few important things. First, that parents repeat what children say, they don't overlap their speech with children (i.e., the dialogue has very easy-to-distinguish turns), the speech is simplified with primary content at the end of what is being said, and parents repeat what children mean by rephrasing in a grammatically correct way. Thus parents assume the child has an egocentric frame of reference of the world (i.e., they can only take on their own frame of reference—they haven't discovered that others have their own frame of reference). Parents keep a level of complexity just ahead the child which gives the child enough novelty, thereby holding attention and learning.
Taken together, physical, co-located interaction between parent and child is key. Children are motivated to interact, and caregivers assume an egocentric frame of reference for the children, meaning that parents don't refer to the objects, they often name the objects that the children are already attending. Learning that was helpful for me because those are some of the parameters that need to be in place for a computational model:
* it must be in an interactive setting
* the learner should probably be embodied (at the very least it should have sensory input)
* because of the ego-centricity we could assume that when objects are referred to, it is because they are already salient to the child
Which computational model can fulfill those requirements?
§.§ Deep Learning and Transformer Language Models
Deep neural networks are the mainstay of most NLP tasks and the latest architectures of the time led to a new language model that dramatically altered everything. <cit.> introduced the attention mechanism and <cit.> made attention and transformer architectures work in NLP as a new way to use a pre-trained language model called BERT to do anything. The caveat was that it was trained on large amounts of text. Broad-sweeping claims followed: BERT and more powerful derivatives were at the basis for artificial general intelligence, etc., etc. That caused me to raise an eyebrow because throwing text from books and websites at a model and using a learning regime of guess-the-word within a sentence wasn't anywhere close to how children learn language, if all of those books I had been reading about child language learning (and my own experience with my children) were to be believed.
But do we really need to be concerned with mimicking exactly how humans learn language? After all, airplanes fly without flapping their wings. Two responses to that: first, the reason deep learning works so well is because it is bio-inspired, so there is something potentially useful about trying to mimic biological processes. Second, language is an ability that is so uniquely human that understanding it means understanding how humans acquire and use language.
Thankfully, I wasn't the only one who had reservations about BERT and derivatives thereof. <cit.> highlights some of the important reservations that many in the field have with assigning so much meaning and understanding to BERT-like transformer language models. Others have followed with their own skepticisms since phrases like large language models like ChatGPT became part of everyday vernacular.
§ MEANING AND EMBODIMENT
§.§ What is Context?
What possibly annoyed me most in my investigations was the claim that the language models were following a Wittgenstein view of meaning in language, that meaning is derived from how it is used in a context. What context? The assumption is lexical context; i.e., words used in the textual context of other words. But there is also physical context, and I believe that is more likely what Wittgenstein meant. I picked up his Philosophical Investigations <cit.> in 2019 and what luck our library had it in English and its original German. I read both in tandem, and while I found the translation reliable (mostly), it bothered me that the accepted interpretation of Wittgenstein's stance on language meaning was text-centric, or at least context only meant what was spoken or written previously. This was late Wittgenstein when he thought he had settled language as a more formal system, but then spent time with children and (I conjecture) realized that the child mind is not the same as the adult mind.
More evidence that he meant that meaning comes from physical context: “There is a gulf between an order and its execution. It has to be filled by the act of understanding" (1.431) and not disconnected from the body (1.339). I interpret this to mean that meaning requires action, or a body to act in, because meaning is grounded in bodily movement. The word throw, for example, isn't just an idea and it's not just something we see someone else do; we have muscle memory of throwing that is part of the meaning of throw. He also brings up color and shape (1.72-74), that words refer to objects which themselves have affordances (1.11), and mentions that language use is first in reference to deictic (i.e. pointing) gestures. In other words, at first, language is grounded in the physical world. Only after a conceptual foundation of concrete concepts do we get to abstract language games (1.270+) and thinking about thought (1.428); i.e., use language to construct meanings of abstract words and abstract thought only after concrete scaffolding. In contrast, language models were only focusing on the last bit: use a language game of “guess the word that I randomly covered" to think about abstract thought, but in a model distributed in text, all words are treated as if they are abstract. This idea was hard to convey in some of my (rejected) papers. Wittgenstein did explore how words come to arrive at a shared meaning between speakers that don't have observable thoughts, which is what language games are for, an observation explored deeply in <cit.>.
In any case, language models are here and making an enormous splash and winning all of the benchmarks. If you aren't using language models in your research then at least one reviewer will use that as a justification for rejecting your paper. That's not to say that transformer language models don't have merit—they really do—but try using one out of the box on a robot, show the robot an apple, then ask if the apple is red.[Though there is now a trend in multimodal language models that at least bring the visual modality into language, see <cit.> for a review.] The language model doesn't know anything about redness, it only knows that red is a color and might be able to list some objects that are typically red. That is changing with visual and other multimodal language models, but as observed by <cit.>, the language “learning" progression made in the NLP community starting with transformer language models then working towards more embodied notions of meaning is the opposite direction of how children learn language. Clearly there is top-down processing that happens in cognitive processes and they are in play early on in a child's life, but large language models were completely lacking anything bottom up from the physical world.
§.§ Embodied Cognition
Johnson, along with George Lakoff, has been an early proponent of embodied cognition and carries forward the research of the time in his later book <cit.>. <cit.> puts language and bodies together. Both make a strong claim about the fact that our bodies are unique and distinct from other bodies, which allows interaction to take place and puts language at the social level, between linguistic bodies. Moreover, the categorical gap between sensorimotor life and the life of language is not only big, it is largely uncharted scientifically.
We cannot separate bodies from what they do, making people with bodies agents (in that they act), where agency is the active regulation of tensions between different negative tendencies; the actions of the agent are guided by positive norms that emerge dialectically out of opposing negative ones. Di Paolo of course mentions reference and social interaction, but the main thrust is that without the precarious materiality of bodies, there would be no meaning and no minds (p.110).
The idea that bodies are important to meaning is not new. <cit.> predicted what the world of AI might look like in the future (i.e., today, 20 years after its publication) and embodiment was not out of the equation. Brooks mentions Kismet, a simple robot that could respond to stimuli in ways that humans interpreted as somewhat intelligible. But, as the author admits, what Kismet cannot do is actually understand what is said to it. Nor can it say anything meaningful, and it turns out that neither of these restrictions seems to be much of an impediment to good conversation (sorry, chatbots).
Moreover, according to Brooks, researchers are operating in an underconstrained environment, and as they follow up interesting research ideas, they are tempted—and succumb—to make their abstract world more interesting for their research ideas, rather than being faithful to the reality of the physical world. This is exactly the issue I take with language models that are perfectly satisfied with deriving meaning through abstract text–partly because the datasets are easier to come by than the painful collection of often stilted spoken dialogues accompanied by a recording of the physical environment in which the spoken dialogue took place.
Embodied cognition is not without its critics, and there are plenty of theories of cognition that don't require a body. <cit.> in Meaning in the Brain takes a step back and looks at meaning from a different perspective: meaning is not a given, but rather the result of a constructive process that uses knowledge to make sense of sensory signals. So there is sense information, but the mechanisms in the brain aren't just reading sensory input in and finding patterns; rather, the brain is actively trying to find meaning. That's partly because prediction of possible pathways of conversation is a fundamental process of what the brain is doing during conversation, and that often drives the meaning of a given situation, linguistic or not. Embodied perceptual activations are not required for representing input's meaning, which is what makes language so ultimately useful. That of course only counts at the adult level, but each language learner needs to arrive at that point individually. Baggio does take that into consideration, noting that first words, in particular common nouns and their meanings, are learned in the first year of life, in social contexts where coordination between the infant and caregiver is generally the primary goal of interaction.
In the first two to three years of life, children learn largely by observing or imitating adults. Furthermore, considering the world outside just a single brain, Baggio quotes Miller's Law: In order to understand what another person is saying, you must assume it is true and try to imagine what it could be true of (Grice's maxims apply here, mostly the maxim of quality). Children must do that or they might not learn to speak at all. They only learn later about lies and manipulation—unfortunate aspects of the human condition—but no one would learn language if children assumed a-priori that nothing was true.
§.§ Which Body?
If embodiment is required for holistic computational modeling of language, then which body? Virtual agents offer a kind of body that could be an important stepping stone. They can be made to look human, which may also be important. The main drawback for me in my quest for holistic semantics was the fact that virtual agents exist in a virtual world. What is required is a body that can enact in the world that humans share with each other. That left robots.
<cit.> and more recently <cit.> bring some clarity to the stance that embodiment is crucial to cognitive development of minds. Like humans, robots are a kind of body that don't just observe the world; a robot is an autonomous physical device, of any shape or form, that can sense and perform actions in its working environment <cit.> (p.20). That means that the robot has to be able to act, at least to some degree, where it is physically located. Humans have the same limitation, though of course humans can control things remotely using technology, but our own limbs are limited to what they can reach here and now.
<cit.> makes the case that the Turing Test is not a proper test of intelligence because words and concepts, including the most abstract, must have their meanings ultimately grounded in sensory-motor experience. That's not a reductionist account that all meaning is eventually grounded; it's more of an account of a proper progression: one cannot come to learn the meaning (connotation) of abstract ideas like democracy without the vocabulary required to define democracy, and so on until one reaches the point where words are not learned by other words. Rather, they are concrete words that denote physical objects or object attributes. <cit.> showed how recursively considering words that define other words in a dictionary eventually lead to a core subset of words that all other words are defined upon.
Modern robotics has shown how important embodiment is <cit.>. Without a sensory-rich body, perception (as we know it) is impossible. And enaction (i.e., acting intentfully and not just sensing) is also vital—our actions are entangled with our thoughts just as much as perception. The life process, the life cycle of the individual, cannot be separated from embodied cognition. This is the difference between biological brains and computer brains. Indeed, <cit.>, required reading for anyone interested in first language acquisition, mentions that children seem to act out things as they are learning words, like opening and closing doors as they learn words open and close. Transformer models definitely don't do that, resulting in a meaningful lack of meaning.
§ EMOTION AND LANGUAGE
Though not strictly a book about child language development, <cit.> reported a longitudinal study of a group of people across decades to track their development from birth to adulthood that gives a broader picture of how humans develop in an individual, familial, and societal context. One of the main themes of the book is behavior (since behavior is something that can be observed), and what behavior means to the organization of an individual. How does this relate to language?
Central aspects of individual organization originate in the organization of early relationships. Language is part of that organization since it is a method of communication that maintains, fosters, or harms those relationships. Another is the main theoretical thrust of the book: that organization is the fundamental feature of behavior—i.e., language is part of the organizational structure itself. Organization is revealed in the interplay of emotion, cognition, and social behavior; development is defined by changes in organization of behavior over time, and organization of behavior is central to defining individual differences.
If development of an individual human means that they are organizing their behaviors (emotion, cognition, and social behavior and the interplay between emotion, cognition, and social behavior) then language development—which is an organizing behavior—plays a central role in the organization of emotion, cognition, and social behavior.
The idea that language plays a role in the organization of social behavior was clearly laid out in <cit.>, along with its accompanying idea that social behavior in turn plays a role in the organization of language because people have to coordinate what they mean when they speak with each other. Furthermore, that cognition plays a role in language and that language plays a role in cognition is well-established—language and cognition are often considered one and the same. But what about emotion? If emotion plays a role in the organization of behavior, and if emotion has a tight interplay with cognition and social behavior, what does emotion have to do with language?
Like many, I considered emotion to be part of the human experience, but clearly separate and distinct from cognition.[This is perhaps in part due to my affinity for Commander Data on Star Trek: The Next Generation who was a conscious and highly intelligent android, yet emotionless.] In fact, emotion was in my view often a hindrance to true linguistic understanding because emotion colors understanding in potentially the “wrong" way. The more we could separate emotion from meaning the better. That researchers were trying to model the ability to recover emotional content from text (e.g., a short post on a social media site) did seem useful, but the goal there was utilitarian, not to uncover the meaning of language.
My stance on emotion began to change when I took on a master's student, David, to whom I handed the Cozmo robot and tasked him with putting everything together we knew about language, meaning, spoken dialogue, and placing the robot in a setting where it could learn language from people without knowing any language a priori. After considering the task, he asked a question I had not considered (students tend to do that): “If we bring in people to talk to Cozmo, why would they care to help it learn language?" I don't think I grasped the question fully at the time. It partially meant that the science we were trying to advance was different from other science in that we weren't just observing people behaving, we were asking them to behave in a way such that they might help this little robot learn some words; i.e., we were asking them to be caregivers for an hour. David was concerned that paying participants for an hour of their time wasn't enough because there was no connection between them and the robot to care that the robot actually learned anything.
All of the literature I had read up until that point about what mothers do to foster development, particularly language development, backed up that concern. The robot had no mother. What we possibly lacked from the participants was “buy-in" to the needs of the robot to learn language. One way to potentially convince people to buy-in was to make the robot display behaviors that would motivate the participants to buy-in in a way that capitalized on a general human decency to help others.[The ethical implications of this were always a concern to us.] But the displays had to be age-appropriate for the robot; it was supposed to be like a pre-linguistic child, after all, and what kinds of behaviors can pre-linguistic children engage in to capitalize on the decency of others to help? Well, children smile and they cry—they display emotion. David did his due diligence in the relevant literature and found that a number of important behaviors had emotional underpinnings that could help facilitate language learning; for example, confusion and curiosity.
That put us in a difficult position that we had been in before with human perceptions of the robot's age and cognitive level: what behaviors could we make the robot do in order to make people think that the robot was displaying some kind of emotional state? With Cozmo we struck gold because Cozmo had nearly 1,000 short, pre-defined behaviors that we could easily invoke, and some of them were designed to have emotional content (e.g., one behavior in which the robot smiles, makes a “happy" noise, and moves its lift was meant to display happiness). David painstakingly video/audio-recorded all of the behaviors and put the recordings on a crowd-sourcing website, asking people to rate the behaviors for their emotional qualities. That work led to a model of emotion recognition, not from humans (!) but inferring what emotion people would attribute to a robot behavior based on the behavior itself (i.e., the movements, face, and sounds from the robot). That led to <cit.>, which was only the beginning of what we thought was a temporary, minor detour down the path of emotion and what it might have to do with language.
§.§ Concrete Affect, Abstract Emotion
My original hypothesis about emotion and language was that because emotion exists and is used to display information about the state of a child before language is learned, then emotion must be something that, like other perceptual modalities, language grounds into (i.e., the modality itself is part of the meaning) as language is learned. To find out more about emotion in general (not just how it relates to language), I picked up <cit.>, a dense but thorough read, and had to step back—way back—from what I thought I understood about emotion. That led me to read a few papers on the subject of language and emotion with a more open mind; these were neuroscience and psychology papers (the NLP community was only interested in how to infer the emotion of the writer of a piece of text, not in how it relates to meaning).
Two papers, both originating from Gabriella Vigliocco, really changed my understanding because they made a strong case that abstract words are more directly tied to emotion than concrete words <cit.>. This agreed, it seemed, with <cit.>, which synthesizes the latest emotion research. At the very least, I came to understand the difference between emotion and affect (most people use the two terms interchangeably): emotion is tied to the linguistic and cognitive system, making emotions to a large degree socially constructed, whereas affect is more basic and grounded in embodiment.
§.§ Emotion and Cognition
A Child's Path to Spoken Language by John L. Locke <cit.> makes some important arguments vis-à-vis cognition and emotion. As has been noted by others (some of them citing Locke's work), access to the vocalizations of one's species is simply not enough. Furthermore, an inclination to imitate and all the good communicative intentions in the world are insufficient for being a speaker of a language. Rather, the availability of appropriately interactive tutors is part of the story. That means putting an agent (or computational model) in a place where it can observe language, be it text or even referring expressions made to visually present objects, does not bring the child to language capabilities as much as participatory interaction. But interaction doesn't take place with just anyone—language is developmentally additive in the sense that the multiplicity of cues that trip off phonetic categories are piled on top of the prosodic, affective, and speaker-identifying cues that form the infralinguistic (i.e., non-linguistic) core of our vocal messages. It seems that infants need to know and have experience with the identity and intentions of those who are speaking to them. This could be because mothers imitate children 90% of the time, not the other way around.
Reinforcing some of the themes mentioned above, Locke further argues that language development proceeds from the general to the specific, and complex structures evolve by differentiation of a larger entity into smaller parts or functions (words to phonemes, phrases with prosody to words). From its earliest opportunity, the infant seeks out the particular kinds of stimulation that it enjoys and that its brain may need in order to develop maximally. Young humans express more interest in the eyes than in any other region of the face, and it seems that the human infant is largely preadapted to indexical and affective communication. Even little monkeys and apes do not “while away the hours in idle vocalization;" quite the contrary, but little humans babble.
An anecdote illustrates this. When my daughter was learning to speak, her word for all non-flying animals was cow because we lived near a farm and she saw cows a lot. Only later did she use size to distinguish between big animals and small ones, thus the concept dog emerged for the latter category. As she learned more words for more specific species, she picked up on the details that distinguished them.
Studies reveal that it is not just the sound of speech that sets infants to vocalizing or reinforces them for doing so: the person doing the speaking must be physically present and it may help if the speaker is visibly looking at the child; vocal imitation may occur more commonly when the baby can see the person who is talking or see a person while there is talking. It's worth noting research that sorts children into two cognitive camps: some children are more referential so their first words largely refer to physical objects, while other children are more expressive so their first words are more egocentric about their own feelings and needs.
In <cit.>, an entire chapter is dedicated to emotion and language development. Effectance (i.e., motivation to act and interact) and affect play several major roles in human communication. Certainly they expand the infant's capacity for intelligent behavior by pushing it to explore and to interact with the people and objects in its environment. Affective displays provide parents with a basis for social responding and cues they may use in adjusting the psychological and physical care of their infant. The experience of emotion fills infants with energies that are dissipated by behaviors (such as squealing), which are by their very nature communicative (p.328). Piaget said that affect plays an essential role in the functioning of intelligence. Without affect there would be no interest, no need, no motivation; and, consequently, questions or problems would never be posed, and there would be no intelligence. Affectivity is a necessary condition in the constitution of intelligence.
Locke also cites Sroufe and Waters, who worked on the longitudinal study mentioned above <cit.>; early in that project they note that cognitive advances “promote exploration, social development, and the differentiation of affect; and affective-social growth leads cognitive development [...] neither the cognitive nor the affective system can be considered dominant or more basic than the other; they are inseparable manifestations of the same integrated process [...] It is as valid to say that cognition is in the service of affect as to say that affect reflects cognitive processes." Moreover, in the real speech of sophisticated speakers, where both linguistic content and vocal affect are present, one type of cue does not preempt the other, and for speech to work this must be the case.
Listeners must know both what the speaker is saying and what he intends by saying it. Speakers duplexly pick up information about the linguistic content and the speaker affect because the cues to these things are of different sorts and are processed by different brain mechanisms. Thus, according to Locke, the meaning of an utterance is in the linguistic content, but the intent of the speaker who made the utterance is in the affect and emotion. In fact, children are adept at reading intents of others via affect and emotion, before they can even speak or really understand words.
§.§ Modeling Emotion
Taken together, the above discussion means that the separation of language from emotion in computational models is going to lead to something that is only an approximation of what a language model should encode, if any claim is to be made that a model has any degree of semantic meaning. However, emotion is not just another modality like vision through a camera or haptic sensations through a robotic hand; emotion is communicative on its own, albeit with limited (but important) social signals, pre-linguistic in that it helps scaffold the language learning process especially early on, and emotion is later intertwined with cognitive development and abstract linguistic meaning.
How, then, could we represent affect and/or emotion computationally? Just as we cannot derive meaning from text alone, we cannot derive emotion from text alone. Pulvermüller <cit.> I think gives us a hint: the only way we can arrive at a representation of emotion that we could possibly make use of computationally is if we tie emotion to behavior, which is how affect and emotion are signalled between humans. That means we need something to produce that behavior—we've already established that embodiment and interaction are crucial, and that robots are the only computational devices that fulfill the requirements of embodiment because they can act in the physical world. Thus we need humans to watch robots and record their appraisals of robot behaviors for emotional content, then link the behaviors to the emotion. That's only an approximation of emotion through the back door, but it's a start.
§ MEANING AND THE BRAIN
Explaining what was missing from computational models of language (like large language models) was easy when I explained the difference between concrete and abstract words <cit.>. It's no dichotomy either; some concepts are very concrete in that they exist physically (chair), a bit less concrete in that they exist physically but also have abstract properties that make them what they are (farm, city), and some words are abstract in that there is no physical denotation where the meanings are built upon meanings of other concepts (democracy).
From my readings above, learning that concreteness and abstractness play with emotion in different ways was additional evidence that the concreteness/abstractness dimension of language was something worth my attention. Neuroscience literature further showed that abstract and concrete concepts have different representational frameworks in the brain <cit.>. However, I found the neuroscience literature difficult to digest because of the terminology.
A book by Iain McGilchrist helped me grasp some of the neuroscience terminology <cit.>, and I found that it fed my obsession for the concrete-abstract dimension of language. The main thesis is that the left and right brain hemispheres, while very similar in function, have some notable differences with many important implications. Of course, I was primarily interested in how those differences might affect the language acquisition process. The following largely deal with the concrete-abstract nature of the hemispheres and the role emotion plays:
* The left hemisphere is the hemisphere of abstraction, which, as the word itself tells us, is the process of wresting things from their context. Thus the right hemisphere does have a vocabulary: it certainly has a lexicon of concrete nouns and imageable words which it shares with the left hemisphere; but, more than that, perceptual links between words are made primarily by the right hemisphere (p.50). In general, then, the left hemisphere’s tendency is to classify, where the right hemisphere’s is to identify individuals (p.52). It has been suggested that our concepts are determined by the language that we speak (the Sapir–Whorf hypothesis). However, this is no more than a half or quarter truth. Children certainly often get the concept first and then quickly learn the word to describe it, which is the wrong way round from the Sapir–Whorf point of view. Moreover there is evidence that five-month-old babies have a concept, to do with tightness of fit, which they subsequently lose if their native language does not embody the same concept (p.110).
* ... the right hemisphere’s interest in language lies in all the things that help to take it beyond the limiting effects of denotation to connotation: it acknowledges the importance of ambiguity. It therefore is virtually silent, relatively shifting and uncertain, where the left hemisphere, by contrast, may be unreasonably, even stubbornly, convinced of its own correctness (p.80).
* `emotion binds together virtually every type of information the brain can encode... [it is] part of the glue that holds the whole system together’ (p.88; quoting Douglas Watt)
* To recapitulate, then: language originates as an embodied expression of emotion, that is communicated by one individual `inhabiting’ the body, and therefore the emotional world, of another; a bodily skill, further, that is acquired by each of us through imitation, by the emotional identification and intuitive harmonisation of the bodily states of the one who learns with the one from whom it is learnt; a skill moreover that originates in the brain as an analogue of bodily movement, and involves the same processes, and even the same brain areas, as certain highly expressive gestures, as well as involving neurones (mirror neurones) that are activated equally when we carry out an action and when we see another carry it out (so that in the process we can almost literally be said to share one another’s bodily experience and inhabit one another’s bodies) [...] which binds us together as physically embodied beings through a form of extended body language that is emotionally compelling across a large number of individuals within the group (p.122).
There are other excerpts from the book that relate to language, but these suffice for my purposes here. My primary takeaway is that the concrete-abstract dimension of language is one of the most fundamental aspects of language itself; certainly also of cognition and emotion. In fact, the neurological hardware upon which human thought takes place has, it seems, split the hemispheres to capitalize on the interplay between concreteness and abstractness. Pre-linguistic indeed. As for computational models, large language models like ChatGPT that are trained only on text are purely left-brain models.
§.§ Implications
Concreteness and abstractness are well studied in some fields, but taken for granted in NLP research. The concreteness-abstractness divide can help us understand meaning and the assumptions we are making in our models (e.g., that large language models trained on text are purely abstract). Moreover, cognition (which is often equated with language and vice versa) doesn't stand on its own: cognition needs emotion.
Incorporating emotion into the language learning process is an additional challenging requirement of putting a computational agent into a setting that is similar to the setting that children are in when they learn their first words. The requirements are, thus far, that the child-like agent must:
* be embodied—the agent must be able to act in its environment and potentially manipulate objects, and the language model needs access to internal embodied states and sensory modalities of the external world
* interact using speech—the agent must use speech as the primary modality for acquiring language, partly because prosody helps carry affective information, and be motivated to interact with others
* be physically co-located with language speakers—the agent must be able to visually and auditorially perceive the person(s) that it is learning language from, partly because physical behaviors carry emotional information
* distinguish concrete and abstract concepts—be able to learn concrete concepts that denote physical individual things, but also be able to use those concrete concepts abstractly and be able to learn abstract concepts from existing knowledge
* use affect and emotion—the agent must use affective displays to facilitate language learning in the early stages; language must ground into affect, and emotion concepts must be acquired in lock-step with cognitive and abstract language development
There are certainly other aspects that I did not explore or find in my years-long search for relevant literature. For example, theory-of-mind needs to be modeled, and while recent work is exploring theory-of-mind more deeply, the entire notion needs to be ironed out to the degree that it could be modeled. Likewise, play is an important part of cognitive development, in part because it gives children a chance to enact meaning with their bodies, to make mistakes with language as they are learning it, and to discover affordances of objects in the world. But I think that theory-of-mind, affordance, and play are, like language, intertwined with emotion.
Acknowledgements I would like to thank Patty Kennington-Rooks and Vanessa Christensen for helpful and detailed feedback, as well as fruitful discussion. I would also like to thank members of the Speech, Language, and Interactive Machines research group at Boise State University for helping to fine-tune some of the ideas.
|
http://arxiv.org/abs/2307.05716v1 | 20230708074507 | Hierarchical defect-induced condensation in active nematics | [
"Timo Krüger",
"Ivan Maryshev",
"Erwin Frey"
] | cond-mat.soft | [
"cond-mat.soft"
] |
a,*]Timo Krüger
a,*]Ivan Maryshev
a,b,1]Erwin Frey
[a]Arnold Sommerfeld Center for Theoretical Physics (ASC) and Center for NanoScience (CeNS), Department of Physics, Ludwig-Maximilians-Universität München,
Theresienstrasse 37, 80333 Munich, Germany
[b]Max Planck School Matter to Life, Hofgartenstraße 8, 80539 Munich, Germany
[*]T.K. and I.M. contributed equally to this work.
[1]Corresponding author: [email protected]
Hierarchical defect-induced condensation in active nematics
[
August 12, 2023
===========================================================
Topological defects play a central role in the formation and organization of various biological systems.
Historically, such nonequilibrium defects have been mainly studied in the context of homogeneous active nematics.
Phase-separated systems, in turn, are known to form dense and dynamic nematic bands, but typically lack topological defects.
In this paper, we use agent-based simulations of weakly aligning, self-propelled polymers and demonstrate that, contrary to the existing paradigm, phase-separated active nematics form -1/2 defects. Moreover, these defects, emerging due to interactions among dense nematic bands, constitute a novel second-order collective state. We investigate the morphology of defects in detail and find that their cores correspond to a strong increase in density, associated with a condensation of nematic fluxes. Unlike their analogs in homogeneous systems, such condensed defects form and decay in a different way and do not involve positively charged partners.
We additionally observe and characterize lateral arc-like structures that separate from a band's bulk and move in the transverse direction.
We show that the key control parameters defining the route from stable bands to the coexistence of dynamic lanes and defects are the total density of particles and their path persistence length.
We introduce a hydrodynamic theory that qualitatively recapitulates all the main features of the agent-based model, and use it to show that the emergence of both defects and arcs can be attributed to the same anisotropic active fluxes.
Finally, we present a way to artificially engineer and position defects, and speculate about experimental verification of the provided model.
§ INTRODUCTION
The characteristic features of a nematic liquid crystal are the emergence of long-range orientational order and the occurrence of half-integer topological defects, which, however, are annealed at thermodynamic equilibrium <cit.>.
The dynamics of its nonequilibrium counterpart, an active nematic <cit.>, is in contrast governed by the persistent creation and annihilation of pairs of topological defects with opposite charges, leading to a dynamic steady state commonly referred to as active turbulence <cit.>.
Dense gel-like mixtures of microtubules (cytoskeletal filaments) and kinesins (molecular motors) that cause relative sliding between microtubules have become experimental platforms for studying the formation, dynamics, and annihilation of these toplogical defects <cit.>.
The observed complex defect dynamics have been investigated using hydrodynamic theories <cit.>.
The basic insight derived from such studies is that topological defects constantly generate active flow in momentum-conserving systems <cit.> or active flux in momentum non-conserving systems <cit.>.
Another experimental model system for active nematics is the actomyosin motility assay, in which actin filaments actively glide over a lawn of myosin motor proteins, performing a persistent random walk with constant speed <cit.>.
These systems exhibit phase separation into dense polar-ordered regions and dilute disordered regions, which is further corroborated by numerical analyses of corresponding theoretical models <cit.>.
Tuning the interaction between actin filaments by the addition of polyethylene glycol led to the emergence of a dynamic coexistence of ordered states with fluctuating nematic and polar symmetry <cit.>, which has been explained by pattern-induced symmetry breaking <cit.>. Systems exhibiting dense, purely nematic lanes have been thoroughly investigated by both simulations and hydrodynamic theories <cit.>.
As for half-integer topological defects, the common paradigm states that they are absent in dilute self-propelled active nematics <cit.>, but fundamental exclusion criteria for their existence have not been given.
In fact, no steady-state topological defects have yet been found in this subclass of strongly phase-separated active matter.
So far, it has only been observed that transient defects can occur in models with weak density inhomogeneity during the coarsening process <cit.>.
Moreover, toy models inspired by dilute nematic systems without self-propulsion can exhibit defect formation <cit.>.
However, the authors attest that the connection of their phenomenological theory to existing experimental systems is tenuous.
Here we investigate dilute active nematics for the presence of defects using an agent-based model of “weakly-aligning self-propelled polymers” (WASP) which has been shown to faithfully reproduce the behavior of real actomyosin motility assays on all relevant length and timescales including pattern formation processes and the topology of the phase diagram <cit.>.
This allows us to leverage these agent-based simulations as an in-silico experimental system with which to discover new phenomena.
We show that the two hitherto seemingly incompatible phenomena — phase separation and topological defects — are actually closely linked in weakly interacting active nematics.
In particular, we characterize a subclass of topological defects associated with the compression of nematic fluxes, which are similar to phenomena predicted in conceptual models <cit.>, albeit in a different context.
These defects appear as characteristic collective excitations in a novel nonequilibrium steady state. They are in dynamic equilibrium with nematic lanes from which they emerge and into which they disassemble.
Additionally, we find another type of topologically charged structure, filamentous arc ejections (FAEs) — elongated arc-shaped polymer bundles that detach from nematic bands — remotely resembling +1/2 defects.
To elucidate the mechanisms underlying these phenomena, we also introduce a hydrodynamic theory, building on previously published models <cit.>.
Exploiting the respective strengths of these two complementary theoretical approaches, we uncover a close relationship between the dynamics of phase-separated nematic bands, formation of topologically charged structures, and the associated condensation phenomena.
§ RESULTS
§.§ Simulation setup
We use agent-based simulations that emulate the dynamics of weakly interacting self-propelled polymers (WASP) of fixed length L on two-dimensional surfaces building on earlier work <cit.>; refer to the SI for further details on the algorithm.
Each polymer consists of a tail pulled by a tip that follows a trajectory corresponding to a persistent random walk with persistence length L_p.
Upon collision of a polymer tip with the contour of another polymer, a weak alignment torque is assumed to act that changes its direction of motion [Fig. <ref>(a)].
Here we use a purely nematic alignment interaction [Fig. <ref>(b)] whose strength is set by the parameter α_n.
Additionally, a small repulsion force F acts on polymer tips that overlap with other polymers.
Here we are interested in systems that have collision statistics with purely nematic symmetry [Fig. <ref>(b)].
Figure <ref>(c) shows the phase diagram of such a weak nematic as a function of the average polymer density ⟨ρ⟩ L^2
and path persistence length L_p; hereafter ⟨ ... ⟩ denotes spatial averaging.
It exhibits an isotropic-nematic transition from a disordered homogeneous phase to a nematically ordered phase.
The phase boundary ρ_n (L_p) approximately scales as L_p^-1; refer to
the SI for details.
Thus, when the phase diagram is redrawn as a function of L_p and the spatially averaged normalized density ⟨ϕ⟩ = ⟨ρ⟩ / ρ_n, the phase boundary essentially becomes a horizontal line [inset of Fig. <ref>(c)].
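For readers who want a feel for the type of dynamics involved, the following minimal sketch illustrates nematically aligning self-propelled agents in the spirit of the setup described above. It is not the WASP implementation (which evolves extended polymers with tails, tip-contour collisions, and the repulsion force F; see SI): polymers are reduced to point particles, and all parameter names and values are placeholders.

```python
import numpy as np

# Minimal point-particle sketch of nematically aligning self-propelled agents.
# NOT the WASP implementation used in this work; values are placeholders only.

rng = np.random.default_rng(0)

L_box, N = 50.0, 500        # periodic box size, number of agents
v0       = 1.0              # self-propulsion speed
L_p      = 20.0             # path persistence length of a free agent
alpha_n  = 0.05             # weak nematic alignment strength
r_int    = 1.0              # interaction radius standing in for the polymer length L
dt       = 0.1

pos   = rng.uniform(0.0, L_box, size=(N, 2))
theta = rng.uniform(0.0, 2 * np.pi, size=N)

def step(pos, theta):
    # Persistent random walk: rotational diffusion with D_r ~ v0 / L_p (one common convention).
    D_r = v0 / L_p
    theta = theta + np.sqrt(2.0 * D_r * dt) * rng.standard_normal(N)

    # Weak nematic alignment: torque ~ sin(2*(theta_j - theta_i)) from neighbours,
    # so antiparallel neighbours align as effectively as parallel ones.
    for i in range(N):
        d = pos - pos[i]
        d -= L_box * np.round(d / L_box)               # minimum-image convention
        nbrs = np.flatnonzero((d**2).sum(axis=1) < r_int**2)
        nbrs = nbrs[nbrs != i]
        if nbrs.size:
            theta[i] += alpha_n * dt * np.sin(2.0 * (theta[nbrs] - theta[i])).mean()

    # Constant-speed self-propulsion.
    pos = pos + v0 * dt * np.column_stack((np.cos(theta), np.sin(theta)))
    return pos % L_box, theta

for _ in range(100):
    pos, theta = step(pos, theta)
```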
§.§ Dense topologically charged structures
As expected for nematically interacting systems, our simulations show isolated nematic lanes that exhibit strong bending fluctuations on large length and time scales (cf. Movie S1 SI) caused by lateral instabilities <cit.>.
In our simulations, in addition to these typical nematic lanes, we also discover distinct types of topologically charged structures.
One class of these are three-armed filamentous structures containing a topological defect with charge -1/2 at their center [Fig. <ref>(a)].
They are typically formed when three curved nematic lanes — with their convex sides facing each other — meet and condense into a topological defect with a high-density core region [Fig. <ref>(b)]; we do not observe “collisions” of four lanes.
Unlike defects in non phase-separated active nematics, these condensed topological defects (CTDs) do not have a directly corresponding positively charged partner.
Instead, they are surrounded by an extended topologically charged region with a dispersed positive charge, as can be seen in Fig. <ref>(a) (lower right panel), which depicts the topological charge density as defined in Refs. <cit.>.
Moreover, our simulations show that the active nematic flux is gradually compressed as the triple junction of the nematic lanes (defect core) is approached
[Fig. <ref>(a), top right panel].
This leads to a reduction in lane width and a corresponding increase in density, which reaches a maximum in proximity of the core.
These three-armed topological defects are dynamic structures that are constantly being dissolved and reassembled.
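As a practical aside, the topological charge of structures like the one in Fig. <ref>(a) can be estimated from a coarse-grained director field by summing winding numbers around grid plaquettes. The sketch below is a generic discrete estimate of this kind; it is not necessarily the exact charge-density definition used in the cited references, and the example field is an idealized -1/2 defect.

```python
import numpy as np

def nematic_charge(theta):
    """Winding number (in units of full turns) per grid plaquette of a
    director-angle field theta, which is only defined modulo pi."""
    def dtheta(a, b):
        # Smallest angle difference compatible with the nematic (mod-pi) symmetry.
        return (b - a + np.pi / 2) % np.pi - np.pi / 2

    t00, t10 = theta[:-1, :-1], theta[1:, :-1]
    t11, t01 = theta[1:, 1:],   theta[:-1, 1:]
    # Counterclockwise loop: (i,j) -> (i+1,j) -> (i+1,j+1) -> (i,j+1) -> (i,j).
    winding = (dtheta(t00, t10) + dtheta(t10, t11)
               + dtheta(t11, t01) + dtheta(t01, t00))
    return winding / (2 * np.pi)

# Example: an idealized -1/2 defect at the origin (axis 0 is x, axis 1 is y).
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
theta = -0.5 * np.arctan2(y, x)
q = nematic_charge(theta)
print(q.sum())   # approximately -0.5; nonzero only in the plaquette containing the core
```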
A second class of structures we observe are lateral filamentous arcs that separate from the bulk of a straight nematic band and eventually move in transverse direction.
A time trace of such a filamentous arc ejection (FAE) is shown in Fig. <ref>(c).
These structures have similarities to +1/2 defects: they are “curved” and they always emanate in the direction of their convex side.
Somewhat similar observations have been made in continuum models constructed for nematic particles with velocity reversals <cit.>. However, the authors did not address the properties of these structures or the reasons underlying their formation.
While there are certainly similarities on a superficial phenomenological level between FAEs and these structures, the underlying mechanisms and nature of these structures may be quite different.
Having discovered these collective topological structures in our in-silico experiments, we sought to explore how their emergence is affected by a change of parameters.
However, since the lateral instabilities of nematic bands required for the formation of CTDs (cf. section “From CTDs to FAEs and bands” below) occur only on very long time scales, a systematic investigation of a phase diagram in agent-based simulation is numerically prohibitively demanding.
Therefore, we sought an alternative way to explore the spatiotemporal dynamics of the systems that would enable us to dissect the processes underlying the formation of CTDs and FAEs.
As explained next, we achieved this through constructing a hydrodynamic approach that captures all the main features of our agent-based simulation setup.
§.§ Hydrodynamic model provides access to the phase diagram
To this end we used the standard Boltzmann-like approach
(see SI).
However, as discussed below, this model was insufficient to explain the emergence of half-integer defects and was therefore generalized to include density-dependent corrections.
By analogy with passive model C in the Hohenberg-Halperin classification scheme <cit.> we formulate a hydrodynamic model in terms of a density and an order parameter field.
For an active nematic, these are the (normalized) polymer density
ϕ = ∫dθ P(θ)/ ρ_n,
and the traceless and symmetric tensor
Q_ij = ∫dθ P(θ)(2n_in_j- δ_ij) (nematic order parameter), where the unit vector 𝐧= (n_x,n_y)=(cos θ, sin θ) defines the local polymer orientation vector and P(θ) denotes the probability density for the polymer orientation θ.
The eigenvector associated with the larger of the two eigenvalues of the Q-tensor can be viewed as depicting the average orientation of the polymers.
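For completeness, the sketch below shows one way to extract these two fields from agent data on a grid; the binning and the per-particle normalization of Q are illustrative choices (other conventions weight Q by the local density), not necessarily those used for the figures.

```python
import numpy as np

def coarse_grain(pos, theta, L_box, n_bins, rho_n=1.0):
    """Bin agent data into a normalized density field and the 2D nematic tensor.

    pos   : (N, 2) positions in [0, L_box);  theta : (N,) orientation angles.
    rho_n : normalization density (e.g. the isotropic-nematic threshold).
    """
    edges = np.linspace(0.0, L_box, n_bins + 1)
    area  = (L_box / n_bins) ** 2

    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    phi = counts / area / rho_n                        # normalized density field

    # Q_xx = <cos 2theta>, Q_xy = <sin 2theta>; Q_yy = -Q_xx (traceless, symmetric).
    sxx, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges],
                               weights=np.cos(2 * theta))
    sxy, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges],
                               weights=np.sin(2 * theta))
    Qxx = np.divide(sxx, counts, out=np.zeros_like(sxx), where=counts > 0)
    Qxy = np.divide(sxy, counts, out=np.zeros_like(sxy), where=counts > 0)

    S        = np.sqrt(Qxx**2 + Qxy**2)                # local degree of nematic order
    director = 0.5 * np.arctan2(Qxy, Qxx)              # local director angle (mod pi)
    return phi, Qxx, Qxy, S, director
```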
Unlike classical model C, however, a hydrodynamic model for active nematics must be intrinsically nonequilibrium in character and its dynamics can not be determined by the gradient descent in a single free-energy landscape.
Nevertheless, using the analogy to the dynamics near thermal equilibrium, some intuition can be gained for the design of the model.
As we discuss in more detail below, part of the system's dynamics can be understood in terms of two separate effective free-energy functionals for the non-conservative Q-tensor (F_Q) and the conservative density field (F_ϕ), similar to related nonequilibrium models discussed recently <cit.>.
Mass-conservation requires that the density obeys a continuity equation ∂_t ϕ = - ∂_i J_i.
In general, for symmetry reasons, the current must be the gradient of a scalar quantity and a tensorial quantity containing the Q-tensor.
Similar to model B, the scalar component is of the form
J_i^iso = -∂_i μ (ϕ)
with chemical potential
μ (ϕ) = ν (ϕ) ϕ.
Here, the first and second terms of ν (ϕ) = λ^2+ν_ϕϕ account for motility-induced effective diffusion with the diffusion constant λ^2 ∝ L_p^2 <cit.>, and for steric repulsion due to excluded-volume interactions <cit.>, respectively. The latter contribution represents the density-dependent correction.
For the tensorial part, we write J_i^aniso = -∂_j [χ(ϕ) Q_ij], which again is assumed to contain motility- and interaction-induced parts: χ (ϕ) =λ^2+χ_ϕϕ. Similar as above, the latter term represents the density-dependent correction motivated by theories for active nematics <cit.>, and it is controlled by the phenomenological parameter χ_ϕ.
It will turn out that this anisotropic term leads to phase separation, since it causes compression in the direction perpendicular to the axis of the local orientational order.
Taken together, one gets
∂_t ϕ = ∂_i ∂_j [ ν(ϕ) ϕ δ_ij + χ(ϕ) Q_ij ] .
The isotropic flux (first term) can be written in terms of an effective free-energy functional F_ϕ= ∫d^2 x (1/2λ^2 ϕ^2+1/3ν_ϕϕ^3).
In contrast, however, the anisotropic flux (second term in (<ref>)) violates time-reversal symmetry <cit.>.
We assume the time evolution of the nematic tensor to be of the form
∂_t Q_ij = -[ δF_Q/δQ_ij ]^st = -[ δF_Q/δQ_ij - (1/2) δ_ij Tr(δF_Q/δQ_ij) ] ,
which corresponds to a gradient dynamics (model A) determined by the effective free-energy functional F_Q; here and in the following [...]^st denotes the traceless and symmetric part of a tensor.
We have chosen the timescale such that the friction coefficient in the gradient dynamics is set to 1.
The effective free-energy functional has a standard Landau-deGennes (LdG) part <cit.> responsible for an isotropic to nematic transition, but also includes a coupling between density gradients and the orientation of polymers as in inhomogeneous active nematics <cit.>,
F_Q = ∫ d^2x ( (1/2) [ (1-ϕ) Q^2 + (1/2) β (Q^2)^2 + κ (∂_j Q_ij)^2 ] - Q_ij [ ω ∂_i∂_j ϕ + ω^a (∂_i ϕ)(∂_j ϕ) ] ) .
The LdG free-energy density in terms of the order parameter Q^2= Q_klQ_kl describes a nematic ordering transition at the critical density ϕ_c = 1 with the gradient term playing the role of a generalised elasticity.
The stiffness coefficient (or Frank constant) κ also contains two contributions, one from the motility of the polymers <cit.>, and the other due to interactions <cit.>: κ(ϕ) = (1/2) λ^2 + κ_ϕ⟨ϕ⟩.
Note that the last term — the density-dependent correction to elasticity — is linearised around the mean value of density ⟨ϕ⟩
(see SI).
The second line in (<ref>) takes into account the coupling between density gradients and nematic order, and can be derived solely on the basis of symmetry considerations.
The functional derivatives of F_Q with respect to the nematic tensor correspond to “interfacial torques” <cit.> in the equation of motion for the nematic tensor.
They rotate the director at the interface between high- and low-density domains, where the gradients of ϕ are the strongest.
The lowest-order coupling — and the associated “aligning torque” <cit.> ω [∂_i∂_jϕ]^st —
is iconic for active nematics <cit.>.
It is responsible for the destabilization of straight nematic lanes, eventually resulting in lane undulations (or other types of chaotic behavior associated with “dry active turbulence” <cit.>).
In our case, this term is due to self-advection (ω=λ^2, see
SI)
but it can be considered as “diffusive” since anisotropic diffusion of particles leads to an analogous contribution.
Interaction between the polymers yields the next-order couplings in (<ref>).
On symmetry grounds there are two different terms quadratic in ϕ:
[ϕ ∂_i∂_jϕ]^st and [(∂_iϕ) (∂_jϕ)]^st; both can also be obtained by explicitly coarse-graining microscopic models for interacting active polymers <cit.>.
The former recalls the diffusive ω-term (especially after the linearization around ⟨ϕ⟩) and therefore is ignored here.
The latter is associated with torque, which is bilinear in the density gradients ω^a [(∂_iϕ) (∂_jϕ)]^st, providing an effective liquid-crystalline “anchoring” <cit.> (or preferred orientation) of the nematic director field with respect to the density gradients.
The parameter ω^a is taken to be negative to ensure tangential anchoring, implying that polymers tend to orient perpendicular to the density gradients (or parallel to the boundary of dense lanes).
For simplicity, we ignore additional non-linearities in the equation of motion for the Q-tensor. Such contributions are considered elsewhere <cit.> where they are typically regarded as a modification to the elasticity terms.
Taken together Eqs. (<ref>, <ref>) are a generalization of the active model C <cit.>, which was originally introduced for non self-propelled biofilaments in the presence of molecular motors. The major difference is that the model now explicitly includes self-propulsion. Moreover, by including density-dependent terms, it shows the same results as the agent-based simulations (see discussion below) and is therefore quantitatively linked to the actomyosin motility assay. Finally, it possesses less degrees of freedom, since most of the terms are rigorously derived and are controlled by the same parameter (λ).
We consider
ν_ϕ, χ_ϕ, κ_ϕ, ω and ω^a as phenomenological parameters and solve the equations of motion numerically.
This model robustly reproduces the results obtained in the agent-based simulation to a very high degree of fidelity and for a large range of parameters.
It exhibits CTDs and FAEs whose structure, topological charge, and formation process are very similar to the ones observed in WASP; cf. Fig. <ref>(d)-(f). Therefore, in the following we use this hydrodynamic approach to analyse and underpin the main mechanisms of formation of CTDs and FAEs.
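To make the structure of the equations of motion concrete, the sketch below integrates a representative version of Eqs. (<ref>)-(<ref>) with explicit Euler stepping and periodic finite differences. It is meant only to illustrate how the individual terms enter the update: all parameter values are placeholders, O(1) prefactors arising from the tensor contractions are absorbed into the coefficients, and a production run would require more careful time stepping than shown here.

```python
import numpy as np

# Illustrative explicit finite-difference integration of the hydrodynamic equations
# on a periodic grid (a sketch only; parameters are placeholders, not the values
# used to compute the phase diagram).

Nx, dx, dt = 128, 1.0, 0.005
lam2     = 1.0                  # lambda^2 (motility-induced diffusion)
nu_phi   = 0.5                  # steric-repulsion correction to the isotropic flux
chi_phi  = 0.5                  # density-dependent correction to the anisotropic flux
kappa    = 0.5 * lam2 + 0.2     # effective elasticity ~ lambda^2/2 + kappa_phi*<phi>
beta     = 1.0
omega    = lam2                 # "aligning torque" coefficient
omega_a  = -0.5                 # anchoring term, negative for tangential anchoring

rng = np.random.default_rng(1)
phi = 1.2 + 0.01 * rng.standard_normal((Nx, Nx))   # mean density above onset (phi_c = 1)
Qxx = 0.01 * rng.standard_normal((Nx, Nx))
Qxy = 0.01 * rng.standard_normal((Nx, Nx))

dxc = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
dyc = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
dxx = lambda f: (np.roll(f, -1, 0) - 2 * f + np.roll(f, 1, 0)) / dx**2
dyy = lambda f: (np.roll(f, -1, 1) - 2 * f + np.roll(f, 1, 1)) / dx**2
lap = lambda f: dxx(f) + dyy(f)
dxy = lambda f: dxc(dyc(f))

for _ in range(2000):
    nu  = lam2 + nu_phi * phi
    chi = lam2 + chi_phi * phi
    Q2  = Qxx**2 + Qxy**2

    # Density: isotropic flux plus anisotropic ("curvature-induced") flux.
    dphi = lap(nu * phi) + dxx(chi * Qxx) - dyy(chi * Qxx) + 2 * dxy(chi * Qxy)

    # Q-tensor: Landau-de Gennes relaxation, elasticity, and interfacial torques.
    local = (phi - 1.0) - beta * Q2
    dQxx = (local * Qxx + kappa * lap(Qxx)
            + 0.5 * omega * (dxx(phi) - dyy(phi))
            + 0.5 * omega_a * (dxc(phi)**2 - dyc(phi)**2))
    dQxy = (local * Qxy + kappa * lap(Qxy)
            + omega * dxy(phi)
            + omega_a * dxc(phi) * dyc(phi))

    phi += dt * dphi
    Qxx += dt * dQxx
    Qxy += dt * dQxy
```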
In summary, our model (and the active model C <cit.>) differs significantly from the standard theory of active nematics <cit.>, since it contains density-dependent corrections and higher order terms. Without such modifications the standard active nematic model is unable to reproduce CTDs.
§.§ From CTDs to FAEs and bands
Encouraged by the promising initial results shown by our hydrodynamic theory, we took advantage of the relative ease with which it can be used to determine the long-term behavior, and generated a (λ, ⟨ϕ⟩) phase diagram [Fig. <ref>(a)].
As can be seen, at low values of λ and ⟨ϕ⟩, CTD formation dominates, while in areas of large λ and ⟨ϕ⟩ stable nematic lanes emerge.
Between these regions lies a band of parameters where the system mainly exhibits FAEs.
To test whether these findings obtained with the hydrodynamic model also hold for our agent-based simulations, we determined the average number of CTDs present at a given time in the agent-based simulation along one-dimensional lines of the (L_p, ⟨ϕ⟩) phase space — one along a constant value of ⟨ϕ⟩ and one along a constant value of L_p.
Reassuringly, the results for the agent-based simulations and hydrodynamic model are in good agreement [Figs. <ref>(c) and (d)].
We further checked the mean number of FAEs present in the agent-based simulations as a function of L_p [Fig. <ref>(e)];
see SI for details.
The observed decline in FAE frequency with increasing L_p is consistent with the observations in the hydrodynamic model, where at high λ no FAEs occur [cf. Fig. <ref>(a)].
Taken together, these results demonstrate that not only do the agent-based and hydrodynamic models share the same collective states, the frequency of these states also shows the same dependence on parameter changes.
The above relationships between model parameters and the occurrence of CTDs or FAEs can be related to the overall dynamic behavior (in short, “activity”) of the system.
For both hydrodynamic and agent-based approaches, three distinct, qualitatively different dynamic states can be distinguished [Fig. <ref>(b)].
The first of these is associated with very strong bending undulations of nematic lanes.
It occurs at low values of L_p/λ or ⟨ϕ⟩ and is characterized by constant rearrangement of lanes [Movies S2, S3, S7, SI, Figs. <ref>(a), (b), (d) and (e)]:
Lanes frequently collide leading to the formation of CTDs. In addition, system-spanning configurations of straight (or only slightly curved) lanes [cf. Figs. <ref>(c) and (f)], which may form randomly, are disrupted by undulations within a fairly short time.
This is consistent with the observation that CTDs are the predominant phenomenon at low values of L_p/λ and ⟨ϕ⟩, respectively [Figs. <ref>(c), (d)].
Notably, FAEs can also be formed in this parameter regime following the emergence of short-lived system-spanning nematic lanes.
The second dynamic state can be found at intermediate values of L_p/λ or ⟨ϕ⟩.
In this regime, bending undulations are fewer and less pronounced, resulting in straight (or only slightly curved) and system-wide lanes that are stable over long periods of time:
Elongated openings often appear in the lateral areas of the lanes, which develop into filamentous arcs
[Movies S4, S8, SI, and Figs. <ref>(c),(f) and middle panel of Fig. <ref>(b)].
This is in accordance with the observation that FAEs are the predominant phenomenon observed at intermediate values of L_p/λ or ⟨ϕ⟩ [Figs. <ref>(a) and (c)-(e)].
The third dynamic state is associated with vanishing bending undulations at high values of L_p/λ or ⟨ϕ⟩. Here, straight and system-spanning configurations are stable and no openings develop in their lateral regions [Movies S5, S9, SI and right panel of Fig. <ref>(b)]. Consequently, neither FAEs nor CTDs are observed [Figs. <ref>(a) and (c)-(e)].
The tendency just discussed, namely that the bending undulations become weaker as L_p/λ or ⟨ϕ⟩ is increased, can be rationalized by the following heuristic arguments.
With increasing L_p/λ the Frank constant <cit.> grows, and the effective elasticity (or collective stiffness of the polymers) yields stronger penalties for orientational distortions.
As a result, the bending instability weakens, as described above.
The hydrodynamic model has allowed us to verify this hypothesis: upon varying the elastic constant κ (independently from other parameters), we observe that weak elasticity favors the formation of CTDs, while a strong one yields stable bands.
As the density ⟨ϕ⟩ is increased (for a given and constant system size), a further effect contributing to higher stability of lanes is that a system-spanning nematic band occupies a growing fraction of space, i.e., the bands become wider while the bulk density remains largely the same [cf. SI].
Since broader bands are less susceptible to a bending instability, an increase of ⟨ϕ⟩, as discussed above, leads to the decay of defect formation.
An interesting aside concerns the variation of ⟨ϕ⟩: for very small densities, close to the onset of order, both models show a drop in the observed CTD number [Fig. <ref>(d)]. This is most likely because there is less mass within the ordered phase, and therefore not enough mass to form the multiple curved bands required for lanes to collide and CTDs to be created.
Overall, the formation of condensed defects and filamentous arc ejections are both strongly linked to the stability of the nematic lanes, i.e., to their propensity to exhibit a bending instability <cit.>, which, in turn, can be externally controlled by tuning either L_p/λ or ⟨ϕ⟩.
§.§ Detailed structure of CTDs and FAEs
To better understand the structure of the CTDs forming in agent-based simulations, we studied the polymer flows through them in detail.
To this end, we tracked the motion of each polymer as it passed through a condensed defect.
This enables us to distinguish the polymer flows from one to another arm of a defect and investigate whether there is a relationship between the lateral position of individual polymers and their eventual direction of turning.
Fig. <ref>(a) illustrates the flux from one arm of a defect (arm 1) into the two other arms (arms 2 and 3) [see Movie S6 SI for a representative flux recorded in an agent-based simulation].
The flux in each defect arm gets strongly compressed laterally in the vicinity of a defect core and then splits almost exactly at the centerline of the lane, while undergoing a sharp change in direction [Fig. <ref>(a)].
Symmetrically the same flux enters the defect from arms 2 and 3, resulting in the nematic flow structure depicted in Fig. <ref>(a) and (c).
This also shows that the flows begin to mix again only at a greater distance from the center of the defect [cf. color mixing in Fig. <ref>(b) and (c)]. Hence, the overall topology often present at the birth of the defect [Fig. <ref>(b) and (e)] is preserved in the flow structure of the fully formed CTD as three barely intermingling nematic flows.
In addition, we investigated whether the velocity of the polymers is affected as they move through a CTD. As can be seen from Fig. <ref>(e), their speed remains almost unchanged and only a slowdown in the per mil range is observed. One can see two insignificant velocity drops corresponding to regions with the maximal density of polymers. Interestingly, in the immediate vicinity of the core of the defect, the particle velocity briefly returns to the average value, corresponding to particles inside the nematic band.
We also studied the temporal evolution of FAEs and their occurrence over time. To this end, we periodically projected the density of a system in a configuration that allows the formation of FAEs onto one-dimensional slides and stacked them to obtain kymographs (see SFig. 5 SI).
These reveal that the detachment of arcs accelerates over time.
Further, they show that in the hydrodynamic model, due to no noise being present, FAE events occur at regular intervals, whereas in the agent-based simulations they form stochastically.
Having established the existence of CTDs and FAEs, and characterized them in our agent-based in-silico experimental system, and having successfully introduced a hydrodynamic theory that faithfully reproduces the results of the simulations as well as providing access to the phase space of the observed pattern, we asked: why are these phenomena observed? What are the underlying mechanisms responsible for their formation?
To answer these questions, we leveraged the ability of the hydrodynamic model to provide access to single terms of its defining equations [Eqs. (<ref>,<ref>)].
This analysis reveals that both the formation of dense defects and the movement of arcs have the same root cause, namely the anisotropic (“curvature-induced”) density flux <cit.>, described by -∂_j(χ Q_ij) in Eq. (<ref>) in the hydrodynamic model.
This can be understood by plotting -∂_j(χ Q_ij) in the region of an FAE or a CTD; see the left and right panels of Fig. <ref>(d), respectively.
As can be seen, on opposite sides of the arcs the amplitudes of the fluxes are distinct. An effective “active force” acting on the concave side is greater than that on the opposite side, which leads to the movement of the bent band (or arc) in the corresponding direction [Fig. <ref>(d), left panel].
When three lanes meet, the same curvature-dependent fluxes concentrate polymers in the core of the resulting defect [Fig. <ref>(d), right panel]. This condensation is eventually balanced by the isotropic part of (<ref>) and particularly by steric repulsion of polymers.
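In practice, the flux fields shown in Fig. <ref>(d) can be evaluated directly from the simulated ϕ and Q fields. A minimal, self-contained sketch (with placeholder coefficients) is:

```python
import numpy as np

def anisotropic_flux(phi, Qxx, Qxy, dx=1.0, lam2=1.0, chi_phi=0.5):
    """Anisotropic density flux J_i = -d_j [ chi(phi) Q_ij ] on a periodic grid.

    This is the term plotted around FAEs and CTDs; lam2 and chi_phi are placeholders.
    """
    def dxc(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    def dyc(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)

    chi = lam2 + chi_phi * phi
    Jx = -(dxc(chi * Qxx) + dyc(chi * Qxy))
    Jy = -(dxc(chi * Qxy) - dyc(chi * Qxx))      # uses Q_yy = -Q_xx
    return Jx, Jy
```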
To test this hypothesis, we set the excluded volume force F (see SI)
to zero in our agent-based simulations.
Observations in this case indicate that the formation of CTDs is reduced and that, when they form, they decay faster.
Thus, we conclude that formation of the dense defects is predominantly determined by the interplay between two counteracting processes: isotropic and anisotropic density fluxes.
In addition to the “emergent” way of obtaining CTDs just studied, in which spontaneously formed bands interact randomly and spontaneously condense into defects at stochastically distributed positions, we have sought a way to overcome this limitation by artificially generating and positioning CTDs.
In contrast to non-phase-separated systems — where such an endeavor would involve the forced separation of a defect pair — the way CTDs form spontaneously [Figs. <ref>(b),(e)] suggests that finding a way to position and form nematic lanes in suitable configurations could trigger the creation of a CTD.
In combination with the observation of polymer fluxes near a defect [Fig. <ref>(h)], we hypothesized that placing active polymer sources in a three-strand configuration should trigger the formation of three lanes that immediately condensate into CTDs.
To test this prediction, we implemented the possibility to add such “active particle throwers” into our agent-based simulations and positioned them as described.
Indeed, we found that this way a CTD can be formed at a predetermined location where it persists for an arbitrary amount of time, cf. Fig. <ref>(h) and movie S10 SI.
This may be of potential application in cases where topological defects and/or high-density regions (in a low density background) need to be created and controlled with high accuracy.
§ DISCUSSION
In summary, we have used a combination of agent-based simulations and hydrodynamic theory to study pattern formation in phase-separated nematic active matter.
Our analysis shows that topological defects and nematic lanes, previously considered as two distinct and separate collective states, coexist and are tightly coupled.
We investigated the structure, formation and decomposition of CTDs in phase-separated systems.
We observed that CTDs appear as characteristic collective excitations in a novel nonequilibrium steady state.
Moreover, the formation process of CTDs constitutes a new hierarchical condensation phenomenon.
Given the previously demonstrated and close connection of our agent-based algorithm to the actin motility-assay, a paradigmatic experimental model system, it is plausible to expect that CTDs will be observed in experimental active matter systems.
Below we discuss these observations step by step.
First of all, we characterized topologically charged structures, such as CTDs and FAEs, observed for the first time in a phase-separated nematic system with self-propulsion.
It is apparent that CTDs differ markedly from defects observed in homogeneous active matter, particularly in the dynamics of their formation and decay and in their spatial structure as well.
To begin with, CTDs concentrate density near their cores and condense nematic fluxes.
This condensation phenomenon is interesting in itself, since the majority of experimental active matter systems show a depletion of particles in -1/2 disclinations, e.g., bacteria embedded in liquid crystals <cit.> and cultures of neural progenitors <cit.>.
Weak density accumulation around the defects has been discussed for slightly inhomogeneous nematics <cit.>;
however, in such systems, the -1/2 defects occur only during the transient and eventually disappear via annihilation with their +1/2 counterparts.
Similar CTDs, among other structures, were observed in parameter sweeps of the phenomenological toy model for mixtures of non-self-propelled microtubules and kinesin motors <cit.>.
However, they were either transient or formed only under very special conditions (elasticity almost zero).
In the latter case, the shape and the mechanism of formation of the defects were clearly different from the CTDs observed here.
In our case CTDs are typically formed by the collision of three curved nematic lanes that condense into a high-density three-armed structure, trapping the previously spatially distributed negative charge [Figs. <ref>(a),(d)].
One might think of comparing condensation to CTDs with the process of motility-induced phase separation (MIPS) <cit.>.
However, the fundamental difference between the two is that CTDs are not associated with particle slowdown or prolonged residence of agents in high-density regions.
In addition, the formation of condensed defects provides a condensation mechanism for anisotropically shaped particles, which is not possible with MIPS <cit.>.
We may also argue that in MIPS the agents themselves condense into high-density clusters, while we observe the condensation of dynamical collective states (nematic lanes) into topological defects.
The mutual orientation of defects is also non-typical: we observe that two CTDs can be connected by a single nematic streamline (a filamentous bundle of polymers) [Figs. <ref>(a), <ref>(f)], whereas in non-phase-separated active matter negative half-integer disclinations usually point towards a corresponding defect with the opposite charge +1/2 [Fig. <ref>(g)] <cit.>.
The dynamic processes of defect decay in phase-separated and homogeneous active nematics are also clearly distinct.
In homogeneous systems, pairs of defects with opposite charges annihilate each other <cit.>. In contrast, we find that CTDs do not annihilate with other defects, but disintegrate due to the undulating dynamics of the lanes that connect to the defect arms (Fig. <ref>(g) and Movie S3 SI).
This means that the destruction of a negatively charged defect does not depend on the mobility or dynamics of a positively charged pair, rendering this process potentially easier to control.
In cases where all three lanes that connect to the respective arms have the same bending orientation (all curved either clockwise or anti-clockwise with respect to the center), this decay takes place via an interesting process in which defects rotate before they dissolve [Fig. <ref>(g)].
Thus, CTDs not only emerge from “collisions” of nematic lanes, but also are connected by, and disassemble into them.
Taken together, this leads to one of the main conclusions of our work, namely that the presence of CTDs constitutes
a novel nonequilibrium steady state which corresponds to a dynamic equilibrium between dense nematic lanes and condensed topological defects coexisting in a diluted background of disordered filaments.
This is reminiscent of other recent findings in active matter, in which a dynamical coexistence between patterns of different symmetry (nematic and polar) was observed <cit.>. During the persistent formation and subsequent decay of CTDs, those defects act as temporal capacitors of negative topological charge (i.e., the curvature on the boundaries of lanes gets temporarily trapped in a very small region of space) which eventually gets released again.
It is well worth reiterating that this is a continuous cyclic phenomenon, not a transient one (unlike the defect formation observed in Ref. <cit.>).
The most important factors that allow this nonequilibrium steady state to occur are probably the following.
First, since CTDs emerge from interaction of curved nematic lanes, a lateral undulation instability of nematic lanes — as exhibited by our agent-based model — is a basic prerequisite for their formation.
Another factor that is likely to favor the formation of CTDs is the nature of the interaction between the polymers (agents), which exhibit only weak mutual alignment and weak steric exclusion.
The latter, in particular, is likely to be a critical factor necessary for the high compression of polymer density during CTD formation.
Starting from a rigorously derived hydrodynamic model for self-propelled particles, we have generalized it to include higher-order phenomenological corrections.
The resulting equations are reminiscent of a conceptual active model C <cit.>, but they include all terms arising from particle self-propulsion, which is an important additional feature here.
In particular, the hydrodynamic model presented here has many fewer degrees of freedom than the toy model presented in Ref. <cit.>, since the coefficients in front of all “standard” terms have a fixed relation among them.
This hydrodynamic theory provides additional insight into the physics of CTDs.
For example, it shows that density gradients play a crucial role through their coupling with the orientation field.
In particular, we consider density-dependent corrections of these coupling terms (controlled by the parameters χ_ϕ and ω^a), which typically disappear due to the linearization of terms around the mean value of density in the majority of hydrodynamic theories.
We want to stress again that these additional terms, which are missing in standard theories of active nematics, are crucial for a proper description of the system, because without them CTDs are no longer observed.
We argue that strong phase separation (and the resulting large density gradients) inevitably amplifies the effect of higher-order coupling terms between the density and the orientation field on the dynamics.
For example, the bilinear anchoring ω^a(∂_iϕ)(∂_jϕ) causes the nematic lines to closely follow the contour of the density field constituting a defect (SFig. 7 SI) and therefore can stabilize defects.
This is in line with the observation that a decrease in ω^a leads to a decrease in the number of defects (a similar conclusion can be inferred from <cit.>). However, in our model CTDs can still form even if ω^a=0, provided χ_ϕ≠0 and κ_ϕ≠0.
We firmly believe that the phenomena we found can also be observed in experiments, even though our study is purely theoretical.
The weakly aligning, self-propelled polymer simulation approach on which we base our study has previously shown not only excellent agreement with experiments, but has also predicted novel states that were later found in experiments <cit.>; thus, as elaborated in the introduction, it can be viewed as a computational version of an experimental system.
In light of this, the most promising experimental model system for observing the new topological defects we predict is the actomyosin motility assay <cit.>.
This paradigmatic system not only satisfies the requirement of weakly interacting agents <cit.>, but also offers the advantage of high particle numbers.
Previously, not only polar waves <cit.> but also nematic lanes <cit.> have been observed.
This has been achieved by adding depletion agents that enable one to tune the strength as well as the symmetry of the interaction between the actin filaments.
It is conceivable that similar or other changes in the design of the actin motility assay could be used to produce a weak, purely nematic interaction like the one used in our agent-based simulations.
For example, other depletion agents could be used and/or the properties of the surface to which the driving molecular motors are attached could be changed.
Recently, the latter was indeed shown to have a direct impact on polymer interactions <cit.>.
Alternatively, CTDs could potentially be observed in other types of motility assays <cit.>.
Another intriguing possibility for observing the predicted CTDs is to directly produce a configuration of nematic lanes favoring the formation of CTDs by suitably structuring the surface used in the motility assay <cit.>.
The deep understanding we gained about the formation of CTDs owing to the combination of agent-based simulation and hydrodynamic approach allowed us to find a way to generate them artificially (Fig. <ref>(h) and movie S10 SI). Given the availability of directed particle sources in an experimental system, the position of defects (and therefore the location of a domain of extremely high density) could be controlled with pin-point accuracy.
This provides a new tool for cases where -1/2 defects and/or small regions of high particle density (in an overall dilute system) are needed at specific positions, e.g., to trigger specific processes such as cell death <cit.> at definable points.
Given the strong and controlled nature of the focusing of the fluxes in nematic lanes, this method could be termed “active matter optics”.
Another important insight from the broader perspective of the active matter field is that
phase-separated active matter exhibits a hierarchy of emergent collective states.
Interaction between dense nematic lanes, considered as “first-order” collective states in active nematics, can lead to the formation of “second-order” collective states, here half-integer topological defects with an even higher density, a phenomenon one may call “hierarchical, alignment-induced phase separation”.
It is reasonable to assume that similar effects may lead to new phenomena in other active systems with different symmetry, e.g., polar symmetry with polar waves as first-order collective states <cit.>.
Another class of systems in which higher-order collective states might emerge are active systems that are subject to external gradients <cit.>
or signalling interactions between the agents <cit.>.
A promising extension of our present investigations is active foams.
In this state of active matter, which has recently received increasing attention <cit.>, dense ordered bands assemble into actively reforming cellular networks.
Indeed, in preliminary simulations of the hydrodynamic theory, we have identified parameter regimes in our model where we observe active foams: CTDs are more frequent, interconnected, and persist for longer times.
Thus, the formation of active foams in active nematics seems very plausible, but a thorough investigation of the entire phase space in the agent-based model is computationally demanding and will be reserved for a future study.
§ AUTHOR CONTRIBUTIONS
T.K., I.M., and E.F. designed the research, performed research, analyzed data, and wrote the paper.
§ CONFLICTS OF INTEREST
There are no conflicts to declare.
§ ACKNOWLEDGEMENTS
We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Excellence Cluster ORIGINS under
Germany's
Excellence Strategy (EXC-2094-390783311) and through Project-ID 201269156 -
Collaborative Research Center (SFB) 1032 - Project B2.
IM acknowledges European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant Agreement No. 754388 (LMU Research Fellows) and from LMUexcellent, funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the German Federal Government and the Länder.
§ APPENDIX
§.§ Agent-based simulation method
We now describe our agent-based simulation model.
Please also refer to the SI and the Supplemental Materials of Refs. <cit.> for more details.
In our systems we simulate M polymers, each of length L.
Orientational diffusion causes the tip of each polymer to perform a persistent random walk. Upon collision with another polymer, local interaction causes the tip to gradually align with its direction.
Attached to the polymer tips are tails that simply follow the path outlined by the tip.
This dynamics mimics the behavior of actin filaments in actomyosin motility assays <cit.>, in which polymers move in a snake-like fashion over a lawn of motor proteins and motion orthogonal to the contour is suppressed <cit.>.
Here we use purely nematic interactions between polymers which are primarily tuned by the nematic alignment amplitude α_n that allows for a continuous variation of the rate of alignment.
§.§ Parameters
If not stated otherwise, we used the following model parameters: discretization N = 5, polymer aspect ratio L/d = 21, nematic alignment strength α_n = 0.126≈7.2^∘ and a periodic simulation box of length L_box = 162.5L.
The velocity v^(n) of each polymer is randomly drawn from the interval [0.75,1.]v_0.
We started simulations with random initial conditions, i.e. randomly oriented polymers were placed at random positions in the simulation box.
Time is measured in units of L/v_0, where v_0 is the maximal velocity of a free polymer.
Density in Figs. <ref>(a)-(c) and <ref>(g)-(h) is time-averaged for better visibility, with averaging times of 159 for Fig. <ref>(a) and 16 for Figs. <ref>(b)-(c) and <ref>(g)-(h).
Note that the system shown in Fig. <ref>(h) does not have the usual periodic boundary conditions. Rather, the particles crossing the boundaries are moved either to a random position along a boundary with random orientation or to one of the particle sources. The ratio of these two possibilities is chosen so that the particle flux from the sources is kept constant.
§.§ Continuous theory
We numerically investigate Eqs. (<ref>,<ref>) under periodic boundary conditions by using finite differences of second order <cit.> on a 300×300 grid with the spatial resolution δ x = 0.5.
The time integration was performed via a second-order predictor-corrector scheme with time step dt = 10^-2.
We use the parameter values β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1.
Unless explicitly stated, we initialize simulations from an isotropic uniform state
with a small amount of noise. To make time and space dimensionless we rescale them by setting the rotational diffusion coefficient and μ_ρ equal to unity.
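For concreteness, the following minimal Python sketch illustrates the type of scheme described above: second-order central finite differences on a periodic grid combined with a second-order predictor-corrector (Heun) step. The helper names and the generic rhs interface are our own assumptions and do not reproduce the production code.

import numpy as np

dx, dt = 0.5, 1e-2

def ddx(f):
    # second-order central difference along axis 0, periodic boundaries
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def ddy(f):
    # second-order central difference along axis 1, periodic boundaries
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

def laplace(f):
    # five-point Laplacian on the periodic grid
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
            np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

def heun_step(fields, rhs):
    """One second-order predictor-corrector step for a dict of 2D fields."""
    k1 = rhs(fields)                                     # slope at current time
    pred = {k: fields[k] + dt * k1[k] for k in fields}   # Euler predictor
    k2 = rhs(pred)                                       # slope at predicted state
    return {k: fields[k] + 0.5 * dt * (k1[k] + k2[k]) for k in fields}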
[De Gennes and Prost(1993)]de1993physics
Pierre-Gilles De Gennes and Jacques Prost.
The physics of liquid crystals.
Number 83. Oxford university press, 1993.
[Marchetti et al.(2013)Marchetti, Joanny, Ramaswamy, Liverpool, Prost,
Rao, and Simha]Marchetti2013
M Cristina Marchetti, Jean-François Joanny, Sriram Ramaswamy,
Tanniemola B Liverpool, Jacques Prost, Madan Rao, and R Aditi Simha.
Hydrodynamics of soft active matter.
Rev. Mod. Phys., 850 (3):0 1143,
10.1103/RevModPhys.85.1143.
[Doostmohammadi et al.(2018)Doostmohammadi, Ignés-Mullol, Yeomans,
and Sagués]Doostmohammadi2018
Amin Doostmohammadi, Jordi Ignés-Mullol, Julia M Yeomans, and Francesc
Sagués.
Active nematics.
Nat. Commun., 90 (1):0 3246,
10.1038/s41467-018-05666-8.
[Alert et al.(2022)Alert, Casademunt, and Joanny]alert2021active
Ricard Alert, Jaume Casademunt, and Jean-François Joanny.
Active turbulence.
Annu. Rev. Condens. Matter Phys., 130 (1):0
143–170,
10.1146/annurev-conmatphys-082321-035957.
[Sanchez et al.(2012)Sanchez, Chen, DeCamp, Heymann, and
Dogic]sanchez_spontaneous_2012
Tim Sanchez, Daniel T. N. Chen, Stephen J. DeCamp, Michael Heymann, and
Zvonimir Dogic.
Spontaneous motion in hierarchically assembled active matter.
Nature, 4910 (7424):0 431–434,
10.1038/nature11591.
[DeCamp et al.(2015)DeCamp, Redner, Baskaran, Hagan, and
Dogic]Decamp2015
Stephen J DeCamp, Gabriel S Redner, Aparna Baskaran, Michael F Hagan, and
Zvonimir Dogic.
Orientational order of motile defects in active nematics.
Nat. Mater., 140 (11):0 1110–1115,
https://doi.org/10.1038/nmat4387.
[Giomi et al.(2013)Giomi, Bowick, Ma, and Marchetti]giomi_defect_2013
Luca Giomi, Mark J. Bowick, Xu Ma, and M. Cristina Marchetti.
Defect Annihilation and Proliferation in Active Nematics.
Phys. Rev. Lett., 1100 (22):0 228101,
10.1103/PhysRevLett.110.228101.
[Shankar et al.(2018)Shankar, Ramaswamy, Marchetti, and
Bowick]shankar_defect_2018
Suraj Shankar, Sriram Ramaswamy, M. Cristina Marchetti, and Mark J. Bowick.
Defect Unbinding in Active Nematics.
Phys. Rev. Lett., 1210 (10):0 108002,
10.1103/PhysRevLett.121.108002.
[Thampi et al.(2014)Thampi, Golestanian, and
Yeomans]thampi_instabilities_2014
Sumesh P. Thampi, Ramin Golestanian, and Julia M. Yeomans.
Instabilities and topological defects in active nematics.
Europhys Lett., 1050 (1):0 18001,
10.1209/0295-5075/105/18001.
[Giomi et al.(2014)Giomi, Bowick, Mishra, Sknepnek, and
Cristina Marchetti]Giomi2014
Luca Giomi, Mark J Bowick, Prashant Mishra, Rastko Sknepnek, and
M Cristina Marchetti.
Defect dynamics in active nematics.
Philos. Trans. R. Soc. A, 3720 (2029):0
20130365,
https://doi.org/10.1098/rsta.2013.0365.
[Putzig et al.(2016)Putzig, Redner, Baskaran, and
Baskaran]putzig_instabilities_2016
Elias Putzig, Gabriel S. Redner, Arvind Baskaran, and Aparna Baskaran.
Instabilities, defects, and defect ordering in an overdamped active
nematic.
Soft Matter, 120 (17):0 3854–3859,
10.1039/C6SM00268D.
[Maryshev et al.(2019)Maryshev, Goryachev, Marenduzzo, and
Morozov]Maryshev2019Dry
Ivan Maryshev, Andrew B Goryachev, Davide Marenduzzo, and Alexander Morozov.
Dry active turbulence in a model for microtubule–motor mixtures.
Soft Matter, 150 (30):0 6038–6043,
10.1039/c9sm00558g.
[Schaller et al.(2010)Schaller, Weber, Semmrich, Frey, and
Bausch]schaller_polar_2010
Volker Schaller, Christoph Weber, Christine Semmrich, Erwin Frey, and
Andreas R. Bausch.
Polar patterns of driven filaments.
Nature, 4670 (7311):0 73–77,
10.1038/nature09312.
[Butt et al.(2010)Butt, Mufti, Humayun, Rosenthal, Khan, Khan, and
Molloy]butt_myosin_2010
Tariq Butt, Tabish Mufti, Ahmad Humayun, Peter B. Rosenthal, Sohaib Khan,
Shahid Khan, and Justin E. Molloy.
Myosin Motors Drive Long Range Alignment of Actin Filaments.
J. Biol. Chem., 2850 (7):0 4964–4974,
10.1074/jbc.M109.044792.
[Grégoire and Chaté(2004)]gregoire_onset_2004
Guillaume Grégoire and Hugues Chaté.
Onset of Collective and Cohesive Motion.
Phys. Rev. Lett., 920 (2):0 025702,
10.1103/PhysRevLett.92.025702.
[Solon et al.(2015)Solon, Chaté, and Tailleur]solon_phase_2015
Alexandre P. Solon, Hugues Chaté, and Julien Tailleur.
From Phase to Microphase Separation in Flocking Models:
The Essential Role of Nonequilibrium Fluctuations.
Phys. Rev. Lett., 114:0 068101,
10.1103/PhysRevLett.114.068101.
[Huber et al.(2021)Huber, Krüger, and Frey]huber_microphase_2021
Lorenz Huber, Timo Krüger, and Erwin Frey.
Microphase separation in active filament systems maintained by cyclic
dynamics of cluster size and order.
Phys. Rev. Res., 30 (1):0 013280,
10.1103/PhysRevResearch.3.013280.
[Huber et al.(2018)Huber, Suzuki, Krüger, Frey, and
Bausch]Huber2018
L Huber, R Suzuki, T Krüger, E Frey, and AR Bausch.
Emergence of coexisting ordered states in active matter systems.
Science, 3610 (6399):0 255–258,
DOI: 10.1126/science.aao5434.
[Denk and Frey(2020)]denk_pattern-induced_2020-1
Jonas Denk and Erwin Frey.
Pattern-induced local symmetry breaking in active-matter systems.
Proc. Natl. Acad. Sci. U.S.A., 1170 (50):0
31623–31630,
10.1073/pnas.2010302117.
[Ginelli et al.(2010)Ginelli, Peruani, Bär, and
Chaté]ginelli_large-scale_2010
Francesco Ginelli, Fernando Peruani, Markus Bär, and Hugues Chaté.
Large-scale collective properties of self-propelled rods.
Phys. Rev. Lett., 1040 (18):0 184502,
10.1103/PhysRevLett.104.184502.
[Peshkov et al.(2012)Peshkov, Aranson, Bertin, Chaté, and
Ginelli]Peshkov2012
Anton Peshkov, Igor S Aranson, Eric Bertin, Hugues Chaté, and Francesco
Ginelli.
Nonlinear field equations for aligning self-propelled rods.
Phys. Rev. Lett., 1090 (26):0 268701,
10.1103/PhysRevLett.109.268701.
[Ngo et al.(2014)Ngo, Peshkov, Aranson, Bertin, Ginelli, and
Chaté]ngo_large-scale_2014
Sandrine Ngo, Anton Peshkov, Igor S. Aranson, Eric Bertin, Francesco Ginelli,
and Hugues Chaté.
Large-Scale Chaos and Fluctuations in Active Nematics.
Phys. Rev. Lett., 113:0 038302,
10.1103/PhysRevLett.113.038302.
[Großmann et al.(2016)Großmann, Peruani, and
Bär]grosmann_mesoscale_2016
Robert Großmann, Fernando Peruani, and Markus Bär.
Mesoscale pattern formation of self-propelled rods with velocity
reversal.
Phys. Rev. E, 940 (5):0 050602,
10.1103/PhysRevE.94.050602.
[Maryshev et al.(2020)Maryshev, Morozov, Goryachev, and
Marenduzzo]Maryshev2020
Ivan Maryshev, Alexander Morozov, Andrew B Goryachev, and Davide Marenduzzo.
Pattern formation in active model c with anchoring: bands, aster
networks, and foams.
Soft Matter, 160 (38):0 8775–8781,
10.1039/d0sm00927j.
[Cai et al.(2019)Cai, Chaté, Ma, and Shi]Cai2019
Li-Bing Cai, Hugues Chaté, Yu-Qiang Ma, and Xia-Qing Shi.
Dynamical subclasses of dry active nematics.
Phys. Rev. E, 99:0 010601,
10.1103/PhysRevE.99.010601.
[Großmann et al.(2020)Großmann, Aranson, and
Peruani]grosmann_particle-field_2020
Robert Großmann, Igor S. Aranson, and Fernando Peruani.
A particle-field approach bridges phase separation and collective
motion in active matter.
Nat. Commun., 110 (1):0 5365,
10.1038/s41467-020-18978-5.
[Chaté(2020)]chate_dry_2020
Hugues Chaté.
Dry aligning dilute active matter.
Annu. Rev. Condens. Matter Phys., 110 (1),
10.1146/annurev-conmatphys-031119-050752.
[Mishra et al.(2014)Mishra, Puri, and Ramaswamy]mishra2014aspects
Shradha Mishra, Sanjay Puri, and Sriram Ramaswamy.
Aspects of the density field in an active nematic.
Philos. Trans. R. Soc. A, 3720 (2029):0
20130364,
10.1098/rsta.2013.0364.
[Bertin et al.(2013)Bertin, Chaté, Ginelli, Mishra, Peshkov, and
Ramaswamy]bertin_mesoscopic_2013
Eric Bertin, Hugues Chaté, Francesco Ginelli, Shradha Mishra, Anton Peshkov,
and Sriram Ramaswamy.
Mesoscopic theory for fluctuating active nematics.
New J. Phys., 150 (8):0 085032,
10.1088/1367-2630/15/8/085032.
[Blow et al.(2014)Blow, Thampi, and Yeomans]Blow2014
Matthew L Blow, Sumesh P Thampi, and Julia M Yeomans.
Biphasic, lyotropic, active nematics.
Phys. Rev. Lett., 1130 (24):0 248303,
0.1103/PhysRevLett.113.248303.
[Hohenberg and Halperin(1977)]HohenbergHalperin
Pierre C Hohenberg and Bertrand I Halperin.
Theory of dynamic critical phenomena.
Rev. Mod. Phys., 490 (3):0 435,
https://doi.org/10.1103/RevModPhys.49.435.
[Li and Cates(2021)]li_hierarchical_2021
Yuting I. Li and Michael E. Cates.
Hierarchical microphase separation in non-conserved active mixtures.
Eur. Phys. J. E, 440 (9):0 119,
10.1140/epje/s10189-021-00113-x.
[Baskaran and Marchetti(2012)]baskaran_self-regulation_2012
A. Baskaran and M. C. Marchetti.
Self-regulation in self-propelled nematic fluids.
Eur. Phys. J. E, 350 (9),
10.1140/epje/i2012-12095-8.
[Ahmadi et al.(2006)Ahmadi, Marchetti, and
Liverpool]ahmadi2006hydrodynamics
Aphrodite Ahmadi, M Cristina Marchetti, and Tanniemola B Liverpool.
Hydrodynamics of isotropic and liquid crystalline active polymer
solutions.
Phys. Rev. E, 740 (6):0 061913,
10.1103/PhysRevE.74.061913.
[Baskaran and Marchetti(2010)]baskaran2010nonequilibrium
Aparna Baskaran and M Cristina Marchetti.
Nonequilibrium statistical mechanics of self-propelled hard rods.
J. Stat. Mech. Theory Exp., 20100 (04):0
P04019,
10.1088/1742-5468/2010/04/P04019.
[Maryshev et al.(2018)Maryshev, Marenduzzo, Goryachev, and
Morozov]Maryshev2018
Ivan Maryshev, Davide Marenduzzo, Andrew B Goryachev, and Alexander Morozov.
Kinetic theory of pattern formation in mixtures of microtubules and
molecular motors.
Phys. Rev. E, 970 (2):0 22412,
10.1103/PhysRevE.97.022412.
[Cates(2019)]cates2019active
Michael E Cates.
Active field theories.
arXiv preprint,
10.48550/arXiv.1904.01330.
[Shaebani et al.(2020)Shaebani, Wysocki, Winkler, Gompper, and
Rieger]shaebani2020computational
M Reza Shaebani, Adam Wysocki, Roland G Winkler, Gerhard Gompper, and Heiko
Rieger.
Computational models for active matter.
Nature Reviews Physics, 20 (4):0 181–199,
https://doi.org/10.1038/s42254-020-0152-1.
[Sulaiman et al.(2006)Sulaiman, Marenduzzo, and
Yeomans]sulaiman2006lattice
N Sulaiman, D Marenduzzo, and JM Yeomans.
Lattice boltzmann algorithm to simulate isotropic-nematic emulsions.
Phys. Rev. E, 740 (4):0 041708,
https://doi.org/10.1103/PhysRevE.74.041708.
[Araki and Tanaka(2004)]araki2004nematohydrodynamic
Takeaki Araki and Hajime Tanaka.
Nematohydrodynamic effects on the phase separation of a symmetric
mixture of an isotropic liquid and a liquid crystal.
Phys. Rev. Lett., 930 (1):0 015702,
https://doi.org/10.1103/PhysRevLett.93.015702.
[Mishra et al.(2010)Mishra, Simha, and Ramaswamy]mishra2010dynamic
Shradha Mishra, R Aditi Simha, and Sriram Ramaswamy.
A dynamic renormalization group study of active nematics.
J. Stat. Mech. Theory Exp., 20100 (02):0
P02003,
10.1088/1742-5468/2010/02/P02003.
[Putzig and Baskaran(2014)]Putzig2014
Elias Putzig and Aparna Baskaran.
Phase separation and emergent structures in an active nematic fluid.
Phys. Rev. E, 900 (4):0 042304,
https://doi.org/10.1103/PhysRevE.90.042304.
[Sato and Teramoto(1996)]sato1996frank
Takahiro Sato and Akio Teramoto.
On the frank elastic constants of lyotropic polymer liquid crystals.
Macromolecules, 290 (11):0 4107–4114,
https://doi.org/10.1021/ma950986a.
[Ramaswamy et al.(2003)Ramaswamy, Simha, and
Toner]ramaswamy2003active
S Ramaswamy, R. Aditi Simha, and J Toner.
Active nematics on a substrate: Giant number fluctuations and
long-time tails.
Europhys Lett., 620 (2):0 196–202,
10.1209/epl/i2003-00346-7.
[Simha and Ramaswamy(2002)]simha2002hydrodynamic
R Aditi Simha and Sriram Ramaswamy.
Hydrodynamic fluctuations and instabilities in ordered suspensions of
self-propelled particles.
Phys. Rev. Lett., 890 (5):0 058101,
https://doi.org/10.1103/PhysRevLett.89.058101.
[Narayan et al.(2007)Narayan, Ramaswamy, and
Menon]narayan_long-lived_2007
V. Narayan, S. Ramaswamy, and N. Menon.
Long-Lived Giant Number Fluctuations in a Swarming
Granular Nematic.
Science, 3170 (5834):0 105–108,
10.1126/science.1140414.
[Genkin et al.(2017)Genkin, Sokolov, Lavrentovich, and
Aranson]genkin2017topological
Mikhail M Genkin, Andrey Sokolov, Oleg D Lavrentovich, and Igor S Aranson.
Topological defects in a living nematic ensnare swimming bacteria.
Phys. Rev. X, 70 (1):0 011029,
https://doi.org/10.1103/PhysRevX.7.011029.
[Kawaguchi et al.(2017)Kawaguchi, Kageyama, and
Sano]kawaguchi_topological_2017-1
Kyogo Kawaguchi, Ryoichiro Kageyama, and Masaki Sano.
Topological defects control collective dynamics in neural progenitor
cell cultures.
Nature, 5450 (7654):0 327–331,
10.1038/nature22321.
[Cates and Tailleur(2015)]cates2015motility
Michael E Cates and Julien Tailleur.
Motility-induced phase separation.
Annu. Rev. Condens. Matter Phys., 60 (1):0
219–244,
https://doi.org/10.1146/annurev-conmatphys-031214-014710.
[Van Der Linden et al.(2019)Van Der Linden, Alexander, Aarts, and
Dauchot]van2019interrupted
Marjolein N Van Der Linden, Lachlan C Alexander, Dirk GAL Aarts, and Olivier
Dauchot.
Interrupted motility induced phase separation in aligning active
colloids.
Phys. Rev. Lett., 1230 (9):0 098001,
https://doi.org/10.1103/PhysRevLett.123.098001.
[Shankar and Marchetti(2019)]shankar2019hydrodynamics
Suraj Shankar and M Cristina Marchetti.
Hydrodynamics of active defects: From order to chaos to defect
ordering.
Phys. Rev. X, 90 (4):0 041047,
https://doi.org/10.1103/PhysRevX.9.041047.
[Cortese et al.(2018)Cortese, Eggers, and Liverpool]cortese2018pair
Dario Cortese, Jens Eggers, and Tanniemola B Liverpool.
Pair creation, motion, and annihilation of topological defects in
two-dimensional nematic liquid crystals.
Phys. Rev. E, 970 (2):0 022704,
https://doi.org/10.1103/PhysRevE.97.022704.
[Hussain et al.(2013)Hussain, Molloy, and
Khan]hussain_spatiotemporal_2013
Saman Hussain, Justin E. Molloy, and Shahid M. Khan.
Spatiotemporal Dynamics of Actomyosin Networks.
Biophys. J., 1050 (6):0 1456–1465,
10.1016/j.bpj.2013.08.001.
[Suzuki and Bausch(2017)]suzuki_emergence_2017
Ryo Suzuki and Andreas R. Bausch.
The emergence and transient behaviour of collective motion in active
filament systems.
Nat. Commun., 80 (1):0 41,
10.1038/s41467-017-00035-3.
[Suzuki et al.(2015)Suzuki, Weber, Frey, and Bausch]suzuki_polar_2015
Ryo Suzuki, Christoph A. Weber, Erwin Frey, and Andreas R. Bausch.
Polar pattern formation in driven filament systems requires
non-binary particle collisions.
Nat. Phys., 110 (10):0 839–843,
10.1038/nphys3423.
[Sciortino and Bausch(2021)]sciortino_pattern_2021
Alfredo Sciortino and Andreas R. Bausch.
Pattern formation and polarity sorting of driven actin filaments on
lipid membranes.
Proc. Natl. Acad. Sci. U.S.A., 1180 (6):0
e2017047118,
10.1073/pnas.2017047118.
[Sumino et al.(2012)Sumino, Nagai, Shitaka, Tanaka, Yoshikawa, Chaté,
and Oiwa]sumino_large-scale_2012-1
Yutaka Sumino, Ken H. Nagai, Yuji Shitaka, Dan Tanaka, Kenichi Yoshikawa,
Hugues Chaté, and Kazuhiro Oiwa.
Large-scale vortex lattice emerging from collectively moving
microtubules.
Nature, 4830 (7390):0 448–452,
10.1038/nature10874.
[Memarian et al.(2021)Memarian, Lopes, Schwarzendahl, Athani,
Sarpangala, Gopinathan, Beller, Dasbiswas, and Hirst]memarian_active_2021
Fereshteh L. Memarian, Joseph D. Lopes, Fabian Jan Schwarzendahl,
Madhuvanthi Guruprasad Athani, Niranjan Sarpangala, Ajay Gopinathan,
Daniel A. Beller, Kinjal Dasbiswas, and Linda S. Hirst.
Active nematic order and dynamic lane formation of microtubules
driven by membrane-bound diffusing motors.
Proc. Natl. Acad. Sci. U.S.A., 1180 (52):0
e2117107118,
10.1073/pnas.2117107118.
[Turiv et al.(2020)Turiv, Koizumi, Thijssen, Genkin, Yu, Peng, Wei,
Yeomans, Aranson, Doostmohammadi, and Lavrentovich]turiv_polar_2020
Taras Turiv, Runa Koizumi, Kristian Thijssen, Mikhail M. Genkin, Hao Yu,
Chenhui Peng, Qi-Huo Wei, Julia M. Yeomans, Igor S. Aranson, Amin
Doostmohammadi, and Oleg D. Lavrentovich.
Polar jets of swimming bacteria condensed by a patterned liquid
crystal.
Nat. Phys., 160 (4):0 481–487,
10.1038/s41567-020-0793-0.
[Sciortino et al.(2022)Sciortino, Neumann, Krüger, Maryshev,
Teshima, Wolfrum, Frey, and Bausch]sciortino_defects_2022
Alfredo Sciortino, Lukas J Neumann, Timo Krüger, Ivan Maryshev, Tetsuhiko F
Teshima, Bernhard Wolfrum, Erwin Frey, and Andreas R Bausch.
Polarity and chirality control of an active fluid by passive nematic
defects.
Nat. Mater.,
10.1038/s41563-022-01432-w.
[Saw et al.(2017)Saw, Doostmohammadi, Nier, Kocgozlu, Thampi, Toyama,
Marcq, Lim, Yeomans, and Ladoux]saw_topological_2017-1
Thuan Beng Saw, Amin Doostmohammadi, Vincent Nier, Leyla Kocgozlu, Sumesh
Thampi, Yusuke Toyama, Philippe Marcq, Chwee Teck Lim, Julia M. Yeomans, and
Benoit Ladoux.
Topological defects in epithelia govern cell death and extrusion.
Nature, 5440 (7649):0 212–216,
10.1038/nature21718.
[Popescu et al.(2018)Popescu, Uspal, Bechinger, and
Fischer]popescu_chemotaxis_2018
Mihail N. Popescu, William E. Uspal, Clemens Bechinger, and Peer Fischer.
Chemotaxis of Active Janus Nanoparticles.
Nano Lett., 180 (9):0 5345–5349,
10.1021/acs.nanolett.8b02572.
[Lavergne et al.(2019)Lavergne, Wendehenne, Bäuerle, and
Bechinger]lavergne_group_2019
François A Lavergne, Hugo Wendehenne, Tobias Bäuerle, and Clemens
Bechinger.
Group formation and cohesion of active particles with visual
perception-dependent motility.
Science, 3640 (6435):0 70–74,
10.1126/science.aau5347.
[Ziepke et al.(2022)Ziepke, Maryshev, Aranson, and
Frey]alex_preprint_2022
Alexander Ziepke, Ivan Maryshev, Igor S Aranson, and Erwin Frey.
Multi-scale organization in communicating active matter.
Nat. Commun., 13,
10.1038/s41467-022-34484-2.
[Nagai et al.(2015)Nagai, Sumino, Montagne, Aranson, and
Chaté]nagai_collective_2015-1
Ken H. Nagai, Yutaka Sumino, Raul Montagne, Igor S. Aranson, and Hugues
Chaté.
Collective Motion of Self-Propelled Particles with
Memory.
Phys. Rev. Lett., 1140 (16):0 168001,
10.1103/PhysRevLett.114.168001.
[Ventejou et al.(2021)Ventejou, Chaté, Montagne, and
Shi]ventejou2021susceptibility
Bruno Ventejou, Hugues Chaté, Raul Montagne, and Xia-qing Shi.
Susceptibility of orientationally ordered active matter to chirality
disorder.
Phys. Rev. Lett., 1270 (23):0 238001,
https://doi.org/10.1103/PhysRevLett.127.238001.
[Lemma et al.(2022)Lemma, Mitchell, Subramanian, Needleman, and
Dogic]lemma2022active
Bezia Lemma, Noah P Mitchell, Radhika Subramanian, Daniel J Needleman, and
Zvonimir Dogic.
Active microphase separation in mixtures of microtubules and
tip-accumulating molecular motors.
Phys. Rev. X, 120 (3):0 031006,
https://doi.org/10.1103/PhysRevX.12.031006.
[Abramowitz and Stegun(1964)]AbramowitzStegun
Milton Abramowitz and Irene A. Stegun.
Handbook of Mathematical Functions with Formulas, Graphs, and
Mathematical Tables.
Dover, New York, 1964.
[Bertin et al.(2006)Bertin, Droz, and
Grégoire]bertin_boltzmann_2006
Eric Bertin, Michel Droz, and Guillaume Grégoire.
Boltzmann and hydrodynamic description for self-propelled particles.
Phys. Rev. E, 74:0 022101,
10.1103/PhysRevE.74.022101.
[Bertin et al.(2009)Bertin, Droz, and Grégoire]bertin_2009
Eric Bertin, Michel Droz, and Guillaume Grégoire.
Hydrodynamic equations for self-propelled particles: microscopic
derivation and stability analysis.
J. Phys. A Math. Theor., 420 (44):0 445001,
10.1088/1751-8113/42/44/445001.
[Peshkov et al.(2014)Peshkov, Bertin, Ginelli, and
Chaté]peshkov_boltzmann-ginzburg-landau_2014
A. Peshkov, E. Bertin, F. Ginelli, and H. Chaté.
Boltzmann-Ginzburg-Landau approach for continuous
descriptions of generic Vicsek-like models.
Eur. Phys. J.: Spec. Top., 2230 (7):0
1315–1344,
10.1140/epjst/e2014-02193-y.
[Ngo et al.(2012)Ngo, Ginelli, and Chaté]ngo_competing_2012-2
Sandrine Ngo, Francesco Ginelli, and Hugues Chaté.
Competing ferromagnetic and nematic alignment in self-propelled polar
particles.
Phys. Rev. E, 860 (5):0 050101,
10.1103/PhysRevE.86.050101.
§ SUPPLEMENTARY INFORMATION
§ WASP SIMULATION METHOD
In this section we provide a brief summary of the agent-based simulations.
The focus will be on the aspects most relevant for the current study.
For a detailed description of the WASP simulation setup, please refer to the supplemental materials of Refs. <cit.>.
In the agent-based simulations, we consider M polymers moving on a flat substrate (in two spatial dimensions).
Each polymer n consists of N spherical joints j located at positions 𝐫_j^(n) (with j ∈ { 0, 1, …, N - 1 }, where the polymer tip is denoted by j = 0).
The direction of a polymer's tip is denoted by 𝐮_0^(n) and its motion is described by:
∂_t 𝐫_0^(n) = v^(n) 𝐮_0^(n) - 𝐅_𝐫𝐞𝐩 = v^(n) ( cosθ_0^(n), sinθ_0^(n) )^T - 𝐅_𝐫𝐞𝐩 .
Here 𝐅_𝐫𝐞𝐩 describes a weak repulsion force (see (<ref>)) acting on a polymer head while in contact with the contour of another polymer.
θ_0^(n) denotes the orientation of a polymer and v^(n) its free speed.
For this study, the speed of each polymer was chosen at random from a continuous uniform distribution in the interval [0.75, 1] v_0, where v_0 denotes the maximal velocity of a free polymer (see section S<ref> for further details on this velocity dispersion).
The orientation of a polymer's head evolves in time according to
∂_t θ_0^(n) = - δH̃_0^(n)/δθ_0^(n) + √(2v^(n)/L_p) ξ ,
where ξ is random white noise with zero mean and unit variance with the magnitude of the noise given by the prefactor.
This implies that individual polymers perform a persistent random walk with a path persistence length of L_p.
H̃_0^(n) sets the—in this study purely nematic—torque caused by interactions with other polymers.
Before we come to a description of H̃_0^(n), it will proof useful to introduce several other quantities.
The first is the distance vector
Δ𝐫_nm = ( 𝐫_0^(n) - 𝐫^(m) )_shDist .
This vector connects the tip of a polymer n with the position of an adjacent polymer's (denoted by m) contour that has the shortest possible distance.
The local orientation of the contour of the adjacent polymer m is given by θ_j^(m), which corresponds to the orientation of the polymer segment j of polymer n to which Δ𝐫_nm connects.
Second, if a polymer is interacting with several polymers at a time, we define a weighted average direction of the connecting vectors:
Δ𝐞_n := ∑_m C( |Δ𝐫_nm| ) Δ𝐫_nm/|Δ𝐫_nm| .
Here C( |Δ𝐫_nm| ) is a weighting factor accounting for the assumption that a more distant polymer contributes less to an interaction.
It is given by
C( |Δ𝐫_nm| ) = 0 if |Δ𝐫_nm| > d, and (d - |Δ𝐫_nm|)/d otherwise,
where d defines the interaction radius.
Using the orientation of the averaged connecting vector θ̃_n, we define an averaged nematic impact angle as Δθ̃^(n)_n = θ_0^(n) - θ̃_n.
Equipped with these definitions we are now in a position to write down the alignment potential as
H̃_0^(n) := α_n v_0/d cos(2Δθ̃^(n)_n) |Δ𝐞_n| ,
where the overall amplitude of the alignment is set by the absolute value of the weighted connecting vector, combined with the nematic alignment strength α_n.
The repulsion force 𝐅_𝐫𝐞𝐩 in (<ref>) is given by
𝐅_𝐫𝐞𝐩 = -s ∑_m C( |Δ𝐫_nm| ) Δ𝐫_nm/|Δ𝐫_nm| ,
which is used to prevent unphysical aggregation of polymers. It is assumed to be weak with s = 0.05.
Filaments in actomyosin motility assays are observed to conduct a trailing motion, where the tail of a polymer follows the movement of the tip <cit.>.
To emulate this behaviour, tail joints move according to
∂_t 𝐫_j^(n) = K_s ( | 𝐫_j^(n)-𝐫_j-1^(n)| - b ) 1/2( 𝐮_j+1^(n)+𝐮_j^(n) ) .
Here, the second factor, 1/2( 𝐮_j+1^(n)+𝐮_j^(n) ), ensures that the movement is directed along the average orientation of the segments adjacent to joint j.
The remainder of (<ref>) corresponds to a linear (Hookian) restoring force with spring coefficient K_s = 200 that ensures an average length b of the cylindrical segments between bonds.
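As an illustration of how the tip equations above can be advanced in time, a minimal Euler-Maruyama sketch for a single tip update is given below; the explicit discretization and all variable names are our own assumptions and not the original WASP implementation.

import numpy as np

rng = np.random.default_rng(0)

def tip_step(r0, theta0, v, align_torque, f_rep, dt, L_p):
    """Advance one polymer tip (position r0, orientation theta0) by a time step dt.

    align_torque stands for the functional derivative of the alignment
    potential with respect to theta0; f_rep is the weak repulsion force.
    """
    # persistent random walk: deterministic alignment torque plus rotational noise
    theta0 = theta0 + dt * (-align_torque) \
             + np.sqrt(2 * v / L_p * dt) * rng.standard_normal()
    u0 = np.array([np.cos(theta0), np.sin(theta0)])
    # self-propulsion along the tip orientation, minus the weak steric repulsion
    r0 = r0 + dt * (v * u0 - f_rep)
    return r0, theta0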
§ ONSET OF NEMATIC PATTERNS
In this section we provide further information on how the phase diagram shown in Fig. 1(c) of the main text was obtained.
To determine the density ρ_n as a function of L_p above which nematic patterns are formed, we performed exploratory simulations in the phase space spanned by the (reduced) global polymer density ⟨ρ⟩ L^2
and the persistence length L_p.
To guarantee that the dynamics has reached a steady state we ran these simulations for a time 15 873 which is much larger than the initial timescale t_0 ≈ 100 it takes for a system to reach the quasi-stationary, disordered state <cit.>.
Figure <ref> shows the results of the in silico parameter scans in density at a set of fixed values for L_p: The blue triangles and red squares correspond to steady states where we visually observed nematic patterns or a disordered state, respectively.
To determine the phase boundary ρ_n (L_p) we fitted a function f_ρ(L_p) = a/L_p (with a as free fitting parameter) to the data points with the lowest density that still exhibited nematic order [solid line in Fig. <ref>].
The shape of the boundary line is dictated by the interplay between two counteracting effects: density-dependent, interaction-induced ordering and rotational diffusion.
The former increases linearly with density, and above a critical density spontaneous ordering begins to predominate over diffusion.
Thus, the critical density is proportional to the rotational diffusion coefficient and therefore ∝ L_p^-1 in our case.
We take f_ρ(L_p) as an approximation to the density corresponding to the onset of nematic patterns, ρ_n (L_p).
To further test whether this is a satisfactory approximation for the phase boundary, we ran ten independent simulations at a density corresponding to ρ_n [cf. dots in Fig. 1 (c) of the main text] and further ten at 0.9 ρ_n for several different L_p for a twice as large simulation time of 31 746.
All simulations at ρ_n formed ordered patterns, while none at 0.9 ρ_n did, affirming that f_ρ(L_p) adequately approximates the position of the isotropic-nematic transition.
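For illustration, the boundary fit can be reproduced with a few lines of Python; the data arrays below are placeholders standing in for the lowest densities at which nematic order was observed at each persistence length, not the actual simulation results.

import numpy as np
from scipy.optimize import curve_fit

Lp_values = np.array([8.0, 11.1, 14.3, 20.6])          # illustrative values only
rho_lowest_ordered = np.array([4.4, 3.2, 2.5, 1.7])    # illustrative values only

def f_rho(Lp, a):
    # critical density scales with rotational diffusion, i.e. with 1/L_p
    return a / Lp

(a_fit,), _ = curve_fit(f_rho, Lp_values, rho_lowest_ordered)
print(f"fitted prefactor a = {a_fit:.2f}")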
§ DEFECT DETECTION
In this section, we explain the algorithms we used to identify topological defects in simulations of both the hydrodynamic theory and the agent-based model.
To algorithmically detect -1/2 defects in
both approaches, we took advantage of the fact that inside a defect core the topological charge density q, defined as <cit.>
q = 1/4 π( ∂_xQ̂_x a∂_yQ̂_y a - ∂_xQ̂_y a∂_yQ̂_x a),
has a very large negative value (with Q̂=Q/ρ and Q defined as in (<ref>)), whereas in other regions of space its absolute value is much smaller (cf. lower right panel of Fig. 2(a) and (d) of the main text). We exploit this fact and define any contiguous region of space in which q falls below a certain threshold value q_thrs as one -1/2 defect.
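As an illustration, the charge-density evaluation and the thresholding can be sketched in a few lines of Python; the function names, the axis convention (axis 0 taken as x, axis 1 as y), and the use of scipy.ndimage.label to identify contiguous regions are our own choices and not part of the original analysis code.

import numpy as np
from scipy import ndimage

def charge_density(Qxx, Qxy, dx=0.5):
    """Topological charge density q for a traceless symmetric Q̂ on a 2D grid."""
    Qyx, Qyy = Qxy, -Qxx
    dQxx_dx, dQxx_dy = np.gradient(Qxx, dx)
    dQxy_dx, dQxy_dy = np.gradient(Qxy, dx)
    dQyx_dx, dQyx_dy = np.gradient(Qyx, dx)
    dQyy_dx, dQyy_dy = np.gradient(Qyy, dx)
    # q = 1/(4 pi) ( d_x Q̂_xa d_y Q̂_ya - d_x Q̂_ya d_y Q̂_xa ), summed over a
    q = (dQxx_dx * dQyx_dy - dQyx_dx * dQxx_dy
         + dQxy_dx * dQyy_dy - dQyy_dx * dQxy_dy) / (4 * np.pi)
    return q

def count_minus_half_defects(q, q_thrs=-0.032):
    # each contiguous region with q below the threshold counts as one -1/2 defect
    labels, n_defects = ndimage.label(q < q_thrs)
    return n_defects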
The position of -1 / 2 defects in the agent-based model is obtained in the following way.
Please first note that the main purpose of the data from the agent-based simulations in Fig. 3(c)-(e) is to qualitatively confirm the trend observed in the hydrodynamic model. To quantify the data with a high degree of precision would require averaging over large ensembles, which would be numerically prohibitively demanding given the very long time scales on which the observed phenomena occur.
The total runtime of each simulation was 142 857 (which is much longer than the timescale of the undulation dynamics; cf. Movies S1 and S2), from which we cut an initial transient (cf. section S<ref>) before starting the measurement.
For each value of L_p/⟨ϕ⟩ we averaged over ten independent simulations.
To obtain q in agent-based simulations, we rasterized space into a grid with a grid spacing of Δ x = 0.3, which is small enough to resolve the structure of a defect (note that the qualitative agreement between the agent-based simulations and hydrodynamic model, shown in Fig. 3 of the main text, does not depend on the exact choice of this and the following numerical parameters).
We used the orientations θ_0^(n) of polymer tips residing inside each grid point at a given time to calculate a local value of Q̂ using (<ref>). To suppress noise due to stochastic particle fluctuations, we further averaged over a time span of 15.9, which is much shorter than density rearrangements due to bending undulations.
With this we obtained q(𝐫, t) using (<ref>).
We chose q_thrs = - 0.032, which is much lower than typical values of q outside defects.
Additionally, to avoid classifying small and short-lived density peaks that occur sporadically in the simulations as CTDs, we heuristically filtered them out by requiring the charge density to be below q_thrs for a time of at least 159 for a CTD to be detected.
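For completeness, a hedged sketch of the rasterization step, i.e., binning polymer-tip orientations into the per-cell nematic tensor Q̂, could look as follows; the array names and the periodic wrapping are illustrative assumptions rather than the original implementation.

import numpy as np

def local_Q(x, y, theta, box_length, dx=0.3):
    """Per-cell components of Q̂ from tip positions (x, y) and orientations theta."""
    n = int(np.ceil(box_length / dx))
    ix = (x // dx).astype(int) % n
    iy = (y // dx).astype(int) % n
    counts = np.zeros((n, n))
    Qxx = np.zeros((n, n))
    Qxy = np.zeros((n, n))
    np.add.at(counts, (ix, iy), 1.0)
    np.add.at(Qxx, (ix, iy), np.cos(2 * theta))   # 2 n_x n_x - 1 = cos(2 theta)
    np.add.at(Qxy, (ix, iy), np.sin(2 * theta))   # 2 n_x n_y     = sin(2 theta)
    norm = np.maximum(counts, 1.0)                # avoid division by zero in empty cells
    return Qxx / norm, Qxy / norm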
The hydrodynamic model provides, by construction, direct access to the Q-tensor, from which the function q given by Eq. (<ref>) can be computed directly. The positions of -1/2 defects are defined as local minima of the function q and, for consistency, the same value of q_thrs is used as for the agent-based simulations.
For the measurements in the hydrodynamic model, we discarded the data collected in the first half of the simulation runs in order to avoid any influence of initial transients.
To generate the data shown in Fig. 3 (a), we classified all runs in which CTDs were detected as CTD-dominated (blue dots in Fig. 3 (a)). The distinction between FAEs and stable bands was made via visual inspection.
§ FLUX MEASUREMENT THROUGH DEFECTS
In the main text, we studied the mass flow through a defect as well as the speed of particles during a CTD passage; see Figs. 4(b) and 4(e), respectively.
To this end, we needed detailed information about the position and velocity of particles as they transitioned from one arm of a defect to another.
To determine these quantities, we leveraged the possibility offered by the agent-based simulations to access the position of each individual polymer at any given point in time.
In order to deduce that a given polymer has transitioned from one arm of a defect to another, several pieces of information are needed.
First, one needs a criterion to determine algorithmically whether a polymer belongs to a given arm at a given time.
For this we used the following heuristics:
Over each arm of a defect we placed a round “classification area”, which is large enough to cover the full width of the nematic lane (blue regions in Fig. <ref>, diameter 22 L).
The positions of the classification areas were chosen such that they roughly coincided with the area where the nematic lanes recovered their full width (midpoint distance of classification areas to defect: 26 L in Fig. <ref>).
Every polymer inside one of these regions is classified as belonging to the given defect arm.
Second, one needs a criterion to determine the origin of particles that have been classified as belonging to a particular arm.
For this we introduced an additional classification area which encompasses all parts of the simulation box farther away from the defect core than a specific distance, cf. orange region in Fig. <ref> (distance to defect: 40 L).
(Note that the black colored area does not pertain to any classification area.)
After this partitioning, we measured the currents from one region to another with the heuristics described below.
We did this for a time span sufficiently long that many particles can travel from one blue region to another blue region (cf. Fig. <ref>), but short enough that bending undulations do not change the position of the individual lanes significantly.
Data in Fig. 4(b) are averaged over a time of 159; data in Fig. 4(e) are averaged over 4 019 trajectories within a time of 317.
For the flux measurement heuristics, we assigned a unique identifier id to every classification area.
We then checked, in short intervals of 0.16, for every polymer i whether its position coincided with one of the classification areas.
If this was the case, polymer i was assigned the identifier of the region and the time of assignment t_assign was saved.
If polymer i already had a different identifier id' assigned (and hence also a different t_assign'), this meant that it had traveled from another classification area into the current region (without crossing a third region in the meantime).
In such a case, we stored the pairs of tuples (id', t_assign') and (id, t_assign), which allow (combined with the also-saved information on the position and speed of every polymer at every interval) the reconstruction of the path polymer i has taken propagating from region id' to id. Subsequently, we replaced the assigned identifier and assignment time of polymer i with that of the current region and the current time and continued the simulation.
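A minimal sketch of this bookkeeping, with data structures of our own choosing (the original implementation may differ), is given below.

last_region = {}      # polymer id -> (region id, assignment time)
transitions = []      # recorded (id_from, t_from, id_to, t_to) tuples

def update_assignments(t, positions, region_of):
    """Called at each checking interval; region_of(pos) returns a region id or None."""
    for i, pos in positions.items():
        rid = region_of(pos)
        if rid is None:
            continue                                   # polymer outside all classification areas
        if i in last_region and last_region[i][0] != rid:
            old_rid, t_old = last_region[i]
            transitions.append((old_rid, t_old, rid, t))   # polymer i moved old -> new region
        last_region[i] = (rid, t)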
§ DISPERSION IN THE POLYMER VELOCITY
Most studies of active matter assume the speed of agents to be constant and uniform <cit.>. Yet, experiments of the actin motility assay show actin filaments to have a broad distribution of velocities <cit.>.
To take into account the effects of such a velocity dispersion, we drew the assigned speed of polymers from a distribution (cf. Section S<ref> of this Supplemental Material).
We have found that the introduction of such a velocity dispersion does not hinder the formation of nematic lanes.
To additionally check whether particles that possess different free velocities behave differently on the level of macroscopic structures—for example by causing an effective sorting of particles into spatially separate populations, where only relatively fast/slow particles form part of patterns—we subdivided the system into a grid with a grid spacing of Δ x = 0.3 and determined for each grid-cell the locally averaged ⟨ v^(n)⟩ of particles inside a simulation exhibiting nematic lanes and CTDs.
Any local accumulation of fast/slow particles would lead to a different value of ⟨ v^(n)⟩ when compared to the global average ⟨ v^(n)⟩_glob.
As can be inferred from Fig. <ref>, the system is well mixed (up to random fluctuations) with respect to polymer velocities.
We further found that the introduction of a velocity dispersion prevented the decay of purely nematic patterns into oppositely propagating polar waves (cf. Ref <cit.>), which hence seems to be an artefact of the assumption of equal and uniform velocities.
§ WIDTH OF NEMATIC LANES
As discussed in
the main text,
we measured the width of nematic lanes as a function of density ⟨ϕ⟩ in both the agent-based simulations and the hydrodynamic model (at a constant system size).
To this end, we performed several simulations at different polymer densities but at a fixed persistence length (resp. several realizations of the hydrodynamic model at different ⟨ϕ⟩ and fixed λ).
After these systems had reached a configuration in which they exhibited a single straight lane, we measured the width of the band and the average density ⟨ϕ⟩_bg in the disordered background.
(The width is determined by averaging the density of the system along the axis of the straight lane, which results in a one dimensional density profile.
The width of the lanes in the hydrodynamic model is then defined as the distance between the two points with the maximal gradient of this curve, which can easily be obtained due to the absence of noise.
In the agent-based simulations the lane width is heuristically defined as the width of the region where this profile exceeds a threshold of three times ⟨ϕ⟩_bg.)
As shown in Fig. <ref>, the thickness of the lanes grows linearly with density in both the agent-based simulations and hydrodynamic model, while the density of the disordered background remains constant.
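A compact sketch of this width measurement for the agent-based data, assuming a two-dimensional density array with the lane aligned along one axis, is given below; the names, the array layout, and the background mask are illustrative assumptions.

import numpy as np

def lane_width(phi, dx, bg_mask):
    """Width of a single straight lane from a 2D density array phi.

    phi is assumed to have the lane aligned along axis 0; bg_mask is a boolean
    mask selecting the disordered background region.
    """
    profile = phi.mean(axis=0)            # project the density along the lane axis
    phi_bg = phi[bg_mask].mean()          # mean density of the disordered background
    in_lane = profile > 3.0 * phi_bg      # heuristic threshold used in the SI
    return in_lane.sum() * dx, phi_bg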
§ FAE DETECTION
In this section we describe the procedure we used to measure the mean number of FAEs present at different parameter regimes in the agent-based simulation (Fig. 3(e) of the main text).
For this we logged the formation of every FAE in the investigated systems; the most reliable method for detecting FAEs turned out to be manual inspection of simulation videos.
To obtain the mean number of FAEs present, we divided the total lifetime of all detected FAEs in the system by the total observation time.
For every investigated L_p in the agent-based simulations, we averaged over ten independent simulations, which each ran for a time of 142 857.
It is worth noting that agent-based simulations started in a parameter regime in which systems predominantly exhibit FAEs or stable lanes (i.e., high L_p; see also section “From CTDs to FAEs and bands” in the main text)
do not immediately form straight lanes at the onset of pattern formation, but frequently dwell at first in a state of high activity (cf. left panel of Fig. 3(b) in the main text) in which no FAE can develop.
We measured the duration of this initial transient (“dwell-time”) and found that it is shorter than a time of 70 000 in more than ninety percent of the cases.
We discarded this initial time span in the measurements of the mean numbers of CTDs (cf. section S<ref>) and FAEs present to rule out any influence of the initial transient on the results.
Further, we studied the temporal evolution of filamentous arc ejections.
The motion of a separating arc in the agent-based and the hydrodynamic model can be visualized using a kymograph of the density projection shown in Fig. <ref>.
As can be inferred from the bending of the lateral extrusions, the separation process of the arcs starts slowly and continues to accelerate until complete ejection and eventual dissolution of the arc.
§ HYDRODYNAMIC MODEL
To motivate our hydrodynamic model we start from the general form of the evolution equation for the probability distribution function P(𝐫,θ,t):
∂_t P(𝐫,θ,t) = - L_p ∂_i [ n_i P(𝐫,θ,t) ] + ∂_θ^2 P(𝐫,θ,t) + interactions ,
where 𝐧=(cosθ,sinθ) is the director vector and L_p is the path persistence length of the polymers.
Time is measured in units of the diffusion coefficient.
Note that we only consider rotational diffusion and neglect translational diffusion.
In the following the space and time dependencies of the probability density are suppressed for brevity.
Contributions from the interaction between the polymers can be introduced in the form of collision integrals in a Boltzmann ansatz <cit.>, or by using the gradient of the interaction-induced current in a Smoluchowski approach <cit.>.
We define the particle density ρ, the polarity vector 𝐩, and the nematic Q-tensor as the first three moments of the probability distribution function:
ρ := ∫_0^2π dθ P(θ) ,
p_i := ∫_0^2π dθ n_i P(θ) ,
Q_ij := ∫_0^2π dθ ( 2n_i n_j - δ_ij ) P(θ) ,
where the subscripts i and j denote the Cartesian components and δ_ij represents the Kronecker delta.
It is convenient to consider Fourier harmonics of the probability distribution function:
P(𝐫, θ)=∑_k=-∞^∞ P_k(𝐫) e^i k θ.
According to their definitions, ρ , p_i, and Q_ij can be expressed via Fourier harmonics as follows:
ρ = 2π P_0 ,
p_i = π( (P_1 + P_-1), i(P_1 - P_-1) ) ,
Q_ij = π( (P_2 + P_-2), i(P_2 - P_-2) ) ,
where the symbol i denotes the imaginary unit.
By introducing the projection onto the m^th harmonics of P:
(…)^m := 1/(2π) ∫_0^2π dθ e^-i m θ (…) ,
one obtains the following contributions from the advective and diffusive parts of (<ref>) to the evolution equations of the m-th Fourier harmonics (P_m):
∂_t P_m = -m^2 P_m - L_p ∂_i ( n_i P(𝐫,θ) )^m
= -m^2 P_m - L_p/2 [ ∂_x ∑_k P_k (δ_k,m-1 + δ_k,m+1) + ∂_y ∑_k P_k (δ_k,m-1 - δ_k,m+1)/i ] .
In terms of the collective variables this can be rewritten as:
∂_t ρ = - L_p ∂_i p_i ,
∂_t p_i = - p_i - L_p/2 ∂_i ρ + L_p/2 ∂_j Q_ij ,
∂_t Q_ij = -4 Q_ij - L_p/2 [ ∂_i p_j + ∂_j p_i - δ_ij ∂_k p_k ] .
Note that summation over repeated indices is implied, following the Einstein convention.
Since we consider a system with purely nematic interactions, the polar order decays on short time scales for all strengths of self-propulsion.
Thus, the polarity field 𝐩 equilibrates fast and can be eliminated adiabatically to arrive at dynamic equations for the density ρ and Q-tensor alone.
We find after rescaling time by a factor of 4:
∂_t ρ = λ^2 Δρ + λ^2 ∂_i ∂_j Q_ij ,
∂_t Q_ij = - Q_ij + λ^2/2 Δ Q_ij + λ^2 [ ∂_i ∂_j ρ ]^st ,
where we have introduced the parameter λ:=L_p/(2√(2)), Δ=∂_i∂_i denotes the Laplace operator, and [...]^st indicates the symmetric and traceless part of the expression.
We now discuss the physical meaning of each term on the RHS of Eqs. (<ref>).
The first term in the density equation Eq. (<ref>) acts like effective translational diffusion, even though it actually originates from single-particle advection (note that true translational diffusion is neglected in our model).
The second term in Eq. (<ref>) represents an anisotropic flux of material along the nematic order. This term enhances diffusion along the direction of the eigenvector of Q_ij corresponding to its positive eigenvalue, and suppresses it along the perpendicular direction. It can also be treated as a curvature-induced flux, since it vanishes in a uniformly ordered state.
The first term in the evolution equation of the nematic tensor Eq. (<ref>) is due to the thermal rotational diffusion. If there were no interaction between polymers, the action of this term would lead to disordering.
The second term in Eq. (<ref>) penalizes the distortion of Q_ij and represents the elasticity in terms of liquid crystal theory.
The last term of Eq. (<ref>) provides the coupling between the equations. It can be treated simply as an anisotropic diffusive contribution. But it also introduces “aligning torque” by changing the orientation of nematic order in the presence of the density gradients.
Finally, besides the diffusion- and advection-related terms we need to add interaction-induced contributions.
Inspired by Refs. <cit.> we also introduce the following terms to describe the nematic interactions of the polymers:
∂_t ρ = ⋯ + ν̃_ρ Δρ^2 + χ̃_ρ ∂_i ∂_j (ρ Q_ij) ,
∂_t Q_ij = ⋯ + α̃ ρ Q_ij - β̃ Q^2 Q_ij + κ̃_ρ ⟨ρ⟩ Δ Q_ij + ω̃^a [ 2 ∂_i ρ ∂_j ρ ]^st .
The ν̃_ρ-related term in Eq. (<ref>) comes from the excluded volume interactions between the polymers (however an analogous term occurs due to the “collision" of polymers, e.g., see Ref. <cit.>).
The last term in Eq. (<ref>) is an interaction-induced flux representing a density-dependent correction <cit.> to the last term of Eq. (<ref>).
The first term of Eq. (<ref>) promotes density-dependent ordering, which competes with the motility-induced disordering coming from the first term of Eq. (<ref>); β is a non-equilibrium Landau coefficient setting the magnitude of order in the bulk.
κ̃_ρ⟨ρ⟩ contributes to the restoring elastic constant. As can be seen, this is the only term in our theory that is linearized around the mean density value, whereas in most hydrodynamic models almost all terms in Eq. (<ref>) are subjected to this procedure. We linearize this particular term for two reasons. Firstly, for the sake of simplicity: we want this term to represent one particular effect, namely elasticity (or “rigidity” in terms of the material). Secondly, with this linearization it is simpler to interpret the term κ̃_ρ⟨ρ⟩Δ Q_ij as stemming from a free energy, while the contribution κ̃Δ(ρ Q_ij) could not be obtained from a free energy.
Finally, the last term of Eq. (<ref>) describes the non-equilibrium anchoring to the density interface <cit.>.
We emphasize again that we are not linearizing ν̃_ρ, χ̃_ρ, and ω̃^a - related terms around the mean density (the latter of which would simply disappear completely in that case).
Such higher-order terms are typically linearized (or ignored) in well-controlled closures in the vicinity of the isotropic/nematic transition (e.g., within the Boltzmann–Ginzburg–Landau approach <cit.>).
However, our observations hint that this linearization procedure, widely used in the field of active nematics, may result in some physical processes not being accounted for by the resulting models, which in turn can lead to some phenomena (such as CTDs) escaping the researchers' gaze as well.
To obtain the equations of motion presented in the main text we simply combine (<ref>) and (<ref>) and re-normalize density by the critical one ϕ=ρ/ρ_n. The coefficients are also renamed accordingly: κ̃_ρ→κ_ϕ, etc.
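To make the structure of the model concrete, the following hedged Python sketch transcribes one right-hand-side evaluation of the density and Q-tensor equations above on a periodic grid (before the rescaling ϕ=ρ/ρ_n); the coefficient and function names are ours, Q^2 is read as Q_kl Q_kl, and the snippet is an illustration rather than the code used for the figures.

import numpy as np

dx = 0.5
ddx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
ddy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
lap = lambda f: (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                 np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

def rhs(rho, Qxx, Qxy, lam, nu, chi, alpha, beta, kappa, omega_a, rho_mean):
    """One evaluation of d(rho)/dt and dQ/dt on a periodic grid (Qyy = -Qxx)."""
    Qyy = -Qxx
    Q2 = 2.0 * (Qxx**2 + Qxy**2)                      # Q_kl Q_kl for a traceless symmetric Q
    didjQ = ddx(ddx(Qxx)) + 2.0 * ddx(ddy(Qxy)) + ddy(ddy(Qyy))
    didj_rhoQ = ddx(ddx(rho * Qxx)) + 2.0 * ddx(ddy(rho * Qxy)) + ddy(ddy(rho * Qyy))
    # density equation: diffusion, anisotropic flux, excluded volume, density-dependent coupling
    drho = lam**2 * (lap(rho) + didjQ) + nu * lap(rho**2) + chi * didj_rhoQ
    # symmetric traceless parts of d_i d_j rho and of 2 d_i rho d_j rho
    hxx, hxy, hyy = ddx(ddx(rho)), ddx(ddy(rho)), ddy(ddy(rho))
    gx, gy = ddx(rho), ddy(rho)
    src_xx = lam**2 * 0.5 * (hxx - hyy) + omega_a * (gx**2 - gy**2)
    src_xy = lam**2 * hxy + omega_a * 2.0 * gx * gy
    lin = -1.0 + alpha * rho                          # disordering vs. density-dependent ordering
    diff = lam**2 / 2.0 + kappa * rho_mean            # elasticity
    dQxx = lin * Qxx - beta * Q2 * Qxx + diff * lap(Qxx) + src_xx
    dQxy = lin * Qxy - beta * Q2 * Qxy + diff * lap(Qxy) + src_xy
    return drho, dQxx, dQxy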
As discussed in the main text, the hydrodynamic model allows to directly access the direction and magnitude of the anisotropic active flux -∂_j(χ Q_ij). To complement the illustration of this flux in Fig. 4(d) of the main text, we show in Fig. <ref> a direct plot of this observable as recorded in the hydrodynamic model.
§ SUPPLEMENTARY MOVIES
Movie S1
Constantly undulating nematic lanes in an agent-based simulation.
(Parameters are: ρ L^2=3.15, L_p=11.1. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S2
Emergence of a multitude of condensed topological defects in agent-based simulations. Note that the lateral movement of lanes happens on long timescales. A single frame roughly corresponds to the time of 162 a straight moving particle with a velocity of v_0 needs to cross the whole system. (Parameters are: ρ L^2=3.2, L_p=11.9. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S3
Two condensed topological defects are formed simultaneously in an agent-based simulation. Due to continued undulation of the connecting nematic lanes the defects eventually disintegrate.
(Parameters are: ρ L^2=3.47, L_p=11.1. Scale-bar: 15L. Density averaged over a time of 3 for better visibility.)
Movie S4
Several filamentous arc ejections develop in succession along a nematic lane in an agent-based simulation.
(Parameters are: ρ L^2=2.7, L_p=14.3. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S5
Straight and stable nematic lane in an agent-based simulation.
(Parameters are: ρ L^2=1.9, L_p=20.6. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S6
Details of a flux in an agent-based simulation from one arm of a condensed topological defect to the two others. The path taken by the polymer heads is traced out. Only trajectories that start in the upper left arm and eventually go to either the lower or the upper right arm are shown.
(Parameters are: ρ L^2=3.5, L_p=11.1.)
Movie S7
Emergence of a multitude of condensed topological defects in a simulation of the hydrodynamic model. (Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1,⟨ϕ⟩=1.1 )
Movie S8
Several filamentous arc ejections develop in succession along a nematic lane in a simulation of the hydrodynamic model. (Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1.2,⟨ϕ⟩=1.1)
Movie S9
Straight and stable nematic lane in a simulation of the hydrodynamic model.
(Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1.4,⟨ϕ⟩=1.1)
Movie S10
Three-beam symmetrical arrangement of sources of polar particles. The ensuing nematic currents eventually form a condensed topological defect.
(Parameters are: ρ L^2=3.6, L_p=14.3. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
|
http://arxiv.org/abs/2307.09367v1 | 20230714134844 | LEST: Large-scale LiDAR Semantic Segmentation with Transformer | [
"Chuanyu Luo",
"Nuo Cheng",
"Sikun Ma",
"Han Li",
"Xiaohan Li",
"Shengguang Lei",
"Pu Li"
] | cs.CV | [
"cs.CV"
] |
LEST: Large-scale LiDAR Semantic Segmentation with Transformer
Chuanyu Luo, Nuo Cheng, Sikun Ma, Han Li, Xiaohan Li, Shengguang Lei, Pu Li
Chuanyu Luo, Nuo Cheng are with the LiangDao GmbH, Berlin, 12099, Germany and also with the Ilmenau University of Technology, Ilmenau, 98693, Germany (email: [email protected]; [email protected]).
Sikun Ma, Han Li, Xiaohan Li, Shengguang Lei are with the LiangDao GmbH, Berlin, 12099, Germany (email: [email protected]; [email protected];[email protected];[email protected])
Pu Li is with the Ilmenau University of Technology, Ilmenau, 98693, Germany (email: [email protected]).
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Large-scale LiDAR-based point cloud semantic segmentation is a critical task in autonomous driving perception. Almost all of the previous state-of-the-art LiDAR semantic segmentation methods are variants of sparse 3D convolution. Although the Transformer architecture is becoming popular in the field of natural language processing and 2D computer vision, its application to large-scale point cloud semantic segmentation is still limited. In this paper, we propose a LiDAR sEmantic Segmentation architecture with pure Transformer, LEST. LEST comprises two novel components: a Space Filling Curve (SFC) Grouping strategy and a Distance-based Cosine Linear Transformer, DISCO. On the public nuScenes semantic segmentation validation set and SemanticKITTI test set, our model outperforms all the other state-of-the-art methods.
Point cloud semantic segmentation, representation learning, long sequence modeling, linear Transformer.
§ INTRODUCTION
In an autonomous driving system, LiDAR-based point cloud 3D environment perception is important for safe and reliable driving. Unlike image-based 2D perception tasks, the large-scale point cloud is irregular, sparse and unordered. The 3D environment perception includes tasks such as 3D object detection and point cloud semantic segmentation.
Unlike the 3D object detection task, the 3D semantic segmentation task usually requires more granular and spatial information, and these requirements make the semantic segmentation task more challenging.
Among deep learning-based 3D perception approaches, the pioneering work PointNet <cit.> is the first to aggregate local unordered point features with a symmetric function, max-pooling. PointPillars <cit.> applies a simple PointNet to each pillar to learn a representation of the points within it. The pillars are then mapped into a 2D Bird's-Eye-View (BEV). A series of dense 2D convolution layers is further used for 3D object detection. However, mapping 3D objects to the 2D BEV can result in significant information loss, especially for small objects in the semantic segmentation task.
Traditional dense 3D convolution is inefficient for processing 3D sparse data. In 3D object detection, SECOND <cit.> introduces a sparse 3D convolution operator to address this issue. Following this, Polar-Coordinate-System-based Cylinder3D <cit.> and Neural-Architecture-Search-based SPVNAS <cit.> apply the sparse 3D convolution to the 3D semantic segmentation task and achieve state-of-the-art results.
Although the Transformer <cit.> is dominant in natural language processing (NLP) and has become popular in the image-based 2D computer vision field, its application to large-scale 3D point clouds is still limited. However, a voxel in 3D perception has a similar representation to a word in NLP, as both can be generalized as a token with high-dimensional features learned through training. The comparison of self-attention and convolution for voxels is shown in Figure <ref>.
One of the challenges when applying the self-attention mechanism to voxels/tokens is the extremely large number of voxels. The vanilla Transformer has quadratic complexity in the number of voxels. To address this issue, inspired by the Swin Transformer <cit.> for image tasks, SST <cit.> separates the voxels into rectangular windows, and the complexity is reduced by using self-attention only within each window. The shifted window method is then used to expand the receptive field across different windows.
However, one limitation of SST is that the number of voxels in each fixed-size window differs significantly due to the varying density of the point cloud. This variation in the number of voxels can lead to inefficiencies in parallel training and inference, as well as increased memory usage. Furthermore, the shifted-window-based method can be regarded as an extension of ensemble models, and the key advantage of the Transformer, a global receptive field, is not theoretically guaranteed.
For long-range sequence tasks in NLP, where thousands of words are processed simultaneously, linear Transformer methods are more popular. Linear Transformers have only linear complexity in the number of tokens and have a theoretical global receptive field. The key idea of linear Transformers is to decompose the softmax operator of the self-attention module into a linear form <cit.>.
In our paper, we propose a Space Filling Curve (SFC) Grouping strategy to efficiently separate the voxels into multiple groups and aggregate the local voxel features in each group with a downstream vanilla Transformer. Additionally, we propose a novel linear Transformer to build a global receptive field with only linear complexity and strong representation ability.
Our contributions can be summarized as follows:
1. We propose a voxel-based, Transformer-based 3D backbone for the LiDAR semantic segmentation task, which achieves impressive results compared with the other state-of-the-art methods.
2. A novel SFC Grouping strategy is proposed, with which local voxel features can be aggregated within a group. It is proved that the combination of our grouping strategy with the vanilla Transformer has the lowest expected complexity.
3. We propose a novel linear Transformer method with a global receptive field but only linear complexity. Linear Transformers are popular in NLP, especially for long-range sequence tasks. As far as we know, we are the first to unify the 3D perception task in computer vision with the long-range sequence task <cit.> in NLP. The proposed unified method can reduce the domain gap between CV and NLP research.
§ RELATED WORK
§.§ Large-scale point cloud semantic segmentation.
In the large-scale point cloud semantic segmentation task, the mainstream approaches include point-based methods, projection-based methods and voxel-based methods.
§.§.§ Point-based method.
The pipeline of most point-based methods includes point sampling, neighbor searching, feature aggregation and classification <cit.>. One key disadvantage of point-based methods is that inefficient neighbor-searching methods such as K-Nearest Neighbors (KNN) are used recursively. Although MVP-Net <cit.> replaces the KNN method with Space Filling Curves for high efficiency, the performance of point-based methods is still limited compared to voxel-based methods.
§.§.§ Projection-based method.
To leverage the success of 2D images, projection-based methods map the 3D points to a 2D pseudo-image, aggregate the features from neighboring pixels, and then inversely map the pixels to the 3D point cloud. The projection-based methods <cit.> map the point cloud to a spherical projection, and the PolarNet <cit.> maps the point cloud to a polar BEV. However, the information loss due to the 3D-to-2D projection limits the performance of projection-based methods.
§.§.§ Voxel-based method.
Voxel-based methods <cit.> <cit.> <cit.> use PointNet to learn voxel representations from the points within each voxel. The voxel features are then aggregated using Sparse 3D Convolution <cit.>, which is efficient for sparse data and incorporates prior knowledge of the voxel and its neighbors. In large-scale scenes, 3D convolution has a limited receptive field and cubic complexity in the size of the convolution kernel.
§.§ Space filling curves grouping.
A space filling curve (SFC) is a sorting method that maps high-dimensional data to a one-dimensional sequence while preserving the locality of the data points <cit.>. One of the most widely used and efficient SFC methods is the Morton order <cit.>, also known as Z-order because of the curve shape in the 2D case. Along the SFC-sorted sequence, the data points can be separated into different groups efficiently, and all the groups contain almost the same number of data points, as illustrated in Figure <ref>.
§.§ Transformers in vision tasks.
The Transformer <cit.> was first proposed in the NLP field. In 2D computer vision tasks, ViT <cit.> splits the image into patches and then uses the vanilla Transformer. PVT <cit.> is the first hierarchical design for ViT and is used in various dense prediction tasks such as 2D object detection and semantic segmentation. The Swin Transformer <cit.> is a multi-stage hierarchical architecture that uses Transformers in gradually shifted windows to extend the pixel receptive field.
In the 3D object detection task, VoTr <cit.> proposes the first Transformer-based model. In VoTr, a GPU-based hash table is used to search neighboring voxels, with each voxel serving as a query in the self-attention module. The work most related to our proposed SFC-Grouping method is SST <cit.>, a 3D object detection architecture. SST first pillarizes the LiDAR points and then groups the pillars with gradually shifted windows. A Transformer is used in each group to aggregate the pillar features. However, the window-based method requires substantial extra memory and is not feasible for the 3D semantic segmentation task. Another problem is that, even though the groups are gradually shifted, the tokens still do not have a true global receptive field.
§.§ Transformer with linear complexity
Dot-product attention with softmax normalization in the Transformer self-attention module is the key to long-range dependency and a global receptive field. However, the quadratic complexity of the self-attention module makes it impractical for long-sequence tasks in NLP, or for 3D semantic segmentation tasks that involve thousands of voxels.
Recently, many works have been proposed to make the Transformer more efficient with only linear complexity. The kernel-based linear Transformer <cit.> uses a kernel function to approximate the softmax normalization and linearize the computation in self-attention. SOFT <cit.> proposes a softmax-free Transformer and uses a Gaussian kernel function to replace the dot-product similarity. The work most related to our proposed DISCO is CosFormer <cit.>, which replaces the softmax operator with two attention properties: non-negativeness and a non-linear re-weighting scheme. Like the vanilla Transformer, CosFormer still uses the dot product as the token similarity. In our proposed DISCO module, the similarity is based on the 1-norm distance between tokens and is shown in the ablation studies to perform better than the dot-product similarity of CosFormer.
§ METHODOLOGY
To tackle the semantic segmentation task in a large-scale LiDAR-based scenario, we propose a pure Transformer-based architecture called LEST. LEST includes two novel components: a Space Filling Curve (SFC) Grouping Transformer, which builds voxel interactions within a group, and a Distance Cosine linear Transformer (DISCO), which provides a global receptive field across groups. The whole architecture is shown in Figure <ref>.
§.§ Space Filling Curves Grouping
In section <ref>, we introduced the Space Filling Curves (SFC) method, which is used to sort high-dimensional data as a 1D sequence. In this work, we use the SFC method to group nearby voxels together. Figure <ref> shows a comparison between the commonly used Window Grouping method <cit.> and our proposed SFC Grouping method. After the voxels are grouped, a vanilla Transformer is used for each group. By using the shifted grouping method, the receptive field of a voxel is expanded.
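A minimal sketch of the grouping step is given below (illustrative, not the released implementation): it computes Morton (Z-order) keys by bit interleaving, sorts the voxels along the curve, and cuts the sequence into groups of almost equal size. The group size and the integer offset used for the shifted grouping are free hyper-parameters here, and the voxel coordinates are assumed to be non-negative integers.

import numpy as np

def morton_code_3d(x, y, z, bits=16):
    # Interleave the bits of (x, y, z) into a single Morton (Z-order) key.
    code = np.zeros_like(x, dtype=np.uint64)
    for b in range(bits):
        code |= ((x >> b) & 1).astype(np.uint64) << np.uint64(3 * b)
        code |= ((y >> b) & 1).astype(np.uint64) << np.uint64(3 * b + 1)
        code |= ((z >> b) & 1).astype(np.uint64) << np.uint64(3 * b + 2)
    return code

def sfc_grouping(voxel_xyz, group_size=64, shift=0):
    # Sort voxels along the (optionally shifted) Morton curve and split the
    # sequence into consecutive groups of (almost) equal size.
    xyz = voxel_xyz + shift
    keys = morton_code_3d(xyz[:, 0], xyz[:, 1], xyz[:, 2])
    order = np.argsort(keys)
    return np.array_split(order, max(1, len(order) // group_size))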
An advantage of the SFC grouping method is that it ensures each group contains a similar number of voxels. In contrast, previous grouping methods such as window-based grouping or K-Means clustering grouping can result in unbalanced voxel distributions among groups. One example is shown in Figure <ref>.
§.§.§ Complexity analysis for sequential processing
The complexity of sequential processing with the vanilla Transformer is analyzed here. Let N denote the number of all voxels, and G the number of groups. For any grouping method, X is a random variable indicating the number of voxels in a group. If the number of groups G is large enough, we obviously have E(X) ≈ X_1 + X_2 + ⋯ + X_G/G = N/G.
The vanilla Transformer has quadratic complexity in the number of tokens in one group, O(X^2). If the vanilla Transformer processes all groups sequentially, the complexity is O(X_1^2 + X_2^2 + ⋯ + X_G^2). With the Transformer, the expected complexity of any grouping method is O_any = E(X_1^2 + X_2^2 + ⋯ + X_G^2) = G · E(X^2). The SFC grouping method guarantees that all groups contain almost the same number of voxels, N/G, so the expected complexity of SFC grouping is O_SFC = G · (N/G)^2 = G · E^2(X). For any random variable X, it is not hard to prove that E(X^2) = Var(X) + E^2(X), where Var denotes the variance. The complexity difference is shown in Equation <ref>
O_any - O_SFC = G · E(X^2) - G · E^2(X)
= G · Var(X) + G · E^2(X) - G · E^2(X)
= G · Var(X) ⩾ 0.
From Equation <ref>, it can be observed that any grouping method has equal or higher complexity than the SFC grouping method. The more unbalanced the grouping strategy is, the higher the expected complexity of the downstream vanilla Transformer module.
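This inequality can also be checked numerically; the following small sketch (illustrative values only, not from the paper's experiments) compares the cost Σ_g X_g^2 of a perfectly balanced split with that of an arbitrary unbalanced split of the same N voxels.

import numpy as np

rng = np.random.default_rng(0)
N, G = 100_000, 1_000
balanced = np.full(G, N // G)                          # SFC-style, equal-sized groups
p = rng.dirichlet(np.ones(G))                          # an arbitrary unbalanced split
unbalanced = rng.multinomial(N, p)
print((balanced ** 2).sum(), (unbalanced ** 2).sum())  # the balanced cost is the smaller one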
§.§.§ Complexity analysis for parallel processing
Instead of processing the groups sequentially with the Transformer, parallel processing can be more efficient on a GPU. In this parallel case, the voxels in each group are padded to match the maximum number of voxels over all groups.
Let M denote the maximum number of voxels in a group. The complexity of the vanilla Transformer now is GM^2. Using SFC grouping, M_SFC≈N/G, and the complexity is approximately only O_SFC = GM_SFC^2 ≈ GN^2/G^2 = N^2/G. Compared to the method without grouping, the complexity is evidently reduced from N^2 to N^2/G. From Figure <ref> it can be observed that, for other grouping methods such as the window-based method <cit.>, the maximum number of voxels in a group satisfies M_Window≫ M_SFC, so the downstream Transformer complexity is also much larger than that of the SFC grouping method: GM_Window^2 ≫ GM_SFC^2 ≈N^2/G.
Note that the window-based method SST <cit.> is reported to use a large amount of GPU memory in the 3D object detection task. In the LiDAR semantic segmentation task, which requires more granular information with a smaller voxel size and a larger number of voxels, the window-based grouping method is not feasible.
§.§ Linear Transformer background
The Scaled Dot Product Attention is one of the key properties of the Transformer <cit.> model. It computes the dot product of the queries with all the keys and applies a softmax function to normalize the attention weights for each query-key pair.
Let x ∈ℝ ^ N × C denote a sequence of N tokens with feature dimension C. The input sequence x can be projected by three learnable matrices W_Q ∈ℝ ^ C × D, W_K ∈ℝ ^ C × D and W_V ∈ℝ ^ C × D to the corresponding matrices Q, K and V as follows:
Q = xW_Q
K = xW_K
V = xW_V.
The output matrix O ∈ℝ ^ N × D of attention module can be computed as:
O = softmax(QK^T/√(D))V.
It is not hard to prove that the Scaled Dot Product Attention has space and time complexity O(N^2D), which prohibits scaling up when many tokens are present. By naively removing the softmax operator in Equation <ref> and rewriting it as Equation <ref>
O = QK^T/√(D) V = QK^TV/√(D) = Q(K^TV)/√(D),
the new form O = Q(K^TV)/√(D) has space and time complexity of only O(ND^2). If N ≫ D, the complexity of Equation <ref> is O(N). The complexities of the vanilla self-attention and the linearized self-attention are further illustrated in Figure <ref> <cit.>.
However, directly removing the softmax function causes the attention matrix elements to be neither guaranteed positive nor normalized.
We use M_i here to represent the ith-row of a general matrix M. Equation <ref> can be generally rewritten as:
O_i = ∑_j=1^NSim(Q_i, K_j)/∑_j=1^N Sim(Q_i, K_j) V_j.
Here Sim(Q_i, K_j) indicates the similarity of query Q_i and key K_j. In Equation <ref>, the similarity of query Q_i and key K_j is their exponentiated scaled dot product.
Previous work like Linear Transformer <cit.> uses kernel function ϕ(x) = elu(x) + 1 to approximate the softmax operator, where elu(x) denotes the exponential linear unit <cit.> activation function. The complete attention function is
O^'_i = ∑_j=1^Nϕ(Q_i)ϕ(K_j^T)V_j/∑_j=1^Nϕ(Q_i)ϕ(K_j^T)
= ϕ(Q_i)∑_j=1^Nϕ(K_j^T)V_j/ϕ(Q_i)∑_j=1^Nϕ(K_j^T).
Like Equation <ref>, the complexity of Equation <ref> is O(N).
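For concreteness, a minimal PyTorch sketch of this kernel-based linear attention is shown below (an illustrative re-implementation, not code from <cit.>); the inputs Q, K, V are (N, D) tensors and the small eps added to the denominator is a numerical-stability choice of ours.

import torch
import torch.nn.functional as F

def linear_attention(Q, K, V, eps=1e-6):
    # O_i = phi(Q_i) (sum_j phi(K_j)^T V_j) / (phi(Q_i) sum_j phi(K_j)^T), with phi(x) = elu(x) + 1
    phi_q = F.elu(Q) + 1                      # (N, D), positive feature map
    phi_k = F.elu(K) + 1                      # (N, D)
    kv = phi_k.transpose(0, 1) @ V            # (D, D), computed once for all queries
    z = phi_k.sum(dim=0)                      # (D,)  normaliser term sum_j phi(K_j)
    return (phi_q @ kv) / (phi_q @ z + eps).unsqueeze(-1)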
Instead of approximating the softmax function, CosFormer <cit.> is based on two properties of the attention matrix: non-negativeness and a non-linear re-weighting ability. It proposes a decomposed similarity function with linear complexity as
Sim(Q_i, K_j) = Q^'_i K^'T_j cos(π/2×(i-j)/M)
Q_i^' = ReLU(Q_i)
K_j^' = ReLU(K_j).
Here, if N denotes the number of all tokens, M is a hyper-parameter satisfying M ≥ N. The cos(π/2×(i-j)/M) term encodes the index-space distance between Q_i and K_j.
§.§ Distance cosine linear Transformer
In the architecture shown in Figure <ref>, we use a novel linear Transformer, the Distance Cosine Linear Transformer (DISCO), to build a global receptive field for the voxels. In the vanilla Transformer, the similarity between Q_i and K_j is the corresponding dot product, Sim(Q_i, K_j) = Q_i · K_j. The softmax operator is then used to reweigh and normalize the similarity as attention. In addition to the dot-product similarity, the cosine similarity of vectors is also widely used <cit.>. Instead of the dot-product similarity and cosine similarity, we use the distance between vectors as a similarity measure. Figure <ref> shows an example in which distance is a better similarity measure than the dot product and cosine similarity in terms of the influence of vector magnitude. Cosine similarity does not consider the magnitude of vectors, and the dot-product similarity is not robust when one vector's magnitude is extremely large.
Commonly used distance measures are the 1-norm (the taxicab norm) and the 2-norm (the Euclidean norm). Let D_ij denote the 1-norm distance between the vectors Q_i and K_j. If Q_i = (Q_i1, Q_i2, ⋯, Q_in) and K_j = (K_j1, K_j2, ⋯, K_jn), the 1-norm distance D_ij is
D_ij = Q_i - K_j_1 = ∑_p=1^n|Q_ip - K_jp|.
The similarity of the vectors Q_i and K_j can be defined as the function Sim(Q_i, K_j) = f(D_ij). The function f, which maps the distance to the similarity, must satisfy at least the following two requirements:
* The function f must be a monotonically non-increasing function.
* The output of f(D_ij) must be positive, as a negative similarity is meaningless. Negative attention also destabilizes Transformer performance <cit.>.
For the Transformer to be decomposable and have linear complexity, the third important requirement is that the function f must be decomposable, which means
Sim(Q_i, K_j) = f(D(Q_i, K_j)) = ϕ(Q_i)φ(K_j).
To satisfy the three requirements above, we propose Equation <ref> as the map from distance to similarity.
Sim(Q_i, K_j) = ∑_p=1^ncos(π/4|Q̂_ip - K̂_jp|)
Q̂_ip = Tanh(Q_ip)
K̂_ip = Tanh(K_jp)
Tanh(x) = e^x - e^-x/e^x + e^-x
For any x ∈ℝ, -1 < Tanh(x) < 1, and the following relations always hold.
-2 < Q̂_ip - K̂_jp < 2
0 ≤π/4|Q̂_ip - K̂_jp| < π/2
Note that Tanh(x) is a monotonically increasing function, and cos(x) is a monotonically decreasing function on the domain x ∈ [0, π/2), so the map function f(x) = cos(π/4Tanh(x)), from distance to similarity, is monotonically decreasing. The first listed requirement is satisfied.
As π/4|Q̂_ip - K̂_jp|∈[0, π/2), Sim(Q_i, K_j) = ∑_p=1^ncos(π/4|Q̂_ip - K̂_jp|) > 0, and the second listed requirement is satisfied.
Because cos(x) is an even function, the absolute value symbols in Equations <ref> and <ref> can be removed. As a result, the similarity function can be decomposed. Let Q_ip^cos = cos(π/4(Q̂_ip)), K_jp^cos = cos(π/4(K̂_jp)), Q_ip^sin = sin(π/4(Q̂_ip)), K_jp^sin = sin(π/4(K̂_jp)); the decomposed similarity function is derived in Equation <ref>. The third requirement, decomposability, is now satisfied.
Sim(Q_i, K_j) = ∑_p=1^ncos(π/4(Q̂_ip - K̂_jp))
= ∑_p=1^ncos(π/4(Q̂_ip))cos(π/4(K̂_jp))
+ ∑_p=1^nsin(π/4(Q̂_ip))sin(π/4(K̂_jp))
= ∑_p=1^nQ_ip^cosK_jp^cos + ∑_p=1^nQ_ip^sinK_jp^sin
Now, the Q(Disco(K, C)) block in the architecture of Figure <ref> is defined as in Equation <ref>.
O_i = ∑_j=1^NSim(Q_i, K_j)/∑_j=1^N Sim(Q_i, K_j) V_j
= ∑_j=1^N∑_p=1^ncos(π/4(Q̂_ip - K̂_jp)))/∑_j=1^N∑_p=1^ncos(π/4(Q̂_ip - K̂_jp)) V_j
= ∑_j=1^N∑_p=1^nQ_ip^cosK_jp^cos + ∑_p=1^nQ_ip^sinK_jp^sin/∑_j=1^N∑_p=1^nQ_ip^cosK_jp^cos + ∑_j=1^N∑_p=1^nQ_ip^sinK_jp^sinV_j
= ∑_p=1^nQ_ip^cos(∑_j=1^NK_jp^cosV_j) + ∑_p=1^nQ_ip^sin(∑_j=1^NK_jp^sinV_j)/∑_j=1^N∑_p=1^nQ_ip^cosK_jp^cos + ∑_j=1^N∑_p=1^nQ_ip^sinK_jp^sin
def= Q_i(Disco(K, C))
Equation <ref> is the proposed attention function; it is an extension of the general Equations <ref> and <ref> with only linear complexity O(N).
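A minimal PyTorch sketch of Equation <ref> is given below (an illustrative re-implementation of the proposed DISCO attention, not the authors' released code); Q, K and V are (N, D) tensors and eps is a numerical-stability constant we add.

import math
import torch

def disco_attention(Q, K, V, eps=1e-6):
    # O(N) attention with Sim(Q_i, K_j) = sum_p cos(pi/4 * (tanh(Q_ip) - tanh(K_jp)))
    q = (math.pi / 4) * torch.tanh(Q)          # entries bounded to (-pi/4, pi/4)
    k = (math.pi / 4) * torch.tanh(K)
    q_cos, q_sin = torch.cos(q), torch.sin(q)  # decomposed query features, (N, D)
    k_cos, k_sin = torch.cos(k), torch.sin(k)  # decomposed key features,   (N, D)
    num = q_cos @ (k_cos.transpose(0, 1) @ V) + q_sin @ (k_sin.transpose(0, 1) @ V)
    den = q_cos @ k_cos.sum(dim=0) + q_sin @ k_sin.sum(dim=0)   # (N,), strictly positive
    return num / (den + eps).unsqueeze(-1)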
In our architecture in Figure <ref>, the SFC grouping Transformer is used to build the receptive field within each group. Although the shifted grouping method is used, the receptive field is only expanded to a limited extent. With the proposed DISCO module, each voxel obtains a true global receptive field with only linear complexity O(N).
§.§ Channel attention module and decoder
Let x_SFC∈ℝ ^ N × C denote the output of the SFC-Grouping Transformer module, and x_DISCO∈ℝ ^ N × C the output of the DISCO module. Here N is the number of voxels and C is the number of channels.
x_SFC and x_DISCO are first concatenated along the channel dimension. The concatenated output is denoted as x ∈ℝ ^ N × 2C.
Following a similar idea to <cit.>, a channel descriptor w ∈ℝ ^ 2C is calculated by squeezing the global spatial information.
Let x_ij denote the j-th channel of the i-th voxel, and let a ∈ℝ ^ 2C denote the max-pooling output of x. For any j, a_j = max({x_ij | i ∈{1, 2, 3 ... N}}). The descriptor w ∈ℝ ^ 2C is then calculated as the softmax output of a.
The channel attention module output is denoted as o ∈ℝ ^ N × 2C. For any o_ij, it is the product of the input x_ij and the descriptor w_j like
o_ij = x_ijw_j.
The channel attention module output is then decoded by a simple multi-layer perceptron network, and the label of each point is further predicted.
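A minimal PyTorch sketch of this channel attention module follows (illustrative; the branch outputs x_sfc and x_disco are (N, C) tensors).

import torch

def channel_attention(x_sfc, x_disco):
    x = torch.cat([x_sfc, x_disco], dim=-1)   # (N, 2C)
    a = x.max(dim=0).values                   # per-channel max-pool over all voxels, (2C,)
    w = torch.softmax(a, dim=0)               # channel descriptor
    return x * w                              # broadcast: o_ij = x_ij * w_j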
§ EXPERIMENTS
In this section, we first present the experimental results. The model is trained and evaluated on two large-scale LiDAR-based semantic segmentation datasets, SemanticKITTI <cit.> and nuScenes <cit.>. The results are then compared with those of other state-of-the-art approaches and the performance differences are analyzed. Finally, a series of ablation studies is conducted to validate the proposed modules.
§.§ Datasets and evaluation metric
§.§.§ SemanticKITTI
SemanticKITTI is a large-scale LiDAR-based semantic segmentation dataset. The point cloud data is derived from the KITTI <cit.> Vision Odometry Benchmark. Point-wise annotations are labeled for the complete 360° field-of-view of the employed Velodyne-HDLE64 LiDAR. This dataset consists of 22 sequences. Sequences 00-07, 09, and 10 are commonly used for training, and sequence 08 is used for validation. The remaining sequences 11-21 are used as the test set. After officially merging similar classes and ignoring classes with too few points, 19 classes are evaluated in the single-scan perception task.
§.§.§ nuScenes
The nuScenes dataset is a multimodal dataset for autonomous driving. It comprises 1000 scenes of 20-second duration captured with a 32-beam LiDAR sensor. This dataset is officially split into a training set and a validation set. In our work, the model is trained on the training set and evaluated on the validation set. Similarly to SemanticKITTI, classes with too few points are ignored and similar classes are merged during training and evaluation. In total, 16 classes are trained and evaluated in our approach.
§.§.§ Evaluation metric
The mean intersection-over-union (mIoU) over all classes is widely used as the evaluation metric. It is formulated as
mIoU = 1/C∑_i=1^CTP_i/TP_i + FP_i + FN_i.
In Equation <ref>, C is the number of classes, and TP_i, FP_i, FN_i are the true positive, false positive, and false negative predictions for class i.
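For reference, a minimal sketch of this metric computed from flat integer label arrays is shown below (illustrative; the handling of classes absent from both prediction and ground truth is our own choice).

import numpy as np

def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        if tp + fp + fn > 0:                  # skip classes absent from both arrays
            ious.append(tp / (tp + fp + fn))
    return float(np.mean(ious))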
§.§ Results on SemanticKITTI
In this section, our approach is compared with the other LiDAR-only state-of-the-art approaches, including point-based, projection-based and voxel-based methods. The results on the SemanticKITTI test set are shown in Table <ref>.
Compared with all the other point-based <cit.>, projection-based <cit.> and voxel-based methods <cit.>, our approach achieves a significant performance improvement in terms of mIoU.
Note that the current voxel-based methods are actually extensions of sparse 3D convolution. JS3C-Net <cit.> uses sparse 3D convolution and takes advantage of multi-frame information. SPVNAS <cit.> uses a neural architecture search (NAS) method to find the best sparse 3D convolution network architecture. Cylinder3D <cit.> uses cylindrical partitions instead of normal 3D voxels and processes these partitions with vanilla sparse 3D convolution.
However, our method can be used as an alternative to the sparse 3D convolution. Compared to the SparseConv baseline <cit.>, our method has a 7.9% absolute mIoU improvement.
Other techniques, such as multi-frame information, NAS, cylindrical partitions and image information <cit.>, can be further applied to our method in future work to improve its performance. Our method uses only normal 3D voxels and a single frame, which makes it LiDAR-independent and more compatible with the other state-of-the-art methods <cit.> in multi-task learning such as 3D object detection.
§.§ Results on nuScenes
In this section, our method is compared with the other methods on the nuScenes validation set. The result is shown in Table <ref>. Our proposed LEST model performs better than the other methods, especially on small objects such as bicycle, motorcycle and pedestrian. Compared with SemanticKITTI, which is collected with a 64-beam LiDAR, nuScenes with a 32-beam LiDAR has fewer points per scan. As a result, our model LEST can have a larger receptive field in the SFC grouping branch with the same limited GPU resources.
§.§ Qualitative results
Figure <ref> shows the qualitative results of our model's prediction and the ground truth. Unlike some works <cit.>, our method is fully sparse and can have an unlimited range in training and prediction. As a result, even the unlabeled points can be classified. The qualitative results also show one limitation of our method: nearby points are sometimes not classified consistently. The reason is the geometric information loss caused by the SFC, even though we use the shifted SFC and DISCO.
§.§ Ablation studies
In this section, all the proposed components will be validated. The training dataset is the nuScenes training set and the validation dataset is the nuScenes validation set.
Our proposed LEST model consists of the shifted SFC-Grouping Transformer, the DISCO module and a channel attention module. The shifted SFC-Grouping is validated by removing the shifted SFC-Grouping module completely or using only a single SFC-Grouping. The DISCO module is validated by removing it or replacing it with other state-of-the-art linear Transformers such as CosFormer <cit.> and the kernel-function-based Linear Transformer <cit.>. The channel attention module is validated by removing it and directly concatenating the features from the multiple branches. The results are listed in Table <ref>.
From the 1st row of the ablation experiments in Table <ref>, it can be observed that removing the SFC-Grouping branch and using only the DISCO module gives poor performance. One reason is the Low-Rank Bottleneck <cit.> of the Transformer. In the vanilla Transformer, let x ∈ℝ ^ N × C denote N tokens with features dimension C, and the learnable matrices are W_Q ∈ℝ ^ C × D, W_K ∈ℝ ^ C × D. In <cit.> it is proved that, for any x and an arbitrary positive column stochastic matrix P∈ℝ ^ N × N, if D ⩾ N, there always exist matrices W_Q, W_K satisfying softmax((xW_Q)(xW_K)^T/√(D)) = P. If D < N, there exist X and P such that this equation does not hold for any W_Q and W_K. In the linear Transformer scenario, D ≪ N, and the low-rank problem is worse.
The 2nd row in Table <ref> shows that the shifted grouping method performs better than using only a single grouping. The reason is that the receptive field is expanded by the shifted method. The 3rd-5th rows show that our proposed linear Transformer, DISCO, performs better than the other state-of-the-art linear Transformers <cit.> <cit.>. The 6th row shows that the channel attention module is necessary to aggregate the features from the multiple branches.
§ CONCLUSION AND OUTLOOK
In this paper, we propose a novel pure Transformer architecture, LEST, for LiDAR-based semantic segmentation tasks. LEST consists of the SFC-Grouping module and the DISCO module, a distance-based Transformer with linear complexity. Compared to the other semantic segmentation models, LEST performs impressively and can be regarded as an alternative to the widely used sparse 3D convolution.
With the proposed pure Transformer architecture, we would like to reduce the domain gap between 3D computer vision and the natural language processing (NLP) field. The proposed linear Transformer, DISCO, can also be used and evaluated in the NLP field in future work, especially for long-range sequence tasks.
1
IEEEtran
qi2017pointnetQi, C., Su, H., Mo, K. & Guibas, L. Pointnet: Deep learning on point sets for 3d classification and segmentation. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 652-660 (2017)
lang2019pointpillarsLang, A., Vora, S., Caesar, H., Zhou, L., Yang, J. & Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 12697-12705 (2019)
yan2018secondYan, Y., Mao, Y. & Li, B. Second: Sparsely embedded convolutional detection. Sensors. 18, 3337 (2018)
Zhu_2021_cylinder3dZhu, X., Zhou, H., Wang, T., Hong, F., Ma, Y., Li, W., Li, H. & Lin, D. Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition (CVPR). pp. 9939-9948 (2021,6)
spvnasTang, H., Liu, Z., Zhao, S., Lin, Y., Lin, J., Wang, H. & Han, S. Searching efficient 3d architectures with sparse point-voxel convolution. European Conference On Computer Vision. pp. 685-702 (2020)
vanilla_transformerVaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, Ł. & Polosukhin, I. Attention is all you need. Advances In Neural Information Processing Systems. 30 (2017)
liu2021swinTransformerLiu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S. & Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 10012-10022 (2021)
zhen2022cosformerQin, Z., Sun, W., Deng, H., Li, D., Wei, Y., Lv, B., Yan, J., Kong, L. & Zhong, Y. cosFormer: Rethinking Softmax In Attention. International Conference On Learning Representations. (2022), https://openreview.net/forum?id=Bl8CQrx2Up4
fan2022_SSTFan, L., Pang, Z., Zhang, T., Wang, Y., Zhao, H., Wang, F., Wang, N. & Zhang, Z. Embracing single stride 3d object detector with sparse transformer. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 8458-8468 (2022)
Luo2022A03_mvpNEtChuanyu, L., Xiaohan, L., Nuo, C., Han, L., Shengguang, L. & Pu, L. MVP-Net: Multiple View Pointwise Semantic Segmentation of Large-Scale Point Clouds. Journal Of WSCG. 30 pp. 1-8 (2022)
wu2019squeezesegv2Wu, B., Zhou, X., Zhao, S., Yue, X. & Keutzer, K. Squeezesegv2: Improved model structure and unsupervised domain adaptation for road-object segmentation from a lidar point cloud. 2019 International Conference On Robotics And Automation (ICRA). pp. 4376-4382 (2019)
milioto2019rangenet++Milioto, A., Vizzo, I., Behley, J. & Stachniss, C. Rangenet++: Fast and accurate lidar semantic segmentation. 2019 IEEE/RSJ International Conference On Intelligent Robots And Systems (IROS). pp. 4213-4220 (2019)
zhang2020polarnetZhang, Y., Zhou, Z., David, P., Yue, X., Xi, Z., Gong, B. & Foroosh, H. Polarnet: An improved grid representation for online lidar point clouds semantic segmentation. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 9601-9610 (2020)
dosovitskiy2021an_vitDosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J. & Houlsby, N. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. International Conference On Learning Representations. (2021), https://openreview.net/forum?id=YicbFdNTTy
mao2021voxel_votrMao, J., Xue, Y., Niu, M., Bai, H., Feng, J., Liang, X., Xu, H. & Xu, C. Voxel transformer for 3d object detection. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 3164-3173 (2021)
katharopoulos2020linear_transformerKatharopoulos, A., Vyas, A., Pappas, N. & Fleuret, F. Transformers are rnns: Fast autoregressive transformers with linear attention. International Conference On Machine Learning. pp. 5156-5165 (2020)
eluClevert, D., Unterthiner, T. & Hochreiter, S. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). 4th International Conference On Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. (2016)
randLAHu, Q., Yang, B., Xie, L., Rosa, S., Guo, Y., Wang, Z., Trigoni, N. & Markham, A. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. IEEE/CVF Conference On Computer Vision And Pattern Recognition (CVPR). (2020,6)
Thabet_2020_CVPR_Workshops_mortonThabet, A., Alwassel, H. & Ghanem, B. Self-Supervised Learning of Local Features in 3D Point Clouds. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition (CVPR) Workshops. (2020,6)
morton1966computerMorton, G. A Computer Oriented Geodetic Data Base and a New Technique in File Sequencing. (International Business Machines Company,1966), https://books.google.de/books?id=9FFdHAAACAAJ
chaneel_attentionHu, J., Shen, L. & Sun, G. Squeeze-and-excitation networks. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 7132-7141 (2018)
zhou2021deepvitZhou, D., Kang, B., Jin, X., Yang, L., Lian, X., Hou, Q. & Feng, J. DeepViT: Towards Deeper Vision Transformer. ArXiv Preprint ArXiv:2103.11886. (2021)
behley2019semantickittiBehley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C. & Gall, J. Semantickitti: A dataset for semantic scene understanding of lidar sequences. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 9297-9307 (2019)
nuscenes_odCaesar, H., Bankiti, V., Lang, A., Vora, S., Liong, V., Xu, Q., Krishnan, A., Pan, Y., Baldan, G. & Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. CVPR. (2020)
fong2022nuscenes_panopticFong, W., Mohan, R., Hurtado, J., Zhou, L., Caesar, H., Beijbom, O. & Valada, A. Panoptic nuscenes: A large-scale benchmark for lidar panoptic segmentation and tracking. IEEE Robotics And Automation Letters. 7, 3795-3802 (2022)
geiger2012kittiGeiger, A., Lenz, P. & Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. 2012 IEEE Conference On Computer Vision And Pattern Recognition. pp. 3354-3361 (2012)
qi2017pointnet++Qi, C., Yi, L., Su, H. & Guibas, L. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances In Neural Information Processing Systems. 30 (2017)
thomas2019kpconvThomas, H., Qi, C., Deschaud, J., Marcotegui, B., Goulette, F. & Guibas, L. Kpconv: Flexible and deformable convolution for point clouds. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 6411-6420 (2019)
xu2020squeezesegv3Xu, C., Wu, B., Wang, Z., Zhan, W., Vajda, P., Keutzer, K. & Tomizuka, M. Squeezesegv3: Spatially-adaptive convolution for efficient point-cloud segmentation. Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII 16. pp. 1-19 (2020)
yin2021center_pointYin, T., Zhou, X. & Krahenbuhl, P. Center-based 3d object detection and tracking. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 11784-11793 (2021)
cortinhal2020salsanextCortinhal, T., Tzelepis, G. & Erdal Aksoy, E. Salsanext: Fast, uncertainty-aware semantic segmentation of lidar point clouds. Advances In Visual Computing: 15th International Symposium, ISVC 2020, San Diego, CA, USA, October 5–7, 2020, Proceedings, Part II 15. pp. 207-222 (2020)
zhuang2021pmfZhuang, Z., Li, R., Jia, K., Wang, Q., Li, Y. & Tan, M. Perception-aware multi-sensor fusion for 3d lidar semantic segmentation. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 16280-16290 (2021)
3DSemanticSegmentationWithSubmanifoldSparseConvNetGraham, B., Engelcke, M. & Maaten, L. 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. CVPR. (2018)
yan2021JS3C-NetYan, X., Gao, J., Li, J., Zhang, R., Li, Z., Huang, R. & Cui, S. Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion. Proceedings Of The AAAI Conference On Artificial Intelligence. 35, 3101-3109 (2021)
bhojanapalli2020low_rank_bottleneckBhojanapalli, S., Yun, C., Rawat, A., Reddi, S. & Kumar, S. Low-rank bottleneck in multi-head attention models. International Conference On Machine Learning. pp. 864-873 (2020)
dong2021attention_is_not_all_you_needDong, Y., Cordonnier, J. & Loukas, A. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. International Conference On Machine Learning. pp. 2793-2803 (2021)
wang2021pvtWang, W., Xie, E., Li, X., Fan, D., Song, K., Liang, D., Lu, T., Luo, P. & Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 568-578 (2021)
lu2021softLu, J., Yao, J., Zhang, J., Zhu, X., Xu, H., Gao, W., Xu, C., Xiang, T. & Zhang, L. Soft: Softmax-free transformer with linear complexity. Advances In Neural Information Processing Systems. 34 pp. 21297-21309 (2021)
tay2021lraTay, Y., Dehghani, M., Abnar, S., Shen, Y., Bahri, D., Pham, P., Rao, J., Yang, L., Ruder, S. & Metzler, D. Long Range Arena : A Benchmark for Efficient Transformers . International Conference On Learning Representations. (2021), https://openreview.net/forum?id=qVyeW-grC2k
|
http://arxiv.org/abs/2307.05673v1 | 20230711180002 | On the Bimetric Starobinsky Model | [
"Ioannis D. Gialamas",
"Kyriakos Tamvakis"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-th"
] |
|
http://arxiv.org/abs/2307.04982v1 | 20230711024151 | Ultra Electron Density Sensitivity for Surface Plasmons | [
"Wei Liu",
"Meng Li",
"Yu Niu",
"Ziren Luo"
] | physics.optics | [
"physics.optics",
"cond-mat.mes-hall"
] |
These authors contributed equally to this work.
Center for Gravitational Wave Experiment, Institute of Mechanics, Chinese Academy of Science, 15 Bei-si-huan West Road, Beijing, 100190, China.
These authors contributed equally to this work.
Key Laboratory of Analytical Chemistry for Life Science of Shaanxi Province, School of Chemistry and Chemical Engineering, Shaanxi Normal University, Xi'an, 710062, China.
[email protected]
Center for Gravitational Wave Experiment, Institute of Mechanics, Chinese Academy of Science, 15 Bei-si-huan West Road, Beijing, 100190, China.
[email protected]
Center for Gravitational Wave Experiment, Institute of Mechanics, Chinese Academy of Science, 15 Bei-si-huan West Road, Beijing, 100190, China.
We investigate surface plasmons from a solid-state standpoint and highlight their ultra electron density sensitivity. When a surface plasmon is excited on a planar gold film by an evanescent wave from 625 nm light, only a minute fraction of the surface electron density, approximately one thousandth, participates in the process. By introducing a noise-depressed surface potential modulation, we reduce the electron density to the order of 10□, enabling electron sensitivity on the order of 0.1 e. As a practical application, we develop a surface plasmon resonance imaging method capable of detecting single anions in solution at a concentration of 1.
Ultra Electron Density Sensitivity for Surface Plasmons
Ziren Luo
August 12, 2023
=======================================================
Label-free single-molecule imaging techniques have come into the spotlight because they can not only yield insights into dynamic molecular interactions that are fundamentally inaccessible to fluorescence-based methodologies but also enjoy multiplexing capability <cit.>. Several techniques based on light scattering successfully cross the barrier posed by the large mismatch between the size of a molecule and the diffraction limit of visible light. Interferometric scattering microscopy (iSCAT), a recently developed technique <cit.> that collects the photons scattered by a molecule and reduces the background scattering, is able to provide the molecular weight of the molecule in solution <cit.>. By measuring the inelastic light-scattering process, surface-enhanced Raman scattering (SERS) <cit.> and tip-enhanced Raman spectroscopy (TERS) <cit.> can reveal the Raman "fingerprint" of the target molecule, since a high degree of structural information can be extracted from the SERS/TERS vibrational spectrum. Although these scattering-based methods have achieved remarkable success, the scattering efficiency is restricted by the effective scattering cross section of the molecule. The optical resolution of iSCAT is claimed to be around 10 at room temperature in solution, while that of TERS is about 315 in vacuum. Thus, it remains a challenge to observe single molecules beyond the restriction of their size under conditions where the target molecules are functionally active.
Besides molecular size, other properties come into the picture of single-molecule investigation, among which reactions occurring at the single-molecule level are at the core. In chemical reactions, molecules transfer or share electrons at specific electronic states: the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). Holding electrons at the highest energy, the HOMO can transfer electrons to other molecules, while the LUMO remains empty and capable of receiving electrons. Therefore, a possible strategy to bypass the scattering cross section restriction is to measure the electron density variations induced by the electronic states of the molecules at the sensing surface.
Under the total internal reflection condition, the electric field of an evanescent wave can excite the collective oscillation of surface electrons, which is called surface plasmon resonance (SPR). Conventionally, a biosensor based on SPR is described by a four-layer model governed by Snell's law: glass substrate/ plasmonic metal film/ molecular film/ buffer medium. Billions of molecules adsorbing at the sensing surface change the thickness of the molecular film, which can be measured by an SPR biosensor. However, when the concentration of the target molecules comes down to the single-molecule level, less than 1 <cit.>, the model fails because the sparsely adsorbed molecules can hardly be treated as a film. As a result, the detection limit of an SPR biosensor has been considered far from the requirement of single-molecule detection. A typical refractive index detection limit for a phase-sensitive SPR scheme is reported to date on the order of 10^-8 <cit.>, while the refractive index variation from 1 ions is estimated as 10^-14 in solution <cit.>. To exceed this limit, plasmon-enhanced nano-materials <cit.>, meta-surfaces <cit.>, and whispering-gallery modes <cit.> have been introduced to increase the sensitivity of the biosensor. Although these micro/nano-structure-based solutions reveal exciting sensitivity, their applications have some fundamental restrictions. For one thing, the signal amplitude of these methods crucially depends on the location where the molecule binds to the structure, while these binding events are beyond control. For another, the large-scale and reproducible production of the complicated sensing unit imposes a technical limitation. Here, we investigate surface plasmons from a solid-state perspective and demonstrate the ultra electron sensitivity of a planar SPR film for measuring surface electron density variations. As an application, we have developed an energy level aligned SPR imaging (ELA-SPRi) system to detect the adsorption and desorption of single anions in solution.
The sensitivity of SPR to surface charge has been reported since the 1970s <cit.> and used to detect charged particles at the sensing surface <cit.>. This sensitivity is commonly attributed to the fact that the dielectric function of the metal is determined by its electron density and that an applied bias potential on the surface can modulate this density <cit.>. However, this understanding overlooks the plasmonic side of the story. From the solid-state perspective, the collective oscillation of surface electrons in Fig. <ref>(a), with the surface plasmon wave-vector k_sp, occurs near the Fermi surface of the metal. In other words, the electron density involved in the surface plasmon, n_sp, is only a fraction of the background surface electron density, n_s, in momentum space, as shown in Fig. <ref>(b), and can be given by
n_sp = k_sp/k_Fn_s
where k_F is the Fermi wave-vector of gold, around 1.21×10^4. The resonance condition, the match between the wave-vector of the evanescent wave, k_x, and that of the surface plasmon, k_sp, gives the estimate k_sp/k_F ≈ 10^-3, since k_sp = k_x≈ 2π/λ≈ 10 for visible light with the wavelength λ = 0.625.
Eq. <ref> suggests that the SPR signal can serve as an indicator of the surface electron density. At the noble-metal surface, the surface electron density follows a two-dimensional electron gas (2DEG) model given by <cit.>:
n_s = n_2D = m/πħ^2k_BTln( 1+expE_f/k_BT)
where m is the electron mass, ħ Planck's constant, k_B Boltzmann constant, T the temperature and E_f the Fermi level of the metal.
According to Eqs. <ref> and <ref>, the Fermi level of the metal can modulate the SPR signal. To modulate the Fermi level of a planar SPR film, we use a phase-sensitive SPR imaging system combined with an external signal generator that can apply a bias potential to the sensing surface as required, as illustrated in Fig. <ref>(c). The sensing surface of a 48 nm gold film is divided into two insulated cells: one is the working cell connected with the signal generator, and the other is the reference cell recording the light power fluctuation during the measurement. A region of interest (ROI) is selected in each cell to obtain the signal intensity of that region. In Fig. <ref>(c), ROI 1 is in the working cell and ROI 2 is in the reference cell. We use the difference of the intensities from the two regions as the measured signal, in which the noise from the light power fluctuation is depressed.
Fig. <ref>(d) shows an SPR signal under a linear bias potential scan from -0.5 to 1 at a rate of 0.1 in a 30 potassium hydroxide (KOH) electrolyte, together with the theoretical calculation of n_sp at different Fermi levels according to Eqs. <ref> and <ref>. The agreement, especially from -0.5 to 0.4, supports our prediction that the SPR signal can be used to indicate the surface electron density. The deviation of the signal from the theoretical calculation is due to the oxidation of the gold film when the bias potential exceeds 0.5 in solution. It should be noted that our previous conclusion <cit.>, supported by other studies <cit.>, that the SPR signal is proportional to the applied potential still holds, because ln( 1+expE_f/k_BT)≈E_f/k_BT when E_f >> k_BT. For T=298, k_BT is 25.7.
Eq. <ref> also implies the ultra sensitivity of SPR to electron density variations at the sensing surface. SPR reduces the background electron density by the factor k_sp/k_F, approximately 10^-3 in our configuration. As an illustration, the background electron density given by Eq. <ref> is about 4×10^5 □ at E_f=100, and the corresponding n_sp is estimated as 400 □. As a result, the relative signal from one electron increases from 10^-6 to 10^-3 per □ for the SPR measurement.
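These orders of magnitude can be reproduced with a short back-of-the-envelope script (illustrative only; the unit conventions, E_f given in meV and densities quoted per square micrometre, and the value k_sp/k_F = 10^-3 are the assumptions stated above).

import numpy as np

m_e, hbar, k_B, T = 9.109e-31, 1.055e-34, 1.381e-23, 298.0   # SI units
E_f = 100e-3 * 1.602e-19                                      # assumed 100 meV, in joules

n_2d = m_e * k_B * T / (np.pi * hbar**2) * np.log1p(np.exp(E_f / (k_B * T)))  # Eq. (2), per m^2
n_2d_um2 = n_2d * 1e-12                       # ~4e5 electrons per square micrometre
n_sp_um2 = 1e-3 * n_2d_um2                    # Eq. (1) with k_sp/k_F ~ 1e-3, ~4e2 per square micrometre
rho_per_pixel = n_sp_um2 * 300                # ~1e5 plasmon-related electrons per 300-square-micrometre pixel
print(n_2d_um2, n_sp_um2, rho_per_pixel)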
Three technical factors prevent the observation of this ultra electron density sensitivity: (1) the size of the detected spot, (2) the noise, and (3) the condition under which individual molecules transfer their electrons to the sensing surface. In our prism-based SPR imaging system, the minimum detected spot is the area covered by one pixel on the sensing surface, approximately 300 □ (25 × 12). The number of surface plasmon related electrons in this area, ρ, is about 10^5 per pixel, and the expected signal from one electron in the area, 1/ρ, is 10^-6 per pixel, which is still difficult to detect. A method is needed to further reduce these surface plasmon background electrons.
An investigation of the surface plasmon electrons in momentum space under an external bias potential paves the way to this reduction, as shown in Fig. <ref>(a). On the one hand, as we have pointed out, the surface plasmon occurs near the Fermi surface of the metal. On the other hand, an external potential applied to the metal can modulate its Fermi level. In momentum space, the Fermi surface under a potential modulation δ E_f forms a ring containing a certain number of electrons, δ n_s. In vacuum, δ n_s can be estimated from the derivative of Eq. <ref> with respect to E_f; at a solid-liquid interface, it can be estimated by
δ n_s = cδ E_f/e
where c is the capacitance density of the interface. Under a surface plasmon condition, this ring also oscillates with k_sp. Therefore, instead of measuring the total surface plasmon electrons, we focus on the electrons in the area of the ring, δ n_sp. According to Eqs. <ref> and <ref>, we have:
δ n_sp = k_sp/k_Fcδ E_f/e
For a gold electrode, the capacitance density is on the order of 10 □ <cit.>, and the corresponding electron density is approximately 10^4 □ when δ E_f = 100. The number of surface plasmon electrons in response to δ E_f is then estimated as 10 □. Therefore, under δ E_f, the number of measured surface plasmon electrons in a pixel, ρ, is estimated as 10^3, and one electron will induce a 10^-4 signal variation, 1/ρ.
To measure the signal variation under δ E_f, we can apply a bias potential modulation function to the sensing surface. When excess electrons are introduced to the surface, the SPR signal in response to δ E_f will change. In particular, when the applied potential modulation is a trigonometric function, we can convert the SPR signal from the time domain into the frequency domain by the Fourier transform of δ E_f: δn̂_sp= k_sp/k_Fc/eℱ{δ E_f(t)}.
To measure electron density variations on the order of 10^-4 per pixel, noise is the second obstacle, among which the light power fluctuation is dominant. The large field of view of the imaging scheme allows the simultaneous recording of the reflections from both the working cell and the reference cell. Because the reflections from both regions experience similar light power fluctuations during the measurement, as shown in Fig. <ref>(b), the differential signal between the two regions can significantly reduce the effects of these fluctuations. To further improve the signal-to-noise ratio, sinusoidal potential modulations with an AC amplitude of 200 around different DC components at a frequency of 1.1 are introduced to the working cell.
The comparison between the signal recorded in the working cell and the differential signal in Fig. <ref>(c) demonstrates a significant improvement in the signal-to-noise ratio for the latter. The power fluctuations in the recorded signal are measured to be 4× 10^-3, whereas the differential signal fluctuates by less than 10^-3. The modulation signal is barely noticeable in the raw signal but can be clearly distinguished in the differential signal. The corresponding amplitude density spectrum in Fig. <ref>(d) provides a comprehensive noise analysis. Prior to the depression scheme, the noise is on the order of 10^-3 √() within the frequency band from 101, and approaches 10^-2 √() within the low-frequency band from 1e-2. After the differential, the noise is near the signal level, 10^-4 √(), while integrating the modulated signal at 1.1 for one minute amplifies the measured signal to above 10^-3 √(), providing the technique with the ability to analyze electron density variations of less than one electron.
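The extraction of the modulated component can be sketched as a simple digital lock-in on the differential ROI trace (illustrative; the 10 Hz frame rate and the 1.1 Hz drive follow the values quoted in the text, while the function and variable names are ours).

import numpy as np

def modulation_amplitude(diff_signal, fs=10.0, f_mod=1.1):
    # Amplitude of the f_mod component in the working-minus-reference ROI intensity.
    t = np.arange(len(diff_signal)) / fs
    x = diff_signal - np.mean(diff_signal)            # remove the DC background
    i_comp = np.mean(x * np.cos(2 * np.pi * f_mod * t))
    q_comp = np.mean(x * np.sin(2 * np.pi * f_mod * t))
    return 2.0 * np.hypot(i_comp, q_comp)             # peak amplitude at f_mod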
In the frequency-domain, we can estimate the SPR signal, I_AC, by
I_AC = KI_0δρ/ρ√(τ f/2)
where K is a constant determined by the optical setup, about 0.85 in our configuration, I_0 the background intensity of the surface, 157 grayscale levels during the experiment, τ the integration period, and f the sampling frequency of the CCD, optimized at 10 Hz.
According to Eq. <ref>, both the electron density variation and the corresponding integration time determine the frequency-domain signal. For typical electron density variations from 10^-5 per pixel per minute to 10^-2 per pixel per minute, the frequency-domain signal changes linearly from 0.1 grayscale levels to 100 grayscale levels in Fig. <ref>(a). For a one-electron variation in a pixel in our configuration, approximately 1/2500 per pixel, the frequency-domain signal during one minute reaches 4 grayscale levels. In Fig. <ref>(b), the signal approaches 4 grayscale levels as the square root of the integration time, √(τ).
The condition under which individual molecules transfer their electrons to the sensing surface is determined by the DC components of the potential modulations, which set the baseline Fermi level of the surface. Electron transfer from an individual molecule to the sensing surface occurs only when the frontier molecular orbital levels of the molecule, the HOMO and LUMO, are aligned with the Fermi level of the surface, as illustrated in Fig. <ref>(a) <cit.>. After the adsorption of an individual molecule, this energy level alignment (ELA) changes the density of states (DOS) of the surface significantly around the HOMO/LUMO levels and only subtly at other levels. As a result, the electron density of the surface is changed by the alignment.
To demonstrate the sensitivity of this ELA-SPRi imaging method, potassium chloride (KCl) and potassium sulfate (K_2SO_4) were separately diluted in 30 KOH to provide 1 chloride anions (Cl^-) and sulfate anions (SO_4^2-). Both can be adsorbed on the gold surface <cit.>. In Fig. <ref>(b), (c), and (d), we measured the adsorption and desorption of both anions at three different bias potential modulations. Although it is difficult to detect the processes under the modulation around 50, we can distinguish the adsorption and desorption processes under the modulations around 250 and 450. For a surface Fermi level around 50, the adsorption of the anions on the surface is physical adsorption, in which molecules or atoms adhere to a surface through weak intermolecular forces, such as van der Waals or dipole-dipole interactions. However, when the Fermi level is aligned with the frontier molecular orbital levels of the anions, fractional electrons transfer between the molecule and the surface and chemical adsorption occurs. For the alignment around 250, near the HOMO levels of both anions, electrons transfer to the surface and the adsorption signals decrease, while for the alignment around the LUMO levels, 450, electrons transferring from the surface to the molecule increase the adsorption signals. It is interesting to note that the opposite direction of the signal variations indicates the opposite direction of electron transfer.
It is also notable that chemical adsorption is traditionally considered irreversible. However, when the reaction occurs at the single-molecule level, we observe that the chemical adsorption is reversible. When we wash the working cell with the electrolyte solution, 30 KOH, at 1300 and 2500, the return of the signals in Fig. <ref>(c) and (d) indicates the desorption of the anions. Both anions introduce similar intensity variations, about 0.4 grayscale levels, showing that about 0.1 e is introduced to the ROI on average. Thus, we can adapt the conventional four-layer model to a three-layer model, glass substrate / plasmonic metal film / buffer medium, when the concentration of the detected molecules comes down to the single-molecule level. Instead of a variation in molecular film thickness, the sparsely adsorbed molecules change the dielectric function of the plasmonic metal when the Fermi level is modulated around the frontier molecular orbital levels of the molecules.
In conclusion, we have demonstrated ultrahigh electron-density sensitivity for surface plasmons by investigating the electrons involved in the surface plasmon from a solid-state perspective. The plasmonic nature can reduce the background electron density to the order of 10□ and be used to detect electron variations of about 0.1 e. Based on this principle, we have developed an ELA-SPRi technique to measure the adsorption and desorption processes of individual anions at a planar gold electrode. This optical method, which bypasses the molecular-size restriction, provides a strategy for observing the electronic states of individual molecules on a relatively large structure, and we plan to develop a single-molecule locating imaging technique based on this method. Furthermore, considering its compatibility with conventional SPR-based technology and the fewer limitations on the preparation of complicated sensing units, we believe ELA-SPRi will not only facilitate sensing in biophysical applications <cit.> but also be capable of analyzing individual chemical reactions in the fields of catalysis engineering and energy <cit.>.
W. L. is grateful to Dr. Sixing Xu for his insightful suggestion, at the very beginning of this work, that the detection response originates from the electron density variation, and to Miss Yu Miao for constantly urging the author to polish this work. This work was financed by the National Key R&D Program of China, the Director Fund of the Institute of Mechanics, and the Fundamental Research Funds for the Central Universities.
|
http://arxiv.org/abs/2307.07559v1 | 20230714180410 | Spectral energy distributions of classical cepheids in the Magellanic Clouds | [
"Martin Groenewegen",
"Jan Lub"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Koninklijke Sterrenwacht van België, Ringlaan 3, B–1180 Brussels, Belgium
[email protected]
Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands
Martin Groenewegen
In this study, we constructed spectral energy distributions (SEDs) for a sample of 142 Large Magellanic Cloud (LMC)
and 77 Small Magellanic Cloud (SMC) fundamental-mode classical Cepheids (CCs) using
photometric data from the literature. When possible, the data were taken to be representative of mean light
or averaged over the light curve.
The sample was built from stars that either have a metallicity determination from high-resolution spectroscopy
or have been used in Baade-Wesselink types of analyses, or have a radial velocity curve published in Gaia DR3
or have Walraven photometry,
or have their light- and radial-velocity curves modelled by pulsation codes.
The SEDs were fitted with stellar photosphere models to derive the best-fitting luminosity and effective temperature.
Distance and reddening were taken from the literature.
Only one star with a significant infrared (IR) excess was found in the LMC and none in the SMC. Since IR excess in MW CCs is not uncommon,
this suggests that IR excess may be more prominent in MW cepheids than in the Magellanic Clouds.
The stars were plotted in a Hertzsprung-Russell diagram (HRD) and compared
to evolutionary tracks for CCs and to theoretical instability strips.
For the large majority of stars, the position in the HRD is consistent with the instability strip.
Period-luminosity (PL) and period-radius relations were derived and compared to these relations in the MW.
For a fixed slope, the zero point of the bolometric PL relation does not depend on metallicity, contrary to recent findings of a significant
metallicity term when considering the PL relation in different photometric bands.
The mass-luminosity (ML) relation is derived, and it points to an overluminosity of about +0.3 dex with respect to a canonical ML relation.
The most intriguing result concerns the flux-weighted gravity (FWG, a quantity derived from gravity and effective temperature) and
its relation to period and luminosity. Both relations agree with theory, with the results for the MW and with the independent estimates from
the six known LMC eclipsing binaries that contain CCs.
However, the FWG (as determined from dedicated high-resolution spectroscopy for the sample) is too low by about 0.8 dex in 90% of the cases.
Recent works on time-series data for 20 CCs in the MW were analysed, and a similar (but less extreme) offset was found in gravity
and the FWG.
Most importantly, other time-series data on the same 20 CCs are in full agreement with the FWG-period relation.
The observed time-series of spectroscopic data and from a two-dimensional hydrodynamical cepheid model was used to investigate the
so-called effective gravity, that is, the gravity corrected for a dynamical term related to the time derivative of the radial velocity.
There is a reasonably good correspondence between the predicted effective gravity and the observed gravity as a function of pulsation phase,
which would potentially allow for an independent estimate of the projection factor, but the dynamical term is too small to explain the
overall difference between the observed (flux-weighted) gravity and the (flux-weighted) gravity derived from the SED modelling and stellar mass estimates.
Spectral energy distributions of classical cepheids in the Magellanic Clouds
Tables <ref>, <ref>, <ref>, and <ref> are available in electronic form at the CDS via
anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/.
The fits to the SEDs are available at <https://doi.org/10.5281/zenodo.8032168>.
M. A. T. Groenewegen1
J. Lub2
received: ** 2023, accepted: * 2023
========================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Classical cepheids (CCs) serve as an important standard candle because they are bright and provide a link
between the distance scale in the nearby Universe and that further out, via galaxies that contain both Cepheids and SNIa
(see for a determination of the Hubble constant to 1.0 precision).
Typically, the period-luminosity (PL) relations of CCs that are at the core of the distance determinations are derived
in photometric filters (V, I, K) or in combinations of filters that are designed to be
independent of reddening, the so-called Wesenheit functions <cit.>; for example, using combinations of (V,I) or (J,K),
or the combination used by the SH0ES team (F555W, F814W, and F160W HST filters, see ).
On the other hand, the bolometric magnitude or luminosity is a fundamental quantity of CCs as well as stars in general,
as it is the output of stellar evolution models and the input to CC pulsation models.
In <cit.> (hereafter G20) the spectral energy distributions (SEDs) of 477 Galactic CCs were constructed and
fitted with model atmospheres (and a dust component when required). For an adopted distance and reddening these fits resulted in a
best-fitting bolometric luminosity (L) and the photometrically derived effective temperature (T_ eff).
This allowed for the derivation of
period-radius (PR) and PL relations, the construction of the Hertzsprung-Russell diagram (HRD), and a comparison to
theoretical instability strips (ISs). The position of most stars in the HRD was consistent with theoretical predictions.
Outliers were often associated with sources for which the spectroscopically and photometrically determined
effective temperatures differed, or with sources that exhibit a high degree of reddening with correspondingly large uncertainties.
This sample was further studied in <cit.>, where the relation between
bolometric absolute magnitude and the flux-weighted gravity (FWG),
defined as
log g_ F = log g - 4 ·log(T_ eff / 10^4) <cit.>,
was investigated: the so-called flux-weighted gravity-luminosity relation (FWGLR).
The tight correlation between g_ F and luminosity was first
demonstrated by <cit.> for blue supergiants (BSGs) and was later used for extra galactic distance determinations in <cit.>.
<cit.> then demonstrated that theoretical pulsation models for CCs also followed a tight FWGLR, one that is,
in fact, tighter than the PL relation, finding that there was a good correspondence between observed g_ F and period for a sample of CCs.
<cit.> presented the currently best observationally determined FWGLR for Milky Way (MW) CCs, based on the luminosities
derived in <cit.> and gravity and effective temperatures from the literature.
In <cit.> the adopted distances were based as much as possible on Gaia parallax data (from DR2 in that case).
However, we need to correct the catalogued parallaxes for the parallax zero-point offset (PZPO).
In GDR2, this value was -0.029 mas for quasars <cit.> and for CCs -0.046 ± 0.013 <cit.> or
-0.049 ± 0.018 mas (, hereafter G18), and was also a limiting factor in improving upon the local distance scale.
In GEDR3 the Gaia team provided a Python script to the community (, hereafter L21), which
<cit.> applied to a sample of 75 CCs in the MW, concluding that a counter correction of -14 ± 6 is required.
The advantage of using the Magellanic Clouds (MCs) is that accurate and independently derived mean distances are available based on
the analysis of samples of eclipsing binaries <cit.>.
The present paper performs a study similar to <cit.> and <cit.> for a sample of CCs in the Small (SMC) and
Large (LMC) Magellanic Clouds.
The paper is structured as follows.
In Section <ref>, the sample of 219 MC CCs is introduced, while
Section <ref> introduces the photometry that is used, the distances used, how the stellar mass is estimated and how the
modelling of the SED is done.
Section <ref> offers a discussion of key results, in particular, the location of the CCs in the Hertzsprung-Russell diagram, the
presence of any infrared excess, the period-radius and period-luminosity relations, the mass-luminosity relation, and the relation
between the FWG and period and luminosity.
A brief discussion and summary is given in Section <ref>.
§ SAMPLE
In this paper, we study a sample of 142 LMC and 77 SMC CCs.
Although this is a small subset of the about 4700 LMC and 4900 SMC CCs known in the MCs (see e.g. ),
the stars in this sample are of special interest as they have been studied in other respects.
Specifically, the sample is composed of:
* 89 LMC CCs for which <cit.> derived iron and oxygen abundances
(as well as effective temperatures and gravities) from high-resolution (HR) spectroscopy.
This sample is composed of 68 CCs used to define the PL-relation in the LMC in the SH0ES program <cit.> and
21 for which archival spectra, first presented in <cit.>, were re-analysed.
* 14 SMC CCs for which <cit.> performed an abundance analysis.
We note that for the LMC CCs in overlap <cit.> derived an iron abundance that was (on average) 0.1 dex smaller
compared to <cit.>.
* 7 CCs in the LMC cluster NGC 1866 <cit.> and four field SMC objects <cit.> with
iron abundances from HR spectroscopy.
* CCs for which a Baade-Wesselink analysis has been carried out, in particular:
36 LMC and five SMC stars considered in <cit.>, and the almost identical sample of
36 LMC and six SMC stars analysed in <cit.>.
Similarly, 27 LMC and eight SMC stars that have been analysed with the SPIPS code <cit.> in <cit.> to
derive the projection (p-)factor.
* CCs for which light-curves (and sometimes radial-velocity curves) have been fitted with theoretical pulsation models.
In such a modelling <cit.> the stellar mass, luminosity, and (mean) effective temperature are derived by
fitting the light-curves
(typical V, I, and K). The apparent distance moduli (DM) are derived from which the true DM and reddening are found.
If RV curves are fitted, the projection-factor (p-factor) is also derived.
Here, we consider the 11 LMC and 9 SMC fundamental-mode (FM) CCs studied in <cit.> and <cit.>, respectively.
* CCs with (previously unpublished) photometry in the Walraven <cit.> system.
This system is very useful in constraining effective temperature and reddening as the photometric bands extend into the blue.
<cit.> published VBLUW photometry for 21 SMC and 20 LMC CCs using data taken between 1971 and 1978 in South Africa
(also see ).
However, data collection continued from 1979 onwards from Chile and in Appendix <ref> we report on these observations.
* CCs in the MCs with RV curves published in DR3 <cit.>.
* SMC FM CCs for which UVES spectra and in part HST photometry will be obtained in the near
future[See the publicly available information on ESO program 0109.D-0846(A) (P.I. M. Romaniello) as
per October 1st, 2022 and <https://www.stsci.edu/hst/phase2-public/17097.pro> (P.I. A. Riess), respectively.].
There is overlap between the different subsamples, and the final sample
consists of 142 LMC and 77 SMC CCs, all of which are FM pulsators.
The basic parameters of the stars are compiled in Table <ref>.
Sample of stars (selected entries)
Identifier HV Period d A_ V T_ eff log g Ref Luminosity T_ eff R Mass log g log g_ F χ^2_ r
(d) (kpc) (mag) (K) (cgs) () (K) () () (cgs) (cgs)
LMC0046 12717 8.844 50.62 0.39 5224 ± 134 2.45 ± 0.19 (5) 3777.1 ± 88.3 5750 ± 125 62.0 ± 2.7 5.50 ± 0.22 1.59 ± 0.04 2.55 ± 0.05 9.5
LMC0079 -1 22.544 50.52 0.36 6150 ± 97 1.50 ± 0.10 (4) 7823.3 ± 102.0 5125 ± 88 112.4 ± 3.8 6.58 ± 0.44 1.16 ± 0.04 2.32 ± 0.05 6.1
LMC0107 12452 8.739 50.48 0.43 5390 ± 42 0.80 ± 0.07 (1) 3834.1 ± 91.3 5750 ± 125 62.5 ± 2.7 5.68 ± 0.25 1.60 ± 0.05 2.57 ± 0.06 6.7
LMC0328 873 34.449 50.75 0.24 5222 ± 27 1.25 ± 0.06 (5) 17626.4 ± 335.4 5250 ± 144 160.8 ± 8.6 8.80 ± 0.40 0.97 ± 0.05 2.09 ± 0.07 7.0
LMC0367 872 29.822 49.79 0.24 5675 ± 120 (5) 9577.3 ± 137.1 5000 ± 125 130.6 ± 6.4 6.38 ± 0.56 1.01 ± 0.06 2.22 ± 0.07 16.7
LMC0434 875 30.349 50.25 0.26 5660 ± 100 0.30 ± 0.13 (2) 17254.3 ± 617.9 5625 ± 144 138.6 ± 7.3 7.22 ± 0.36 1.02 ± 0.05 2.02 ± 0.07 5.2
LMC0461 877 45.166 49.86 0.26 4890 ± 109 0.70 ± 0.08 (1) 14891.6 ± 257.9 4750 ± 125 180.5 ± 9.3 7.85 ± 0.70 0.82 ± 0.06 2.12 ± 0.08 13.6
LMC0467 876 22.720 49.70 0.23 5391 ± 74 1.63 ± 0.16 (5) 9500.7 ± 251.0 5375 ± 125 112.6 ± 5.3 6.54 ± 0.22 1.16 ± 0.05 2.23 ± 0.06 4.4
LMC0501 878 23.311 50.37 0.24 5130 ± 77 0.30 ± 0.05 (2) 10708.0 ± 279.0 5500 ± 125 114.2 ± 5.2 6.62 ± 0.12 1.14 ± 0.04 2.18 ± 0.05 4.6
LMC0504 12505 14.393 50.05 0.33 3867.4 ± 132.1 5000 ± 189 83.0 ± 6.1 5.63 ± 0.32 1.35 ± 0.07 2.55 ± 0.09 20.6
LMC0510 879 36.831 50.22 0.34 5530 ± 64 0.10 ± 0.15 (2) 13853.0 ± 335.2 4875 ± 153 165.3 ± 10.1 8.48 ± 0.53 0.93 ± 0.06 2.18 ± 0.08 33.5
LMC0512 2257 39.398 50.36 0.22 5200 ± 79 0.00 ± 0.09 (2) 17977.8 ± 152.2 5125 ± 88 170.4 ± 5.8 8.40 ± 0.48 0.90 ± 0.04 2.07 ± 0.05 6.2
LMC0528 881 35.731 50.47 0.15 5200 ± 64 0.10 ± 0.08 (2) 15023.5 ± 494.6 5125 ± 189 155.7 ± 11.2 7.70 ± 0.46 0.95 ± 0.07 2.11 ± 0.10 21.2
LMC0545 2262 15.832 50.38 0.25 5420 ± 85 0.80 ± 0.05 (2) 5898.8 ± 124.2 5250 ± 125 93.0 ± 4.4 6.52 ± 0.38 1.31 ± 0.05 2.43 ± 0.06 7.3
LMC0546 2249 15.216 49.64 0.21 6730 ± 285 1.40 ± 0.29 (5) 6215.9 ± 160.9 5500 ± 189 87.0 ± 5.8 5.99 ± 0.25 1.34 ± 0.06 2.38 ± 0.08 16.2
LMC0561 880 11.670 49.73 0.22 5383 ± 202 2.18 ± 0.26 (5) 4836.7 ± 88.8 5875 ± 168 67.2 ± 3.7 4.72 ± 0.15 1.45 ± 0.05 2.38 ± 0.07 7.1
LMC0590 882 31.787 50.18 0.31 5880 ± 322 0.00 ± 0.10 (2) 14075.5 ± 307.3 5250 ± 153 143.7 ± 8.2 7.48 ± 0.29 1.00 ± 0.06 2.12 ± 0.08 16.6
LMC0594 -1 6.733 50.19 0.27 5520 ± 194 0.90 ± 0.05 (2) 2245.8 ± 44.7 5625 ± 189 50.0 ± 3.2 4.90 ± 0.17 1.73 ± 0.06 2.73 ± 0.08 17.2
LMC0619 883 133.779 49.89 0.28 4754 ± 11 1.65 ± 0.02 (5) 48057.0 ± 2103.3 4625 ± 208 342.0 ± 29.8 (8.03 ± 0.77) (0.27 ± 0.08) (1.61 ± 0.11) 86.4
LMC0648 2270 13.626 50.17 0.28 5300 ± 90 0.50 ± 0.07 (2) 4192.2 ± 109.1 5250 ± 125 78.4 ± 3.7 5.53 ± 0.31 1.39 ± 0.05 2.51 ± 0.06 7.4
⋯
SMC3533 1950 7.990 62.44 0.13 5900 ± 100 1.64 ± 0.10 (6) 3226.2 ± 123.4 5375 ± 189 65.6 ± 4.6 6.86 ± 0.89 1.65 ± 0.08 2.72 ± 0.10 24.8
SMC3565 1954 16.694 62.44 0.15 5890 ± 100 1.00 ± 0.10 (3) 9712.3 ± 285.1 5500 ± 153 108.7 ± 6.0 8.97 ± 0.66 1.32 ± 0.06 2.36 ± 0.08 13.8
SMC3588 1957 5.319 62.44 0.12 5975 ± 100 1.84 ± 0.10 (6) 1964.1 ± 75.0 5750 ± 189 44.7 ± 2.9 4.85 ± 0.27 1.83 ± 0.06 2.79 ± 0.08 16.0
SMC3611 1956 208.799 62.44 0.10 5677 ± 63 1.94 ± 0.19 (5) 69071.1 ± 2229.5 4375 ± 168 458.2 ± 34.0 (9.60 ± 0.79) (0.10 ± 0.07) (1.54 ± 0.10) 37.4
SMC4555 2209 22.642 62.44 0.17 6130 ± 100 1.10 ± 0.10 (3) 14112.7 ± 430.0 5625 ± 153 125.3 ± 6.8 8.64 ± 0.08 1.18 ± 0.04 2.18 ± 0.06 18.0
SMC4697 817 18.901 62.44 0.12 5850 ± 100 1.00 ± 0.10 (3) 9953.4 ± 533.2 5500 ± 237 110.1 ± 9.4 7.91 ± 0.35 1.26 ± 0.08 2.30 ± 0.11 59.7
SMC4919 6357 33.338 62.44 0.18 6130 ± 100 0.50 ± 0.10 (3) 21581.7 ± 924.8 5250 ± 204 177.9 ± 13.6 11.61 ± 0.24 1.00 ± 0.07 2.12 ± 0.09 13.5
SMC4953 11211 21.386 62.44 0.09 4830 ± 100 0.00 ± 0.10 (3) 11665.8 ± 387.5 5250 ± 153 130.8 ± 7.6 10.08 ± 0.59 1.21 ± 0.06 2.33 ± 0.08 15.8
SMC4976 2231 36.737 62.44 0.17 16641.3 ± 1921.0 5125 ± 312 163.9 ± 20.5 8.38 ± 0.55 0.93 ± 0.11 2.09 ± 0.15 47.9
Column 1. The identifier used in this paper, which is related to the identifier used by OGLE.
The first entry, LMC0046, for example, would be known as OGLE-LMC-CEP-0046.
Column 2. Harvard variable (HV) identifier, when available.
Column 3. Pulsation period.
Column 4. The adopted distance.
For the LMC objects this includes the geometric correction, see Sect. <ref>.
Column 5. The adopted reddening value A_ V based on <cit.>, see main text.
Column 6. Effective temperature in the literature. For references 3 and 6 an uncertainty of 100 K has been adopted.
Column 7. log gravity in the literature. For references 3, 4 and 6 an uncertainty of 0.1 dex has been adopted.
Column 8. Reference for the data in Columns 6 and 7,
(1) <cit.>, re-analysed data from <cit.> (Tables 4 and 6),
(2) <cit.> (new spectra, Tables 3 and 5),
(3) <cit.>,
(4) <cit.>,
(5) <cit.>.
(6) <cit.> (LMC) and <cit.> (SMC).
For reference (6) the values are not determined from high-resolution spectroscopy but from their best-fitting LC fitting models.
Column 9. Luminosity with error bar from the SED fitting.
Column 10. (Photometric) effective temperature with error bar from the SED fitting.
Column 11. Radius with error bar, derived from L and T_ eff.
Column 12. Adopted stellar mass, see Appendix <ref>.
Column 13. log gravity determined from mass and radius.
Column 14. Flux-weighted gravity calculated from the gravity and T_ eff.
For a few stars the mass estimate is clearly too low given their period, and the values for the mass
and (flux-weighted) gravity are not used and are listed between parentheses.
Column 15. The reduced chi-square of the fit to the SED.
§ PHOTOMETRY, DISTANCE, MASSES, AND MODELLING
§.§ Photometry
The SEDs were constructed using photometry retrieved mainly (but not exclusively)
via the VizieR web-interface[<http://vizier.u-strasbg.fr/viz-bin/VizieR>].
Data were considered (in increasing wavelength) in the UV
from GALEX <cit.>,
in the optical from a variety of sources, namely,
OGLE (B, V,I) photometry from <cit.> (SMC) and <cit.> (LMC),
(V,I) photometry from the OGLE Shallow Survey in the LMC <cit.>, and OGLE-IV data
for both Clouds <cit.>,
Gaia B_ p, G, and R_ p photometry from DR3 <cit.>,
(B,R) photometry from the EROS survey in the LMC <cit.>,
(u,v,g,r,i,z) data from Skymapper DR2 <cit.>,
(B,V,R,I) photometry for LMC CCs from <cit.>,
(U,B,V) photometry from <cit.>, <cit.>, and <cit.>,
Walraven (VBLUW) photometrey (see Appendix <ref>),
(U,B,V,I,K) for CCs in NGC 1866 from <cit.>,
HST F555W, F814W, and F160W photometry from <cit.> for LMC cepheids, and further to the near- and mid-infrared,
(Y,J,K) photometry from the VMC survey <cit.> for the SMC <cit.> and LMC <cit.>, and
from the public DR6 for a few remaining stars,
(J,H,K) photometry for LMC CCs from <cit.> and <cit.>,
Akari photometry for the SMC <cit.> and LMC <cit.>.
We note that for the S7, S11, and L15 filters, only errors in the magnitudes of
<0.15, <0.20, and <0.20 mag, respectively, were accepted.
Then, we also used AllWISE photometry <cit.> (in the W3 and W4 filters, only errors in the magnitudes of
<0.30 and <0.25 mag, respectively, were accepted),
Spitzer IRAC photometry from <cit.> (mean magnitudes from template fitting in the 3.6 and 4.5 μm bands) and
from VizieR catalogue II/305/catalog in the 5.8 and 8.5 μm bands. Finally, we used
MIPS photometry at 24 μm available from the IRSA[<https://irsa.ipac.caltech.edu/applications/Gator/>,
"SAGE MIPS 24 um Epoch 1 and Epoch 2 Full List."]. No MIPS data at 70 μm were available.
The number of available photometric data points for the LMC CCs ranges from 16 to 39, with a median of 30,
and for the SMC objects from 15 to 29, with a median of 20 photometric points.
The data contain single-epoch observations (typically from GALEX and Akari) but whenever possible values
at mean light were taken or multiple datapoints were averaged.
§.§ Distance and geometric correction
The mean distance to the LMC is adopted to be d_ LMC= 49.59 ± 0.09 (stat)± 0.54 (syst) kpc <cit.>,
and to the SMC of d_ SMC= 62.44 ± 0.47 (stat.)± 0.81 (syst.) kpc <cit.>, based, in both cases, on
the analysis of samples of eclipsing binaries.
The depth effect in the SMC is considerable (see, for example, <cit.>), but all SMC sources have been assumed to be at the mean distance.
For the LMC the first order approximation of an inclined disk is adopted to compute the geometric correction and the procedure
in <cit.> is followed, taking the inclination
and position angle of the line of nodes
of the disk from <cit.>
and the LMC center-of-mass coordinates from <cit.>.
The adopted distances are listed in column 4 in Table <ref>.
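A minimal sketch of this type of first-order inclined-disk correction is given below. It uses the standard geometric relation for a point in a flat, tilted plane; the numerical values of the centre coordinates, inclination, and position angle of the line of nodes in the example are placeholders and not the values adopted in this work.

```python
import numpy as np

def lmc_disk_distance(ra_deg, dec_deg, d0_kpc,
                      ra0_deg, dec0_deg, incl_deg, pa_node_deg):
    """Distance to a point in an inclined disk relative to the mean distance d0_kpc.

    ra0_deg, dec0_deg : centre-of-mass coordinates (placeholder values in the example)
    incl_deg          : disk inclination
    pa_node_deg       : position angle of the line of nodes (from North through East)
    """
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    ra0, dec0 = np.radians(ra0_deg), np.radians(dec0_deg)
    i, theta = np.radians(incl_deg), np.radians(pa_node_deg)

    # Angular separation rho and position angle phi of the star w.r.t. the centre.
    cos_rho = np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0) + np.sin(dec) * np.sin(dec0)
    rho = np.arccos(np.clip(cos_rho, -1.0, 1.0))
    phi = np.arctan2(np.sin(ra - ra0) * np.cos(dec),
                     np.cos(dec0) * np.sin(dec)
                     - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0))

    # Standard relation for a point in an inclined plane.
    return d0_kpc * np.cos(i) / (np.cos(i) * np.cos(rho)
                                 - np.sin(i) * np.sin(rho) * np.sin(phi - theta))

# Purely illustrative parameter values (not those adopted in this paper):
print(lmc_disk_distance(80.0, -69.0, 49.59, 80.05, -69.30, 25.0, 135.0))
```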
§.§ Stellar masses
An estimate for the stellar mass is required when computing the stellar gravity, with the stellar radius available from the
best-fitting luminosity and effective temperature combination, as detailed below.
Several methods have been employed
* the period-luminosity-mass-effective temperature-metallicity relation derived in <cit.> based on the
models of <cit.>.
* Similarly, such a relation was derived here based on the models of <cit.>.
That paper gives, for a given mass, metallicity, and rotational speed, the period, luminosity, and effective temperature
at the start and end of the IS, and for the different crossings of the IS a star may undergo.
As the first-crossing is very short compared to the other crossings these models were not considered.
An initial fit showed that the metallicity term is not significant.
The best linear fit for average rotation, all metallicities, and the second and third crossing models for FM pulsators is:
log P = (13.095 ± 0.114) +(0.857 ± 0.010) log L
-(0.669 ± 0.032) log M -(3.912 ± 0.030) log T_ eff
(N= 46, σ = 0.0076)
*
Based on the model fitting of LMC Cepheid light curves <cit.> presented a period-mass-radius relation of
log P = (-1.618 ± 0.007) + (-0.68 ± 0.02) log M
+ (1.72 ± 0.01) log R
with a dispersion of only 0.005 dex. This relation will be applied to the SMC CCs as well.
*
<cit.> analysed the light- and radial-velocity curves of the six known cepheid containing
eclipsing binaries (containing seven cepheids, including one type-II cepheid (T2C), all in the LMC).
For convenience, the stellar parameters they derived for the
cepheids are compiled in Table <ref>, as they appear in several plots.
<cit.> derived the following period-mass-radius relation
log P = (-1.555 ± 0.035) - (0.795 ± 0.044) log M
+ (1.703 ± 0.023) log R
with a dispersion of 0.037 dex. This relation will be applied to the SMC CCs as well.
The mass range on which this relation is based is smaller than that of the other relations.
*
<cit.> used nonlinear convective pulsation models to link period and mass to a
Wesenheit index based on Gaia magnitudes.
For a mixing length parameter of 1.7 and FM pulsators <cit.> give:
WG - DM = -1.686 -2.496 log P -2.285 log M
with a dispersion of 0.058 mag, and where WG = G - 1.90 · (B_ p - R_ p) <cit.>
and DM is the distance modulus. The pulsation models were calculated at solar metallicity but will be used for the MC CCs.
The mass estimates from these five methods are listed in Table <ref>.
The adopted mass is the median among the five estimates, and is listed in Table <ref> and in column 12 in Table <ref>.
To estimate the error bar, the error in the mass estimate of the median value is added in quadrature to
the median-absolute-deviation times 1.48 (to get the equivalent of one sigma in a Gaussian distribution)
among the five estimates.
The different estimates are in good agreement in most cases, except for some of the longest period cepheids where
some individual estimates give unrealistically low masses, leading to the median value also becoming unrealistically low.
These values are placed between parentheses and have not been used in the analysis.
The origin of the discrepancy is probably related to the fact that the different mass-estimate formalisms are not derived from such long-period cepheids.
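As an illustration of the adopted procedure, the sketch below inverts three of the relations quoted above (the period-luminosity-mass-temperature fit derived here and the two period-mass-radius relations) and takes the median, with 1.48 times the median absolute deviation as a scatter estimate. The first and last methods require additional ingredients not reproduced here and are omitted; the function names are illustrative only.

```python
import numpy as np

def mass_from_plmt(logP, logL, logTeff):
    # Invert: log P = 13.095 + 0.857 log L - 0.669 log M - 3.912 log Teff
    return 10 ** ((13.095 + 0.857 * logL - 3.912 * logTeff - logP) / 0.669)

def mass_from_pmr_lightcurve(logP, logR):
    # Invert the LMC light-curve-model relation: log P = -1.618 - 0.68 log M + 1.72 log R
    return 10 ** ((-1.618 + 1.72 * logR - logP) / 0.68)

def mass_from_pmr_eb(logP, logR):
    # Invert the eclipsing-binary relation: log P = -1.555 - 0.795 log M + 1.703 log R
    return 10 ** ((-1.555 + 1.703 * logR - logP) / 0.795)

def adopted_mass(logP, logL, logTeff, logR):
    """Median of the individual estimates; the scatter among them (1.48 * MAD)
    serves as part of the error estimate, as described above."""
    m = np.array([mass_from_plmt(logP, logL, logTeff),
                  mass_from_pmr_lightcurve(logP, logR),
                  mass_from_pmr_eb(logP, logR)])
    med = np.median(m)
    err = 1.48 * np.median(np.abs(m - med))
    return med, err

# Example with round numbers typical of a ~10-day cepheid:
print(adopted_mass(logP=1.0, logL=3.6, logTeff=np.log10(5750.0), logR=np.log10(65.0)))
```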
§.§ Modelling
The SEDs are fitted with the code
More of DUSTY (MoD, <cit.>)[<http://homepage.oma.be/marting/codes.html>]
which uses a slightly updated and modified version of the DUSTY dust radiative transfer (RT) code <cit.> as
a subroutine within a minimisation code. As we are not interested in any dust component
the dust optical depth is initially set to zero.
In that case, the input to the model are the distance, reddening, and a model atmosphere.
The few cases where an infrared (IR) excess may be present are discussed in Sect. <ref>.
The model atmosphere fluxes are reddened to be compared to the observations.
The reddening map of <cit.> for the MCs is adopted and the E(V-I) value in the map closest to the source is taken.
The visual extinction is then assumed to be A_ V= 3.1 · E(V-I) /1.318, and is listed in column 5 in Table <ref>.
MARCS model atmospheres are used as input <cit.> for log g= 1.5 and metallicity -0.50 and -0.75 dex
for the LMC and SMC stars, respectively.
The model grid is available at 250 K intervals for the effective temperature
range of interest, and adjacent model atmospheres are used to interpolate models at 125 K intervals,
which better reflects the accuracy in T_ eff that can be achieved.
For every model atmosphere (that is, T_ eff) a best-fitting luminosity (with its [internal] error bar, based on the covariance
matrix) is derived with the corresponding reduced χ^2 (χ_ r^2) of the fit.
The model with the lowest χ_ r^2 then gives the best-fitting effective temperature.
Considering models within a certain range above this minimum χ_ r^2 then gives the estimated error in the
effective temperature and luminosity. For the luminosity, this error is added in quadrature to the internal error in luminosity.
The best fitting effective temperature and luminosity with error bar are listed in columns 9 and 10, and the resulting radius in column 11
of Table <ref>. Combined with the mass this gives the gravity (column 13), and gravity combined with T_ eff the FWG (column 14).
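A minimal sketch of how these derived quantities follow from the fitted luminosity and effective temperature and the adopted mass is given below; these are standard relations, and the constants are solar reference values. The example reproduces the first entry of the table (LMC0046).

```python
import numpy as np

# Solar reference values (cgs) and constants
LSUN = 3.828e33        # erg/s
RSUN = 6.957e10        # cm
MSUN = 1.989e33        # g
SIGMA_SB = 5.6704e-5   # erg/cm^2/s/K^4
G = 6.674e-8           # cm^3/g/s^2

def derived_quantities(L_lsun, Teff, M_msun):
    """Radius from L = 4 pi R^2 sigma Teff^4, gravity from g = G M / R^2,
    and the flux-weighted gravity log g_F = log g - 4 log(Teff / 10^4 K)."""
    R_cm = np.sqrt(L_lsun * LSUN / (4.0 * np.pi * SIGMA_SB * Teff ** 4))
    logg = np.log10(G * M_msun * MSUN / R_cm ** 2)
    logg_F = logg - 4.0 * np.log10(Teff / 1.0e4)
    return R_cm / RSUN, logg, logg_F

# Example: LMC0046 with L ~ 3777 Lsun, Teff ~ 5750 K, M ~ 5.5 Msun
# (gives R ~ 62 Rsun, log g ~ 1.59, log g_F ~ 2.55, as in the table).
print(derived_quantities(3777.1, 5750.0, 5.50))
```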
§ RESULTS AND DISCUSSION
§.§ General
Figures <ref> and <ref> present the four best and the four poorest fits to the SEDs (according to the
χ_ r^2), respectively, with the residual (model minus observations) in the bottom part
of each panel[The complete set of SEDs is available at <https://doi.org/10.5281/zenodo.8032168>].
In the model fitting procedure photometric outliers were excluded in the following way.
The rms in the residuals was determined and added in quadrature to the photometric error bar for each data point.
If the absolute difference between model and observations was larger than 4σ, the point was flagged and
plotted with an error bar of 3.0 mag, so that it can still be identified but has no influence on the fitting.
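A minimal sketch of this flagging step is given below; the variable names are illustrative, and the actual implementation inside the fitting code may differ.

```python
import numpy as np

def flag_outliers(obs_mag, model_mag, err_mag, nsigma=4.0):
    """Flag points whose |model - observation| exceeds nsigma * sigma, where sigma
    combines the rms of the residuals and the photometric error in quadrature."""
    resid = model_mag - obs_mag
    rms = np.std(resid)
    sigma = np.sqrt(rms ** 2 + err_mag ** 2)
    flagged = np.abs(resid) > nsigma * sigma
    # Flagged points keep a large (3.0 mag) error bar in the plots, which gives
    # them essentially zero weight in the fit.
    return flagged
```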
The fits are quite acceptable. In the poorest fits the scatter among the various photometric points is larger overall.
In the case of SMC0417 and SMC0921 this leads to the result that the most visually discrepant points (the VMC JHK points)
are not marked as 4σ outliers and that therefore the reduced χ^2 is large.
A limitation of our procedure is that time variability of the photometry is not taken into account.
Values at mean light have been considered whenever possible, but the SEDs also contain single-epoch data.
Pulsation amplitudes decrease with wavelength, so the effect should not play a major role in the mid- and far-IR,
while in the NIR, where the SEDs peak, mean-light magnitudes are typically available.
The construction of the SED at mean light also ignores possible phase shifts between photometric bands.
We have compared our results to the fully independent modelling by <cit.> using the SPIPS code <cit.> for
35 stars in overlap. The SPIPS code takes light curves as input and therefore considers the time variability.
It is also independent in the sense that it fits ATLAS9 model atmospheres <cit.>.
Figure <ref> compares the results and the agreement is excellent. The rms in the residuals is about 160 K in
T_ eff and 0.05 dex in log L. The effective temperature plot suggests that the errors in effective temperature
may have been overestimated by ∼40% in both studies.
§.§ Hertzsprung-Russell diagram
Figure <ref> shows the HRD together with sets of evolutionary tracks and ISs.
Objects from the sample are plotted as black (LMC) and red (SMC) open squares, respectively.
Stars located outside the bulk of objects are plotted with error bars, and some are labelled as well.
Blue symbols with error bars indicate the six CCs in EBs (three FM as filled squares, three FO as filled triangles; see Table <ref>).
Two sets of ISs from <cit.> (at brighter magnitudes) and from <cit.> (at fainter magnitudes) are plotted.
The near horizontal green lines indicate the evolutionary tracks for Z = 0.006 and average initial rotation rate
ω_ ini = 0.5 from <cit.>. Increasing in luminosity are tracks for initial mass
(number of the crossing through the IS): 4 (1), 5 (1), 7 (1), 7 (2), 7 (3), 9 (1), 9 (2), 9 (3), 12 (1), and 15 (1).
The density of stars in the HRD is qualitatively consistent with the fact that the first crossing is much faster than the second and third crossings.
There are only two clear outliers, LMC1940 and LMC1945, and their position in the HRD remains unexplained.
The former object has the fourth poorest fit, but the χ^2_ r of the fit of LMC1945 is not extremely poor.
LMC1940 and LMC1945 have large astrometric_gof_al (GoF) parameters of about 12 and 9.9, respectively
(and RUWE values of 1.56 and 1.43, respectively), which may hint at binarity.
However, 38 stars in the sample have a larger GoF than 9.9 and are not outliers in the HRD diagram.
A cautionary note is that the SED fitting assumes the stars to be single.
Contamination of the photometric points by a companion will have an influence on the results of the fitting procedure.
No spectroscopic temperature determination is available for LMC1945.
For LMC1940 there is a value of 4909 ± 126 K determined from APOGEE data <cit.>.
A model with 4875 K (the closest in the available grid) was run to find that the luminosity is about 13% less than in the best-fitting model.
This temperature and luminosity would put the star closer to the red edge of the IS, but it would still be too cool and
under-luminous compared to expectations.
The location of the known CCs in EBs is noteworthy. Five of them are clearly hotter than expected from the IS by <cit.>
but are consistent with the IS as calculated by <cit.>.
It is noted that the effective temperatures in <cit.> have not been derived from the available (disentangled) spectra
that these authors used to obtain the radial velocities, but from effective temperature–colour relations using
the V-I (sometimes V-K) colour of the two components, as derived from the modelling of the light curve.
This may have introduced a systematic effect. If the temperatures derived in this way were too high, the luminosities would also
be too large, as indeed inferred from the PL-relation (see below).
§.§ Infrared excess
The default assumption in the modelling has been that there is no IR excess and the SEDs can be modelled by a stellar atmosphere.
However, near- and mid-IR excess is known to exist in Galactic CCs, for example from direct interferometric observations in the optical or NIR
(e.g. ),
from modelling with the SPIPS code (e.g. , and for the LMC),
and from modelling the SEDs of Galactic CCs (, G20).
Visual inspection of the SEDs showed five cases where an IR excess could explain the shape of the SEDs[Cases where the excess consisted
only in a single point, typically in the WISE3 filter are not discussed here. They are probably related to contamination in the larger W3 (and W4) beams.
An a-priori selection on photometric error in the W3 and W4 filters was applied (Sect. <ref>) but this did not
remove all likely unreliable points.
The stars discussed in the main text appear to have IR excess in multiple filters.].
Following G20, models were run for these five stars including a dust component and additionally fitting for the dust temperature at the inner radius
and the dust optical depth under the assumption of spherical symmetry. As in G20, a mixture of 3% silicate, 3% aluminium oxide, and 94% iron dust
was adopted. The analysis of the MW CCs in G20 with available mid-IR spectra showed that these are near featureless requiring a large fraction of
featureless iron dust, although the nature of the excess is in fact unclear.
The results of the fitting are listed in Table <ref>, which includes the magnitudes in various filters for the best-fitting model excluding and including
a dust component.
Only the model for LMC0619 is convincing with an excess in four to five filters (see Figure <ref>) and an SED comparable in shape to the
SEDs of the MW CCs with IR excess.
In the other four cases, the temperature at the inner radius is very low and based on two filters only (see Figure <ref>).
Also, the reduction in the Bayesian information criteria (BIC) is small.
Interestingly, the best-fitting model with dust predicts fainter magnitudes for LMC0619. For fixed effective temperature and luminosity and for optically thin
cases, one expects some absorption in the optical and emission in the IR. However, in the fitting the effective temperature and luminosity were
allowed to vary, and the best-fitting luminosity is lower when including dust. The difference in the Wesenheit filters is around 0.15 mag, which is significant.
Only 1 of the 142 CCs in the LMC, and none of the 77 in the SMC, has a convincing IR excess. Coincidence or not, LMC0619 has the longest period of
the LMC objects (133 days) and one of the highest luminosities.
In the MW, IR excess is quite common (see references at the beginning of the section) and G20 lists 16/347 stars as having an IR excess, also based on SED fitting.
It appears that the presence of IR excess is more common in the MW than in LMC and SMC CCs.
For comparison, one of the MW stars with IR excess from G20, LS Pup, was refitted, and the results are included in Table <ref> and Figure <ref>.
A definite conclusion would require a more in-depth study beyond the scope of the present paper, as one would have to consider the impact of
the bias due to the fact that for MW CCs more data is available (in some cases even mid-IR spectra) especially at longer wavelengths.
The SED modelling is therefore more likely to find an IR excess in MW stars than in the MCs.
§.§ Period-luminosity relations
Figure <ref> shows the PL relation.
A fit to 141 LMC objects (removing one object through 3σ clipping) is:
M_ bol = (-2.96 ± 0.05) log P + (-1.10 ± 0.05)
with an rms of 0.20 mag.
A fit to 77 SMC objects (removing zero outliers) is:
M_ bol = (-3.04 ± 0.11) log P + (-0.99 ± 0.14)
with an rms of 0.25 mag.
LMC1940 is an outlier in the HRD, but also in the PL-relation. Its period would suggest M_ bol≈ -4.0,
or log L ≈ 3.5.
In the HRD, this would move the star up but it would still be an outlier and too cool for its expected position.
Although it is under-luminous for a CC, LMC1940 is too luminous to be a type-II cepheid (T2C), as indicated by the PL-relation
for T2Cs from <cit.>.
We also note that the CCs in EBs are slightly brighter than the mean relation.
In G20 the slope and the zero point (ZP) based on 380 Galactic CCs were derived to be -2.95 ± 0.09
and -0.98 ± 0.07 (rms of 0.40 mag), respectively.
The slopes derived for the three galaxies agree to within the error bar.
Fixing the slope to the most precise one of that in the LMC (-2.96), ZPs of -4.057 ± 0.002 and
-4.046 ± 0.004 are found for LMC and SMC at log P = 1, respectively.
The bolometric PL relation for the Galactic Cepheids presented in G20 did not involve a selection on metallicity.
Figure <ref> shows the distribution in [Fe/H] of the stars that went into that fitting. The 5-95% range
is from -0.31 to +0.29 dex, with a median of +0.06.
Selecting stars in the range 0.0 ≤ [Fe/H] <+0.2 to better have a sample of near solar metallicity leads to a median
of +0.09 (with 0.06 dex dispersion[Calculated as 1.48 times the median absolute deviation.]) and a PL-relation of
M_ bol = (-2.64 ± 0.11) log P + (-1.34 ± 0.09)
with an rms of 0.35 mag using 191 stars, and a ZP of -4.041 ± 0.002 at 10 days for a fixed slope of -2.96.
Figure <ref> shows the ZP at 10 days plotted against metallicity. The metallicity for the MW is the one just derived,
for the LMC -0.409 dex with dispersion of 0.076 <cit.> is adopted and
for the SMC, a value of -0.75 dex with dispersion of 0.08 <cit.> is assumed.
A least-squares fit taking into account the error bars in both axes gives
ZP @ 10 d= (-4.0451 ± 0.0036) + (+0.0082 ± 0.0075) · [Fe/H]
Using slightly different values, for example -0.35 ± 0.09 dex for LMC BSGs <cit.>, or
-0.62 ± 0.14 dex for SMC cluster giants (, as quoted in ), leads in all cases
to slightly positive but insignificant slopes.
A constant value of -4.049 mag with dispersion 0.008 mag would fit the ZPs of all three galaxies.
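For illustration, the sketch below reproduces such a fit with errors in both coordinates using the three (ZP, [Fe/H]) pairs quoted above. The use of scipy's orthogonal distance regression is an assumption and not necessarily the routine employed in this work.

```python
import numpy as np
from scipy import odr

# Zero points at log P = 1 and adopted mean metallicities (MW, LMC, SMC), as quoted above.
feh   = np.array([0.09, -0.409, -0.75])
feh_e = np.array([0.06,  0.076,  0.08])
zp    = np.array([-4.041, -4.057, -4.046])
zp_e  = np.array([0.002,  0.002,  0.004])

def linear(beta, x):
    # ZP = beta[0] + beta[1] * [Fe/H]
    return beta[0] + beta[1] * x

data = odr.RealData(feh, zp, sx=feh_e, sy=zp_e)
fit = odr.ODR(data, odr.Model(linear), beta0=[-4.05, 0.0]).run()
print(fit.beta, fit.sd_beta)   # the paper quotes -4.0451 +/- 0.0036 and +0.0082 +/- 0.0075
```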
The result that the bolometric PL relation does not seem to depend on metallicity is in contrast with the most recent
results of <cit.> that derive the metallicity term in various filters from Bp band to IRAC 4.5 μm and
find little dependence on wavelength and an average of γ= -0.29 mag/dex.
Our results indicate that the LMC cepheids are indeed brighter than the SMC ones (this is the case when γ is negative, and fitting only
the SMC and LMC data points gives γ= -0.04), but it would imply that the ZP for the MW cepheids is too faint
by (-0.75 - (+0.08)) · (-0.29) ≈ 0.24 mag.
Further study is required, especially on the MW sample, and parallaxes from DR4 will be crucial in this regard.
Additionally, the difference between the bolometric magnitude and the magnitude in any photometric band involves a
bolometric correction that should depend on wavelength and makes a direct comparison of the γ terms less evident.
§.§ Period-radius relation
Figure <ref> shows the PR relation.
The relation for the LMC is:
log R = (0.6966 ± 0.0043) log P + (1.1194 ± 0.0052),
with an rms of 0.017 dex and using 138 stars (removing 4 outliers),
and then for the SMC
log R = (0.697 ± 0.013) log P + (1.134 ± 0.017),
with an rms of 0.027 dex and using 76 stars (removing 1 outlier).
Figure <ref> also shows the PR relation for MW CCs from G20.
The slope derived there was 0.721 ± 0.013 with a ZP of 1.083 ± 0.012.
Refitting the data in G20 restricting the metallicity to 0.0 ≤ [Fe/H] <+0.2 gives:
log R = (0.668 ± 0.020) log P + (1.143 ± 0.017),
with an rms of 0.069 dex and using 190 stars.
The slopes are consistent with each other and at P= 10 d the radii in MW, LMC, and SMC CCs are identical to within the error bars.
§.§ The mass-luminosity relation
Figure <ref> shows the relation between mass and luminosity.
The best fit is:
log L = (3.193 ± 0.060) log M + (1.237 ± 0.048)
based on 208 stars with an rms of 0.12 dex; the relation lies approximately
0.3 dex above the canonical ML relation from <cit.> for a helium abundance Y= 0.255 and metallicity Z= 0.008.
A few stars scatter clearly above the relation.
The few stars that scatter below the best-fit relation are consistent with the canonical ML relation.
The best-fit relation is intermediate between the case B (+0.2 dex w.r.t. the canonical relation)
and case C (+0.4 dex) ML relations adopted in <cit.>.
§.§ Comparing stellar parameters
The determination of the photometric effective temperature allows one to compare it to the spectroscopic temperature determined in
the literature.
In addition, via the derived luminosity and estimated mass, it is possible to compare the gravity to that derived from spectroscopy.
The values for the spectroscopic effective temperature and gravity come from the papers that contribute a large fraction
of the stars in the sample <cit.>.
In addition, spectroscopically derived parameters from the APOGEE survey have been considered <cit.>, as well
as the values derived from the LC fitting in <cit.> and <cit.>.
The adopted values for temperature and gravity are listed in columns 6 and 7 of Table <ref>.
When data from multiple references were available the order of preferences was <cit.>, <cit.>,
<cit.>, <cit.>, and <cit.> or <cit.>.
Results from GDR3 were not considered in the end. The results from the GSP_Spec analysis <cit.> were inspected,
but only one and two stars, respectively, had an entry from the so-called Matisse-Gauguin and ANN pipelines when selecting
'000000' for the first six values in the flags_gspspec and flags_gspspec_ann flags.
Figure <ref> shows the comparison of the photometric and literature temperatures.
On average the spectroscopic temperatures seem to be slightly larger, although there is large scatter.
The median offset is +91 ± 330 K.
The effective temperature changes over the pulsation cycle so an exact agreement is in fact not expected.
The spectra taken in the works dedicated to determining the metallicity (and T_ eff and log g in
the process, ) of CCs typically avoid the phases where
shocks play a role.
On the other hand, the APOGEE data were taken at random phases, while the photometric temperature was derived
from the SED, which was constructed to be as representative of mean light as possible.
Figure <ref> shows the same for the log g values determined in the literature.
The overall median offset is -0.70 ± 0.36 dex, but it strongly depends on the source.
The log g values derived from the pulsation models are in very good agreement with the values
determined in the present paper, the values from APOGEE are larger (median offset +0.58 ± 0.46 dex), while
those derived from dedicated HR spectroscopy are significantly smaller (median offset -0.80 ± 0.21).
A similar plot for the FWG is shown in the Appendix (Fig. <ref>).
The differences between the spectroscopic and photometric effective temperature are relatively small, but
the spectroscopically determined gravity (and the FWG) from <cit.> and <cit.>
are systematically and significantly smaller than that derived from L, T_ eff, and stellar mass.
Like the effective temperature the gravity also changes over the pulsation cycle.
There is a change in radius, but there is also a dynamical term.
What is thus determined from HR spectroscopy is the effective gravity:
g_ eff = G M/R(t)^2 + ∂^2R/∂t^2 = G M/R(t)^2 - p ∂ V_ r(t)/∂ t,
where M is the mass of the CC, R(t) is the radius as a function of time (or pulsation phase),
p is the projection factor (see and references therein)
and V_ r(t) is the radial velocity at the time t.
Recently, <cit.> presented time series of HR spectroscopy for 20 calibrating MW CCs and the analysis of these spectra in terms of
radial velocities, metallicities, micro turbulent velocities, gravities and effective temperatures (see their Appendix B).
This unique dataset also allows to study the effect of the effective gravity.
The dynamical term ∂ V_ r(t)/∂ t at each phase point ϕ_ i was approximated
as (V_ r(ϕ_ i+1)-V_ r(ϕ_ i-1))/Δ t and a typical p-factor of 1.25 was adopted.
The effective gravity is then calculated as g_ eff = 10^ log g_mean + (dynamical term), with the dynamical term taken as -p ∂ V_ r/∂ t following Eq. <ref>, where g_ mean
is the weighted mean gravity over all available phase points (recalculated in the present paper and identical to the values
in Table 6 of ). This ignores the variation in radius with phase but this effect is smaller than the dynamical
term (Appendix <ref>).
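A minimal sketch of this calculation from a phased radial-velocity curve is given below; it assumes V_r in km s^-1 and the period in days, and the array names are illustrative only.

```python
import numpy as np

def effective_gravity(phase, vrad_kms, period_days, logg_mean, p_factor=1.25):
    """Effective gravity g_eff = 10**logg_mean - p * dV_r/dt (cgs), following the
    equation above and ignoring the small variation of G M / R(t)^2 with phase.

    phase       : pulsation phases, sorted and densely sampled over one cycle
    vrad_kms    : radial velocities (km/s) at these phases
    period_days : pulsation period (days)
    logg_mean   : weighted mean log g (cgs) over the cycle
    """
    t = phase * period_days * 86400.0   # time in seconds
    v = vrad_kms * 1.0e5                # cm/s
    dvdt = np.gradient(v, t)            # central differences, cm/s^2
    return 10.0 ** logg_mean - p_factor * dvdt   # cm/s^2; take log10 where positive
```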
Figure <ref> shows the result for δ Cep. The figure is ordered in such a way that the dynamical term appears below
the RV curve and the calculated effective gravity appears below the observed gravity.
Contrary to the convention in <cit.> phase 0 is taken at maximum light.
Plots for some of the stars with the best phase coverage are shown in Appendix <ref> (Figs. <ref>-<ref>).
Overall there is reasonable to good correspondence between the effective gravity and the observed gravity, and the expected rise in gravity
due to the dynamical terms occurs at the correct phase.
The results show that if observations are taken at phases that avoid the sharp rise in radial velocity the effective temperature and gravity
will be systematically lower than the average over the light curve.
However, the effect should be a few 0.1 dex, and cannot explain the large difference between the spectroscopic and evolutionary gravity
noted in Fig. <ref>.
The behaviour of the effective gravity is also confirmed by a theoretical model, see Appendix <ref>.
It is beyond the scope of the present paper, but the analysis of time series data, as presented in <cit.>, allows one
to put constraints on, or derive, the p-factor for individual stars, as the dynamical term is proportional to p and the effective gravity should
match the observed gravity.
§.§ FWG-period and the FWGLR
Figure <ref> shows the relation between the FWG and the pulsation period based on the analysis in the present paper as well as
on the available HR observations for the sample and two identical samples of MW CCs.
The best fit to the LMC objects is:
log g_ F = (-0.856 ± 0.016) log P + (3.442 ± 0.019)
with an rms of 0.057 dex, and is indistinguishable from the best fit to the SMC objects:
log g_ F = (-0.854 ± 0.033) log P + (3.442 ± 0.042)
with an rms of 0.062 dex. The preferred solution combines the SMC and LMC objects as follows:
log g_ F = (-0.853 ± 0.014) log P + (3.442 ± 0.017)
with an rms of 0.059 dex using 212 stars.
This relation is in good agreement both with the theoretical prediction
log g_ F = (-0.834 ± 0.011) log P + (3.402 ± 0.011), derived in based on the models in
<cit.> and the relation
log g_ F = (-0.80 ± 0.03) log P + (3.43 ± 0.03) derived for MW CCs <cit.>.
Of interest is the location of all the coloured points in this plot.
The blue points indicate the CCs and one T2C in EBs and these agree well with the observed relation.
That some appear to be slightly above the relation could be related to an overestimate of the effective temperature, as argued before.
The red circles indicate the objects from the sample where a FWG is available from HR spectroscopy.
Almost all lie clearly above the relation, and those that do appear to be on a line parallel to the derived relation
with an offset of about 0.8 dex (also see Fig. <ref>).
To investigate this further we again used the data from <cit.> on 20 calibrating MW CCs.
These points are the light blue squares[FF Aql is an overtone pulsator and the star is plotted at its fundamentalised
period of 6.401 days.]. Except for the shortest period value (R TrA at P= 3.4 days), which agrees well with the mean relation,
all others form a sequence that lies above and is inclined to the mean relation.
All these 20 CCs also have (in part multi-epoch) HR spectroscopy data that is analysed in <cit.>, which was the main source of data
used in our previous study on MW cepheids <cit.>.
These points are the green squares and those fit Eq. <ref> very well.
If we demand that the absolute difference between the observed FWG and that predicted by Eq. <ref> is less than
( δ·√(0.059^2 + σ_ FWG^2)) with σ_ FWG the observed error in the FWG, then
all 20 objects from <cit.> obey this relation for δ= 1.5.
For this value of δ only 4 out of the same 20 objects obey this relation using the FWGs from <cit.>
– and even these four stars all lie above the relation – and
only 9[These are LMC 0079, 0461, 2019, 2832, and 3724, and SMC 0431, 0574, 0921, and 4444.] out of 104 stars using
the FWGs as derived from HR spectroscopy in the MCs.
It is beyond the scope of the present paper to investigate this further (see also the discussion and appendix
in , where a similar effect was noticed).
Nevertheless, Fig. <ref> shows the comparison between temperature, gravity, and the differential between <cit.> and
<cit.> for the same 20 objects. Most interesting is the result that there is a correlation
between the two data sets: when, for a given object, T_ eff is larger in <cit.> than in <cit.>,
the gravity is larger as well.
Figure <ref> shows the FWGLR.
Using a least-squares fit taking into account errors in both axes gives the following fit
to the LMC objects:
M_ bol = (3.479 ± 0.032) (log g_ F - 2.5) - (4.390 ± 0.010),
with an rms of 0.16, and a fit to the SMC CCs of:
M_ bol = (3.577 ± 0.097) (log g_ F - 2.5) - (4.390 ± 0.021),
with an rms of 0.12.
The combined fit is the preferred solution and is expressed as:
M_ bol = (3.492 ± 0.028) (log g_ F - 2.5) - (4.388 ± 0.009),
with an rms of 0.12 mag using 207 objects.
Blue filled squares in the plot indicate the six EB CCs and their companions, the filled red square indicates
the T2C in the LMC-T2CEP-098 system and the filled black square its companion. Except for the T2C itself, the CCs, and the companions
agree very well with Eq. <ref>.
Where there is overlap at small FWGs, there is also good agreement with the FWGLR derived for BSGs in the LMC <cit.>.
This demonstrates the power of the FWGLR as the BSGs have masses in the range 12-40 (Fig. 5 in ), while the cepheids
in the present sample have lower masses that are estimated to be in the range 2.8-13.5 (median of about 6 ).
Using evolutionary tracks, <cit.> demonstrated that what they named an "extended" FWGLR is expected over 17 magnitudes
in M_ bol (with a scatter of 0.17 mag below M_ bol= -3.0 mag) and for masses in the range 0.8-40 , which they verified
using a sample of RGB stars with a typical mass of 1.1 .
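To illustrate how the FWGLR can be used in practice, the sketch below converts a (T_ eff, log g) pair into a flux-weighted gravity and then into a distance-independent bolometric magnitude via the combined LMC+SMC relation derived above; the numbers in the example are round, illustrative values and not measurements from this paper.

```python
import numpy as np

def fwg(logg, teff):
    # Flux-weighted gravity: log g_F = log g - 4 log(Teff / 10^4 K)
    return logg - 4.0 * np.log10(teff / 1.0e4)

def mbol_from_fwglr(logg_F):
    # Combined LMC+SMC fit derived above: M_bol = 3.492 (log g_F - 2.5) - 4.388
    return 3.492 * (logg_F - 2.5) - 4.388

# Illustrative example: log g = 1.6, Teff = 5750 K
g_F = fwg(1.6, 5750.0)
print(g_F, mbol_from_fwglr(g_F))
```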
§ DISCUSSION AND SUMMARY
This paper is a follow-up of G20 and
<cit.>, where the SEDs of 477 MW cepheids were fitted. All stars had metallicities based on HR spectroscopy from the literature.
Excluding non-CCs and overtone pulsators the PL, the PR and other relations were typically based on about 370 FM CCs.
Some of the relations have been redetermined in the present paper using a restricted range in metallicity to have a sample of MW CCs with
near solar metallicities and these relations are typically based on 190 CCs.
The present study covers 142 LMC and 77 SMC FM CCs. All known (FM) CCs in the MCs with metallicities based on HR spectroscopy are included and
those constitute about half of the sample. Other CCs are included because they were studied otherwise (for example a Baade-Wesselink analyses was conducted)
or may be of interest in future work (ongoing spectroscopic or HST observations).
The advantage of the current sample compared to the MW sample is that the reddening is better established and, in particular, the distance is well known for the MCs.
This means that the PL, PR, and other relations have better determined slopes and smaller residuals compared to the MW relations.
One interesting result is that the zero point of the bolometric PL relation (when fixing the slope to that of the LMC) does not seem to depend on metallicity, contrary
to the recent result that in photometric filters covering a large range in wavelength there is a significant metallicity term that is essentially constant with
wavelength ( and references therein). A new study of MW CCs with improved distances from DR4 could strengthen this conclusion.
The power of the FWG is again demonstrated. Both the relation of the FWG with period and with luminosity are very tight.
The relation based on the present analysis (gravity derived from the radius, that follows from T_ eff and L, and the stellar mass as derived from several relations)
is in excellent agreement with theory and (where it overlaps) with the relation derived for BSGs.
However, a large fraction of the stars in the sample for which gravities and effective temperatures have been derived from HR spectroscopy show gravities
and FWGs that are smaller than expected by about 0.8 dex.
For the MW sample two recent studies that both analyse time series of HR spectra show strikingly different results in this respect.
The FWGs based on <cit.> lie mostly above the expected FWG-period relation (but less so than for the MCs), while the
FWGs based on <cit.> for the identical sample of 20 stars are in very good agreement with this relation.
It is beyond the scope of this paper to resolve this discrepancy, as it must be related to the details of the spectroscopic analysis approach.
Of note is that the effective temperature and gravity differences between <cit.> and <cit.> appear correlated.
Since gravity, effective temperature, micro turbulent velocity, and metallicity are determined simultaneously in a spectroscopic analysis it is of interest
to investigate whether these correlations also lead to different metallicity estimates. This appears not to be the case (bottom right panel in
Figure <ref>). Restricting oneself to the 11 stars, where there are 5 or more available spectra per star in <cit.> the difference in
metallicity between <cit.> and <cit.> is 0.00 ± 0.05 dex. Thus, at least at solar metallicities the (correlated) differences
between temperature and gravity determinations do not lead to differences in metallicity. It remains to be seen whether this is also the case
at lower metallicities where the differences in gravity are much larger.
This research was supported by the International Space Science Institute (ISSI) in Bern,
through ISSI International Team project #490, "SHoT: The Stellar Path to the H0 Tension in the Gaia, TESS, LSST and JWST Era".
The paper benefitted from the interesting talks and discussions at the memorable "Large-scale surveys as bridges between spectroscopy and photometry"
conference at La Palma, Spain in September 2022.
MG would like to thank
Dr. Valeriy Vasilyev for making the data in <cit.> available in electronic form,
Dr. Bertrand Lemasle for comments on the spectroscopic analysis in the literature, and
Dr. Lucas Marci for pointing out the webpage with the HST programs and target lists.
This work has made use of data from the European Space Agency (ESA) mission Gaia
(<http://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium
(DPAC, <http://www.cosmos.esa.int/web/gaia/dpac/consortium>).
Funding for the DPAC has been provided by national institutions, in particular
the institutions participating in the Gaia Multilateral Agreement.
This research has made use of the SIMBAD database and the VizieR catalogue access tool
operated at CDS, Strasbourg, France.
§ WALRAVEN PHOTOMETRY
<cit.> published Walraven data taken between 1971 and 1978 of CCs in the MCs
at the Leiden southern station in Hartebeespoordam in South Africa. The telescope and photometer were
moved to La Silla observatory at the end of 1978 where the data taking continued from January 1979 onwards.
The move was also used to make several improvements to the system <cit.>.
Table <ref> collects this, as of yet, largely unpublished photometry for CCs (some initial results were presented in <cit.>
and for HDE 270100).
The HV number, the Heliocentric Julian date (HJD) of the observations, and the V, V-B, B-U, U-W,
and B-L colours on the Walraven system are listed[It is recalled that Walraven photometry is given on a log intensity (I)
scale, not on a magnitude scale.].
The last column lists a quality flag, that ranges between 0 and 9 (in fact, it can be a '*' for
extremely poor observations, but these have been filtered out).
However, photon statistics also play a role for fainter objects.
Light curves were inspected and fitted to determine what typical rms values can be achieved as
a function of the quality flags for the magnitude range of these Cepheids.
In the end an uncertainty of 0.015 in log I is adopted for
quality flags 0-5, and 0.019 and 0.037
for quality flags 6 and 7, respectively. Points with quality flags 8 and 9 are excluded from the fitting of the light curves.
For stars already observed by <cit.> the new data were
added to the published data, after applying the following corrections to the data in <cit.>
that reflect the slightly different photometric system and set-up between the 1971-1978 and the later observations.
Referring to these as the "70" and the "80" system, respectively, these corrections are:
V80 = V70 + 0.0417 VB70 - 0.0007 ( std. dev.= 0.0055)
B80 = B70 - 0.0494 VB70 - 0.0003 ( std. dev.= 0.0067)
L80 = L70 - 0.0569 BL70 - 0.0007 ( std. dev.= 0.0075)
U80 = U70 - 0.0151 BU70 - 0.0007 ( std. dev.= 0.0077)
W80 = W70 - 0.2085 UW70 + 0.0007 ( std. dev.= 0.0128)
which are based on a set of about 1000 stars measured at both sites.
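As a worked illustration, these corrections can be applied directly to a catalogue entry that lists V and the V-B, B-U, U-W, and B-L colours on the log-intensity scale. The following Python sketch is our own helper (the original reduction software is not reproduced here), and the recovery of the individual passbands from the tabulated colours is an assumption about the table layout.

```python
def walraven_70_to_80(V70, VB70, BU70, UW70, BL70):
    """Transform Walraven log-intensity photometry from the 1971-1978
    ("70") system to the post-1979 La Silla ("80") system, using the
    colour-dependent corrections quoted above."""
    # Recover individual passband log intensities from V and the colours.
    B70 = V70 - VB70
    U70 = B70 - BU70
    W70 = U70 - UW70
    L70 = B70 - BL70
    V80 = V70 + 0.0417 * VB70 - 0.0007
    B80 = B70 - 0.0494 * VB70 - 0.0003
    L80 = L70 - 0.0569 * BL70 - 0.0007
    U80 = U70 - 0.0151 * BU70 - 0.0007
    W80 = W70 - 0.2085 * UW70 + 0.0007
    return V80, B80, L80, U80, W80
```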
The quality flag is not given by <cit.> and an error of 0.015 has been
adopted in VBL, 0.02 in U, and 0.025 in W.
Points marked by a ':' in that paper were excluded.
The procedure in MoD is to use photometric zero points that are determined independently based on a model of Vega using
the respective filter curves
(see for details, and the link to the latest available version of MoD given before).
The calibration constants derived in this way are
-11.184, -10.923, -10.831, -10.808, and -10.684 (in units of log ergs/cm^2/s/Å) in VBLUW, respectively,
which differ on average by 0.008 from the empirically determined values of
-11.176, -10.914, -10.818, -10.800, and -10.681 (J. Lub's unpublished determination in 2019), that supersede the values of
-11.172, -10.910, -10.818, -10.793, and -10.673 as published in <cit.>.
Although not used in this paper, the updated conversions to Johnson V_J and (B-V)_J are
V_J = 6.8819 - 2.5 · (V80 + 0.0280 (V80-B80))   (std. dev. = 0.017),
(B-V)_J = 2.528 · (B80-V80) - 0.817 · (B80-V80)^2 + 0.336 · (B80-V80)^3 - 0.0133   (std. dev. = 0.016).
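A short Python sketch of these conversions (our own helper function, with the coefficients taken verbatim from the relations above) may be convenient when comparing with Johnson photometry.

```python
def walraven_to_johnson(V80, B80):
    """Johnson V_J and (B-V)_J from Walraven V and B log intensities
    on the "80" system, using the updated relations quoted above."""
    V_J = 6.8819 - 2.5 * (V80 + 0.0280 * (V80 - B80))
    c = B80 - V80
    BV_J = 2.528 * c - 0.817 * c**2 + 0.336 * c**3 - 0.0133
    return V_J, BV_J
```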
The data were fitted with the code described and used in <cit.> tailored to the Walraven data.
The light curves are analysed using a fixed period, fitting for the mean and the amplitude.
Depending on the number of available data points, the first harmonic was added to the fit, solving for its amplitude as well.
The mean and total amplitude are reported in Table <ref>.
To the error in the mean a value of 0.015 is added in quadrature.
The first entries (HV 824 - 5655) are the stars with new observations,
the latter part (HDE 270100 - HV 12815) are the stars from <cit.> (and for HDE 270100)
with any new data added in the analysis.
An example of the fit to the light curves is shown in Fig. <ref>.
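The fitting procedure is simple enough to sketch. The snippet below is an illustrative stand-in for the code of <cit.>, not the original implementation: a linear least-squares fit of a constant plus the fundamental and, optionally, the first harmonic at the fixed pulsation period.

```python
import numpy as np

def fit_light_curve(hjd, log_I, period, use_harmonic=True):
    """Fit mean + fundamental (+ first harmonic) at a fixed period.
    Returns the mean level, the peak-to-peak amplitude of the fitted
    curve, and the model evaluated at the input epochs."""
    hjd = np.asarray(hjd, dtype=float)
    log_I = np.asarray(log_I, dtype=float)
    phase = 2.0 * np.pi * hjd / period
    columns = [np.ones_like(hjd), np.sin(phase), np.cos(phase)]
    if use_harmonic:
        columns += [np.sin(2.0 * phase), np.cos(2.0 * phase)]
    A = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(A, log_I, rcond=None)
    model = A @ coeffs
    return coeffs[0], model.max() - model.min(), model
```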
Results of the light curve fitting. Mean VBLUW photometry (first entries).
HV V Amp_V N B Amp_B N L Amp_L N U Amp_U N W Amp_W N
824 -2.253 ± 0.015 0.145 ± 0.021 8 -2.685 ± 0.015 0.253 ± 0.030 8 -3.044 ± 0.016 0.329 ± 0.063 8 -3.230 ± 0.019 0.333 ± 0.143 8
847 -2.860 ± 0.015 0.224 ± 0.009 30 -3.239 ± 0.025 0.293 ± 0.049 35 -3.563 ± 0.025 0.366 ± 0.052 32 -3.735 ± 0.021 0.290 ± 0.041 35 -4.140 ± 0.054 0.399 ± 0.078 15
854 -2.959 ± 0.016 0.175 ± 0.010 40 -3.231 ± 0.016 0.277 ± 0.015 40 -3.476 ± 0.016 0.351 ± 0.014 37 -3.727 ± 0.019 0.263 ± 0.028 38 -4.027 ± 0.031 0.286 ± 0.058 16
872 -2.585 ± 0.024 0.380 ± 0.020 49 -2.924 ± 0.029 0.669 ± 0.027 49
876 -2.667 ± 0.018 0.144 ± 0.018 37 -3.107 ± 0.018 0.368 ± 0.023 25 -3.518 ± 0.025 0.532 ± 0.041 24 -3.620 ± 0.021 0.322 ± 0.026 30 -3.979 ± 0.035 0.379 ± 0.060 30
880 -2.849 ± 0.020 0.276 ± 0.018 48 -3.063 ± 0.025 0.420 ± 0.026 48 -3.296 ± 0.030 0.518 ± 0.034 47 -3.554 ± 0.027 0.397 ± 0.028 45 -3.866 ± 0.042 0.427 ± 0.048 34
899 -2.736 ± 0.015 0.137 ± 0.007 6 -3.262 ± 0.016 0.194 ± 0.009 6 -3.761 ± 0.021 0.228 ± 0.028 6 -3.880 ± 0.023 0.165 ± 0.037 6
955 -2.883 ± 0.017 0.156 ± 0.022 32 -3.253 ± 0.018 0.312 ± 0.025 30 -3.576 ± 0.025 0.394 ± 0.045 22 -3.716 ± 0.034 0.474 ± 0.070 21 -4.103 ± 0.019 0.574 ± 0.036 12
969 -3.108 ± 0.015 0.110 ± 0.006 76 -3.466 ± 0.015 0.182 ± 0.010 76 -3.819 ± 0.016 0.308 ± 0.016 73 -3.980 ± 0.017 0.193 ± 0.021 69 -4.144 ± 0.094 0.608 ± 0.102 24
1013 -2.790 ± 0.019 0.105 ± 0.013 19 -3.276 ± 0.030 0.193 ± 0.029 19 -3.735 ± 0.122 0.282 ± 0.138 18 -3.838 ± 0.110 0.254 ± 0.125 16
1345 -3.133 ± 0.022 0.142 ± 0.022 45 -3.445 ± 0.029 0.217 ± 0.034 45 -3.753 ± 0.035 0.301 ± 0.043 43 -3.945 ± 0.047 0.221 ± 0.055 33 -4.207 ± 0.245 0.161 ± 0.276 15
1374 -3.393 ± 0.037 0.169 ± 0.051 13 -3.695 ± 0.040 0.260 ± 0.058 12 -4.015 ± 0.045 0.378 ± 0.066 13 -4.141 ± 0.031 0.230 ± 0.037 10
1610 -2.960 ± 0.027 0.492 ± 0.025 13 -3.671 ± 0.093 0.346 ± 0.098 13
1618 -3.393 ± 0.016 0.164 ± 0.012 51 -3.622 ± 0.016 0.243 ± 0.017 51 -3.819 ± 0.019 0.271 ± 0.025 47 -4.034 ± 0.024 0.175 ± 0.045 21
1705 -3.337 ± 0.016 0.092 ± 0.014 13 -3.680 ± 0.016 0.134 ± 0.015 12 -4.026 ± 0.030 0.217 ± 0.065 12 -4.104 ± 0.039 0.315 ± 0.043 7
1744 -3.077 ± 0.015 0.148 ± 0.010 80 -3.347 ± 0.016 0.225 ± 0.013 80 -3.590 ± 0.017 0.273 ± 0.021 79 -3.814 ± 0.020 0.232 ± 0.033 49 -4.120 ± 0.056 0.253 ± 0.076 17
1768 -3.265 ± 0.042 0.162 ± 0.047 16 -3.543 ± 0.064 0.189 ± 0.074 16 -3.809 ± 0.101 0.384 ± 0.118 16 -4.032 ± 0.109 0.183 ± 0.130 15
1884 -2.975 ± 0.027 0.218 ± 0.029 7 -3.302 ± 0.025 0.317 ± 0.026 7 -3.597 ± 0.044 0.421 ± 0.053 7 -3.796 ± 0.044 0.323 ± 0.052 6
1967 -2.633 ± 0.023 0.283 ± 0.040 22 -3.009 ± 0.021 0.265 ± 0.038 21 -3.331 ± 0.023 0.321 ± 0.046 22 -3.531 ± 0.026 0.269 ± 0.057 22 -3.845 ± 0.028 0.302 ± 0.065 8
2063 -3.227 ± 0.018 0.179 ± 0.013 40 -3.540 ± 0.016 0.225 ± 0.009 32 -3.902 ± 0.026 0.360 ± 0.027 21 -3.688 ± 0.241 0.609 ± 0.250 15
2205 -2.909 ± 0.023 0.198 ± 0.033 28 -3.328 ± 0.021 0.380 ± 0.031 24 -3.747 ± 0.021 0.583 ± 0.026 26 -3.891 ± 0.026 0.429 ± 0.033 28
2209 -2.679 ± 0.015 0.124 ± 0.006 29 -2.958 ± 0.015 0.197 ± 0.011 29 -3.217 ± 0.016 0.254 ± 0.015 29 -3.451 ± 0.016 0.180 ± 0.016 27 -3.788 ± 0.019 0.195 ± 0.032 18
2249 -2.825 ± 0.017 0.269 ± 0.023 58 -3.153 ± 0.020 0.415 ± 0.038 58 -3.471 ± 0.021 0.493 ± 0.045 55 -3.715 ± 0.021 0.350 ± 0.040 52 -4.001 ± 0.028 0.370 ± 0.051 35
2260 -3.211 ± 0.015 0.163 ± 0.009 52 -3.623 ± 0.016 0.253 ± 0.016 52 -4.026 ± 0.018 0.384 ± 0.024 44 -4.166 ± 0.019 0.302 ± 0.031 38 -4.194 ± 0.032 0.635 ± 0.031 6
2299 -2.932 ± 0.017 0.161 ± 0.011 92 -3.267 ± 0.020 0.264 ± 0.018 93 -3.578 ± 0.025 0.335 ± 0.025 92 -3.774 ± 0.025 0.263 ± 0.025 91 -3.983 ± 0.041 0.163 ± 0.048 50
2454 -3.014 ± 0.018 0.152 ± 0.026 56 -3.533 ± 0.020 0.273 ± 0.040 56 -4.020 ± 0.021 0.474 ± 0.059 51 -4.182 ± 0.027 0.416 ± 0.087 48 -4.363 ± 0.032 0.307 ± 0.053 24
2680 -3.163 ± 0.016 0.133 ± 0.016 39 -3.505 ± 0.016 0.218 ± 0.021 39 -3.868 ± 0.021 0.338 ± 0.035 37 -3.998 ± 0.020 0.245 ± 0.048 33
2686 -3.422 ± 0.017 0.140 ± 0.015 25 -3.751 ± 0.018 0.233 ± 0.024 22 -4.202 ± 0.048 0.450 ± 0.058 22 -4.230 ± 0.039 0.114 ± 0.088 19
5655 -3.090 ± 0.020 0.169 ± 0.045 46 -3.532 ± 0.017 0.345 ± 0.028 42 -3.930 ± 0.022 0.535 ± 0.074 41 -4.139 ± 0.021 0.415 ± 0.060 37
270100 -1.951 ± 0.015 0.091 ± 0.003 47 -2.426 ± 0.015 0.170 ± 0.007 42 -2.852 ± 0.015 0.251 ± 0.010 37 -2.977 ± 0.015 0.186 ± 0.010 30 -3.491 ± 0.018 0.220 ± 0.027 26
817 -2.802 ± 0.017 0.164 ± 0.019 34 -3.080 ± 0.017 0.251 ± 0.022 34 -3.322 ± 0.018 0.292 ± 0.031 29 -3.440 ± 0.051 0.158 ± 0.098 22
821 -2.041 ± 0.017 0.131 ± 0.020 34 -2.514 ± 0.017 0.259 ± 0.020 29 -2.945 ± 0.022 0.341 ± 0.040 23 -3.079 ± 0.018 0.285 ± 0.030 25
822 -3.110 ± 0.020 0.228 ± 0.041 15 -3.469 ± 0.018 0.289 ± 0.029 14 -3.785 ± 0.236 0.481 ± 0.451 9 -3.848 ± 0.034 0.346 ± 0.088 10
823 -2.755 ± 0.017 0.169 ± 0.022 21 -3.176 ± 0.019 0.277 ± 0.031 20 -3.455 ± 0.022 0.307 ± 0.044 18 -3.637 ± 0.042 0.261 ± 0.094 13
827 -3.046 ± 0.017 0.139 ± 0.020 33 -3.308 ± 0.016 0.220 ± 0.016 32 -3.532 ± 0.020 0.236 ± 0.036 26 -3.747 ± 0.021 0.210 ± 0.044 22
829 -2.029 ± 0.015 0.130 ± 0.011 33 -2.406 ± 0.016 0.215 ± 0.012 31 -2.734 ± 0.018 0.259 ± 0.029 28 -2.909 ± 0.019 0.235 ± 0.035 26
834 -2.138 ± 0.016 0.111 ± 0.015 43 -2.529 ± 0.017 0.217 ± 0.024 41 -2.864 ± 0.017 0.319 ± 0.025 33 -3.024 ± 0.018 0.248 ± 0.030 37
837 -2.568 ± 0.016 0.171 ± 0.013 36 -2.979 ± 0.016 0.271 ± 0.020 37 -3.315 ± 0.017 0.323 ± 0.024 25 -3.468 ± 0.019 0.284 ± 0.032 23
877 -2.606 ± 0.016 0.127 ± 0.016 22 -3.183 ± 0.016 0.218 ± 0.017 22 -3.640 ± 0.021 0.256 ± 0.041 20 -3.720 ± 0.021 0.281 ± 0.041 11
883 -2.130 ± 0.016 0.206 ± 0.015 28 -2.735 ± 0.017 0.337 ± 0.019 28 -3.188 ± 0.021 0.374 ± 0.044 18 -3.282 ± 0.018 0.343 ± 0.028 21
886 -2.607 ± 0.018 0.219 ± 0.028 20 -3.014 ± 0.018 0.396 ± 0.028 17 -3.166 ± 0.025 0.566 ± 0.039 8 -3.566 ± 0.023 0.458 ± 0.049 13
900 -2.391 ± 0.017 0.183 ± 0.024 22 -2.855 ± 0.020 0.306 ± 0.037 21 -3.246 ± 0.022 0.413 ± 0.048 15 -3.399 ± 0.022 0.380 ± 0.045 17
902 -2.596 ± 0.017 0.189 ± 0.022 21 -2.990 ± 0.021 0.317 ± 0.039 19 -3.425 ± 0.019 0.490 ± 0.035 14 -3.506 ± 0.043 0.381 ± 0.096 13
909 -2.369 ± 0.017 0.207 ± 0.023 29 -2.767 ± 0.018 0.357 ± 0.025 26 -3.138 ± 0.019 0.469 ± 0.030 22 -3.303 ± 0.021 0.405 ± 0.038 26
953 -2.187 ± 0.017 0.178 ± 0.022 22 -2.617 ± 0.018 0.301 ± 0.030 22 -2.986 ± 0.018 0.369 ± 0.026 17 -3.155 ± 0.032 0.347 ± 0.074 6
1002 -2.438 ± 0.018 0.201 ± 0.031 30 -2.855 ± 0.019 0.332 ± 0.032 24 -3.195 ± 0.019 0.496 ± 0.041 15 -3.392 ± 0.021 0.377 ± 0.043 19
1003 -2.569 ± 0.018 0.209 ± 0.028 25 -2.932 ± 0.018 0.331 ± 0.028 22 -3.274 ± 0.019 0.424 ± 0.031 18 -3.415 ± 0.021 0.316 ± 0.040 21
1365 -3.291 ± 0.017 0.138 ± 0.019 42 -3.584 ± 0.017 0.212 ± 0.020 39 -3.808 ± 0.018 0.214 ± 0.028 31 -4.022 ± 0.025 0.164 ± 0.060 23
Column 1: HV number; 270100 refers to HDE 270100 <cit.>.
Column 2-11: Mean values and amplitudes with error bars and the number of data points in the VBLUW filters.
§ MASS ESTIMATES
Table <ref> compiles the mass estimates using the different methods outlined in the main text (also see the table footnote).
The adopted mass is the median among the five estimates.
To estimate the error bar, the error in the mass estimate of the median value is added in quadrature to
1.48 times the median absolute deviation of the five estimates
(to get the equivalent of one sigma for a Gaussian distribution).
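For concreteness, the adopted mass and its uncertainty can be computed as in the following Python sketch. This is our own bookkeeping helper; with five estimates the median coincides with one of them, and we take that estimate's quoted error as the "error in the mass estimate of the median value".

```python
import numpy as np

def adopted_mass(masses, errors):
    """Median of the individual mass estimates, with an uncertainty that
    adds, in quadrature, the error of the estimate closest to the median
    and 1.48 times the median absolute deviation of all estimates."""
    m = np.asarray(masses, dtype=float)
    e = np.asarray(errors, dtype=float)
    med = np.median(m)
    mad = np.median(np.abs(m - med))
    err_med = e[np.argmin(np.abs(m - med))]
    return med, np.hypot(err_med, 1.48 * mad)

# Example with made-up numbers: five estimates, each with a 0.3 Msun error.
print(adopted_mass([4.7, 4.9, 5.0, 5.1, 4.8], [0.3] * 5))
```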
§ 2D CEPHEID MODEL
<cit.> present the results of a two-dimensional time-dependent envelope model of
a CC with T_eff = 5600 K and log g_0 = 2.0 dex. In Figure 5 of <cit.>, the term
g_0 + ∂v/∂t is plotted against time. The time series of various quantities were kindly made
available, and are plotted in a slightly different way in Fig. <ref>.
A Fourier analysis of the velocity time series showed a periodicity of 2.6426 days (and a mean of -7.86 ), which is
used to phase the data. Phase zero is taken at the instance in time when the normalised flux reaches a maximum
for the first time. Consecutive pulsation cycles are plotted with different colours.
The cycles are not very smooth. This is explained by convection, which adds statistical fluctuations
to the velocity and thermal structure of their model (Sect. 2.1 in <cit.>).
The integral of the velocity curve is used to calculate the change in radius.
For a mass of 3 M_⊙, log g = 2.0 dex implies a radius of about 29 R_⊙.
The top panel shows how g_ eff changes over the pulsation cycle.
The effective gravity is below 100 cm/s^2 for 68% of the time, with an average value of 66 cm/s^2 (-0.18 dex),
and above 100 cm/s^2 32% of the time with an average of 166 cm/s^2 (+0.22 dex).
The 5 and 95% percentiles correspond to values of ± 0.35 dex.
The effect of the change in radius is almost negligible (of order 5-10 cm/s^2, or ± 0.04 dex at most) compared to the
derivative of the velocity in determining g_ eff.
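The quantities discussed here are straightforward to reproduce from the velocity time series. The Python sketch below is our own post-processing, with the sign convention taken as in the g_0 + ∂v/∂t term above; it evaluates the effective gravity, the change in radius from the integral of the velocity curve, and the fraction of time spent below 100 cm/s^2.

```python
import numpy as np

def effective_gravity(time_days, velocity_kms, g0=100.0):
    """g_eff = g_0 + dv/dt and Delta R = integral of v dt from a pulsation
    velocity time series; g_0 = 100 cm/s^2 corresponds to log g_0 = 2.0."""
    t = np.asarray(time_days, dtype=float) * 86400.0     # days -> s
    v = np.asarray(velocity_kms, dtype=float) * 1.0e5    # km/s -> cm/s
    g_eff = g0 + np.gradient(v, t)                        # cm/s^2
    dt = np.diff(t)
    delta_R = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))  # cm
    frac_below_100 = np.mean(g_eff < 100.0)
    return g_eff, delta_R, frac_below_100
```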
§ ADDITIONAL FIGURES
|
http://arxiv.org/abs/2307.03965v1 | 20230708123352 | Seismic Signatures of the $^{12}$C($α$, $γ$)$^{16}$O Reaction Rate in White Dwarf Models with Overshooting | [ "Morgan T. Chidester", "F. X. Timmes", "Ebraheem Farag" ] | astro-ph.SR | [ "astro-ph.SR" ] |
Signatures of ^12C(α, γ)^16O in WD models with overshooting
Chidester, Timmes, & Farag
Morgan T. Chidester (ORCID 0000-0002-5107-8639), F. X. Timmes (ORCID 0000-0002-0474-159X), and Ebraheem Farag (ORCID 0000-0002-5794-4286)
School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Corresponding author: Morgan T. Chidester, [email protected]
We consider the combined effects that overshooting and the ^12C(α, γ)^16O reaction rate have on variable white dwarf stellar models. We find that carbon-oxygen white dwarf models continue to yield pulsation signatures of the current experimental ^12C(α, γ)^16O reaction rate probability distribution function when overshooting is included in the evolution. These signatures hold because the resonating mantle region, encompassing ≃ 0.2 M_⊙ in a typical ≃ 0.6 M_⊙ white dwarf model, still undergoes radiative helium burning during the evolution to a white dwarf. Our specific models show two potential low-order adiabatic g-modes, g_2 and g_6, that signal the reaction rate probability distribution function. Both g-mode signatures induce average relative period shifts of Δ P/P = 0.44% and Δ P/P = 1.33% for g_2 and g_6 respectively. We find that g_6 is a trapped mode, and the g_2 period signature is inversely proportional to the reaction rate. The g_6 period signature generally separates the slower and faster reaction rates, and has a maximum relative period shift of Δ P/P = 3.45%. We conclude that low-order g-mode periods from carbon-oxygen white dwarfs may still serve as viable probes for the ^12C(α, γ)^16O reaction rate probability distribution function when overshooting is included in the evolution.
§ INTRODUCTION
Helium burning is primarily the fusion of helium into carbon by the triple-alpha (3α) process.
All stars born with more than ≃ 0.5 M_⊙ go through this stage of energy production as they evolve beyond the main-sequence <cit.>.
Helium burning also plays a key role in transients such as
Type I X-ray bursts <cit.>,
Type Ia supernovae <cit.>, and
He-rich subdwarf O stars <cit.>.
Helium burning also impacts several classes of distribution functions,
such as the black hole mass distribution function <cit.>
including any mass gaps based on the pair-instability mechanism in the evolution of
massive stars <cit.>.
He burning is triggered by the 3α process releasing 7.5 MeV in fusion energy and producing ^12C <cit.>.
This is a unique process, setting stringent conditions for helium ignition.
The 3α process is followed by the α capture reaction ^12C(α, γ)^16O,
converting the ^12C into ^16O <cit.>.
These two isotopes are the principal products of He burning.
In addition, nearly all of a star's initial CNO abundances in the stellar interior are converted to ^22Ne at the onset of He burning <cit.>.
This marks the first time in a star's life where the core becomes neutron rich. We follow the convention that ^22Ne is the “metallicity” of a carbon-oxygen (CO) white dwarf (WD).
The interiors of CO WDs are, in principle, the best probe of the ashes of He burning.
A goal of WD seismology is to characterize the chemical profiles of principal products of He burning
<cit.>
and the chemical profile of the trace ^22Ne metallicity <cit.>.
Furthermore, regions within a CO WD model that burn helium radiatively during its prior evolution can offer potential constraints on the He burning nuclear reaction rates.
For example, <cit.> found that certain trapped adiabatic g-modes in WD models
may provide a pulsation signature that constrains the experimental reaction rate probability distribution function.
These signature g-modes were shown to resonate
with the region of the CO WD model that underwent radiative He burning during its previous evolution. The innermost boundary of this resonant cavity
corresponds to the molecular weight gradient at O→C chemical transition, and the outermost boundary to the molecular weight C→He chemical transition.
The resonating region encompasses ≃ 0.2 M_⊙ of a typical ≃ 0.6 M_⊙ WD model.
C22 cautioned that the chemical structure and resulting pulsation spectrum
is sensitive to
the width of the O→C transition <cit.>,
the experimental 3α reaction rate probability distribution functions <cit.>,
convective boundary mixing processes during core He depletion <cit.>, and
the number of thermal pulses during the Asymptotic Giant Branch (AGB) phase of evolution <cit.>.
Modeling convective boundary mixing processes at the convective-radiative interface during core He burning in low- and intermediate-mass stellar models is currently uncertain
<cit.>.
Convective overshoot occurs because the convective boundary is not the location where convective velocities are zero,
but the location where the buoyant acceleration of the fluid is zero.
An order–of-magnitude expression Δ x = u Δ t provides an estimate for how far convective motions overshoot <cit.>.
Here Δ x is the overshoot distance, u is the convective velocity, and
Δ t ≃ 1/N, where N is the buoyancy (Brunt-Väisälä) frequency
in the stable region. There is disagreement on how to calculate Δ x, but this estimate
broadly shows Δ x ≪ H_P in stellar environments, where H_P is the pressure scale height.
The exponential overshoot parameterization <cit.> is frequently implemented in 1D models to describe this convective boundary mixing process, treating Δ x as a free parameter.
The values of Δ x
needed to match the gravity modes found in Slowly Pulsating B-type stars <cit.> suggest Δ x / H_P ≃ 0.1, which is larger than 3D hydrodynamical simulations of low Mach number flows at stable interfaces indicate <cit.>.
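To make the order-of-magnitude argument concrete, the snippet below evaluates Δx = uΔt with Δt ≈ 1/N. The numerical values of u, N, and H_P are representative numbers assumed purely for illustration; they are not quantities taken from the models in this paper.

```python
u   = 1.0e4   # convective velocity near the boundary [cm/s], assumed
N   = 1.0e-3  # buoyancy frequency in the stable region [rad/s], assumed
H_P = 1.0e9   # local pressure scale height [cm], assumed

dx = u * (1.0 / N)                        # overshoot distance Delta_x = u * Delta_t
print(f"Delta_x / H_P ~ {dx / H_P:.2f}")  # ~0.01, i.e. Delta_x << H_P
```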
The injection of fresh He into the convective core enhances the rate of energy production by the ^12C(α,γ)^16O reaction, increases the central ^16O mass fraction <cit.>, and modifies the lifetime through this phase of evolution.
The resulting increase in the radiative gradient can also lead to rapid growth in the convective He core boundary (a “breathing pulse”).
A consensus on breathing pulses being physical or numerical has not yet been reached <cit.>.
C22 found a pulsation signature of the reaction rate probability distribution function using evolutionary models that purposely excluded overshooting.
This article is novel in analyzing whether or not pulsation signals of the reaction rate probability distribution function
still exist when overshooting at the inner convective-radiative interface during core He burning (CHeB) is included in the models' evolution history. Here, the inner convective-radiative interface is the transition from the convective core to the exterior radiative layer.
Section <ref> describes our models,
<ref> analyzes our models,
<ref> discusses our results,
and we summarize our findings in <ref>.
Appendix A lists the microphysics used, and
Appendix B discusses variations with the number of isotopes in the reaction network and with the temporal resolution of our models.
§ STELLAR EVOLUTIONARY MODELS
We define the term “model” to mean an evolutionary sequence that begins at the pre-main sequence, progresses through CHeB, and terminates as a cold WD. We define the term “snapshot” to mean a specific instance in time or phase of evolution within a model, and the term “set” to mean a suite of models or snapshots that have identical input physics except for the value of the reaction rate.
We use MESA version r15140
<cit.> to build 2.1 M_⊙,
Z = 0.0151 metallicity, Y = 0.266 He mass fraction, nonrotating models at the pre-main sequence.
We adopt the AGSS09 <cit.> abundances and use a 23 isotope nuclear reaction network with ^22Ne being the heaviest isotope[A comparison to a 30 isotope network is given in Appendix B.].
Our models employ MESA's Henyey mixing-length theory (MLT) option for convection, with an MLT parameter of α = 1.5. This is consistent with the value used in C22.
We use the Ledoux criterion, and the predictive mixing scheme.
Additional details of the microphysics are listed in Appendix A.
As in C22, we span the current experimental ^12C(α, γ)^16O reaction rate probability distribution function <cit.> from σ=-3.0 to σ=+3.0 in 0.5σ steps, totaling 13 σ_i reaction rates; one model is run for each σ_i, i.e., each model is prescribed one such σ_i reaction rate value for its evolution.
We calculate one set of models without overshooting (NOV), and a second set with overshooting (OV) at the inner radiative-convective interface during the CHeB phase.
Hence, each evolutionary model differs only in its σ_i reaction rate, and NOV or OV mixing prescription. This yields 26 individual stellar evolutionary models; 13 for the NOV set and 13 for the OV set. For i=(-3.0, -2.5,...,+2.5, +3.0), we use σ_i and σ=i interchangeably to reference a given σ from the reaction rate probability distribution function.
After CHeB, the models evolve until log(L/L_⊙)=3.0, prior to the first thermal pulse on the AGB. At this snapshot, we interrupt the evolution of each model. All models at this snapshot thus have a C→He transition at nearly the same mass location. We use this snapshot to construct H-dominated atmosphere (DA) WDs by removing the H envelopes until log(M_H/M_*)<-3.5.
The resulting composition profile structures are used to build 0.56 M_⊙ ab-initio WD models with wd_builder, as done in C22. These WD models evolve until T_eff = 10,000 K. We construct the WDs from the post-CHeB log(L/L_⊙)=3.0 snapshot to isolate the sensitivity to overshooting at the convective-radiative interface, and we discuss the reasoning further in the following section.
We utilized version 6.0.1 of the GYRE code <cit.> to compute the adiabatic pulsations of our WD models throughout their respective cooling tracks (from ∼ 50,000 K to 10,000 K). We tracked the pulsations for the entire WD cooling track to observe the evolution of the adiabatic modes. Further, this was the most convenient way to auto-implement pulsation calculations for multiple models (i.e. we did not have to post-process the pulsation calculations over a specified range for each of the 26 models). We emphasize that the computed pulsations are adiabatic, and that the observed instability strip for DAV WDs spans only from ∼ 13,000 K to ∼ 10,000 K. The inlist parameters were set to search for modes of harmonic degrees ℓ=1,2 and radial orders n≤25, where our models were assumed to be non-rotating, hence only m=0 azimuthal orders were present. For the adiabatic mode analysis, we employed the fourth-order Gauss-Legendre collocation difference equation scheme <cit.>.
Details of the models and oscillation parameters are in the files to reproduce our results at doi:10.5281/zenodo.8126450 (<https://doi.org/10.5281/zenodo.8126450>).
§.§ Core Overshooting prescription during the CHeB
During the CHeB phase, we use the following core overshooting parameters in the inlist for the OV set:
= 1d-3
= `exponential'
= `any'
= `core'
= 0.016
= 0.008
= 0.01
= 0.4
Details of the specific parameters are described in the documentation[<https://docs.mesastar.org/en/latest/>].
We choose the conventional <cit.> value for the overshooting parameter.
This parameter sets the fractional distance of H_P over which to overshoot at the ∇_ad=∇_rad interface, in the sense of the order-of-magnitude estimate given in the introduction, Δ x = f_0 · H_P.
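For reference, the exponential parameterization decays the convective diffusion coefficient beyond the boundary as D(z) = D_0 exp[-2z/(f H_P)], where D_0 is evaluated a distance f_0 H_P inside the boundary and z is measured outward from that point. The minimal Python sketch below is ours; D_0 and H_P are illustrative (assumed) numbers, not values from our models.

```python
import numpy as np

def D_overshoot(z, D0, f, H_P):
    """Exponentially decaying overshoot diffusion coefficient,
    D(z) = D0 * exp(-2 z / (f * H_P))."""
    return D0 * np.exp(-2.0 * z / (f * H_P))

# With f = 0.016, D drops by e^-2 (~0.14) over 0.016 pressure scale heights.
H_P, D0 = 1.0e9, 1.0e12          # cm and cm^2/s, assumed for illustration
print(D_overshoot(0.016 * H_P, D0, 0.016, H_P) / D0)   # ~0.135
```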
The trapped mode seismic signatures found in C22 were resonating most with the region that underwent radiative He burning, defined as R2. Their inner boundary of R2 is near the molecular weight gradient at the
O→C transition (the “O drop") and their outer boundary is near the C→He transition. Mode trapping is sensitive to the location of both of these boundaries because they define the width of the resonant cavity.
One approach to analyzing the sensitivity
of the R2 trapped mode signatures is to fix one boundary and vary the other boundary. We fix the R2 outer boundary by excluding variations imposed from the thermal pulse history, hence the interruption at the post-CHeB log(L/L_⊙)=3.0 snapshot for all models. The phenomena that happen during the AGB phase are another source of model uncertainty. <cit.> found that early post-AGB pulsations can cause rapid growth of an instability that drives a super-wind which can shed much of the outer layers in a few years. Further, their 2.0 M_⊙, Z=0.02 model shows a dynamic evolutionary track, especially during the AGB, that is similar to the models in this article. <cit.> summarizes that while the preliminary results show promise on future AGB and post-AGB phenomena, there are currently more questions than answers. We therefore leave the thermal pulse history and the particular envelope ejection phenomena on the AGB to future studies, and freeze the outermost R2 boundary before the first thermal pulse occurs. In this vein, we isolate the sensitivity of the R2 region to its inner boundary, and specifically address how core overshooting influences the pulsation signatures for the reaction rate probability distribution function.
We end this section by stating we are not advocating for a specific evolutionary model or overshooting scheme.
Rather, we are exploring one approach to quantifying the coupled uncertainty between the reaction rate probability distribution function and a common overshooting model.
§ RESULTS
§.§ Evolution of Composition Profiles
Figure <ref> shows the mass fraction profiles for both sets at three evolutionary snapshots. The top row shows the mass fraction profiles for the NOV set and the bottom row shows the mass fraction profiles for the OV set. The leftmost column
shows the mass fraction profiles at the post-CHeB log(L/L_⊙)>3.0 snapshot. At this point, our models have not lost much mass and are all ∼2.1 M_⊙. The middle column shows the mass fraction profiles after removing the H envelopes until log(M_H/M_*)<-3.5. This snapshot shows the initial hot WD profiles, after completing one model step in wd_builder. The profiles shift slightly in mass location, but the overall composition structure only differs from the left panel in the thickness of the H envelope. The right column is the final snapshot of the mass fraction profiles, when the models reach T_eff = 10,000 K. Diffusion was included on the WD cooling track and leads to the smoothness of the profiles in this column.
Figure <ref> accentuates the differences between the NOV (top) and OV (bottom) mass fraction profiles for the final WD structures (right column of Figure <ref>). Here, we show the abundance in mass fraction with respect to fractional radius r/R. We partition the WDs' composition profiles into four regions: R1, R2, R3, and R4. This is similar to that done in C22. The regions are defined to estimate trapping (resonant) zones. Boundaries for mode trapping are typically near composition transitions because they generally have large mean molecular weight gradients. This may lead to partial reflections for a resonant mode(s), "trapping" it within the local cavity <cit.>. The Ledoux B profile (henceforth B) captures composition gradients and can estimate trapping regions. We use B as our primary guide to define the region boundaries for a given model. The R1-R2 boundary is set at the first local maximum in B that occurs after reaching the peak ^16O abundance in a given model's chemical profile. The R2-R3 boundary is set at the second local maximum in B. The R3-R4 boundary is set at the location where X(^1H)>X(^4He).
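A sketch of how these boundary definitions can be applied programmatically to a profile is given below. The helper is our own (it is not the analysis code used for the paper), it assumes numpy arrays ordered from the center outward, and in practice B may need smoothing or a prominence cut before picking maxima.

```python
import numpy as np

def region_boundaries(r, ledoux_B, x_o16, x_h1, x_he4):
    """R1-R2 and R2-R3 from the first two local maxima of the Ledoux B
    profile beyond the peak 16O abundance; R3-R4 where X(1H) > X(4He)."""
    start = np.argmax(x_o16)                     # index of peak 16O
    i = np.arange(start + 1, len(ledoux_B) - 1)
    is_max = (ledoux_B[i] > ledoux_B[i - 1]) & (ledoux_B[i] > ledoux_B[i + 1])
    peaks = i[is_max]
    r1r2, r2r3 = r[peaks[0]], r[peaks[1]]
    r3r4 = r[np.argmax(x_h1 > x_he4)]            # first zone where H dominates
    return r1r2, r2r3, r3r4
```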
In both NOV and OV sets, σ_i impacts the magnitude of the ^16O and ^12C profiles in R1. Core overshooting changes the structure of these profiles, especially at r/R ∼ 0.37 where the flatness of the profiles becomes disrupted. This is due to additional He fuel ingested during CHeB, from overshooting and/or convection. The fuel ingestion from overshooting and convection is a coupled effect and specific to each σ_i model. After r/R ∼ 0.37, there is some overlap in the profiles that perturbs the proportional trend with σ_i.
For both sets, the first group of vertical blue lines marks the R1-R2 boundary, with each line representing a given σ_i. The NOV set shows a steep composition gradient at the R1-R2 boundary, and the R1-R2 location is nearly the same for all σ_i. There is greater variance in the R1-R2 location for the OV set. Further, core overshooting has softened the ^16O and ^12C gradients, and the disruption of the profiles' regularity with σ_i continues into the start of the R2 region. At r/R∼0.6, the proportionality of σ_i to the ^16O and ^12C profiles is restored.
By design from stopping at the first thermal pulse, the R3 and R4 regions are almost identical between the NOV and OV sets. These regions are least affected from mixing processes in the core (e.g. overshooting).
In Figures <ref> and <ref>, the OV chemical profiles show a non-constant structure from overshooting during CHeB in the O dominated central core (below ≃0.4 ). While element diffusion is included during the white dwarf cooling phase, these chemical profiles may be further flattened by mixing processes not considered in this study such as time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing, or first-order phase separation of the CO mixture <cit.>.
§.§ Evolutionary differences after the main-sequence
How do the final WD profiles for the NOV and OV sets in Figure <ref> relate to their respective CHeB evolution histories? Figure <ref> shows the Kippenhahn diagrams for the σ = 0.0 models for NOV (left) and OV (right). This figure shows the CHeB phase until the log(L/L_⊙)>3.0 termination point, spanning ≃ 0.93–1.10 Gyr. During this period the total mass of our models is ≃ 2.1 M_⊙, but we show only the innermost ≃ 0.65 M_⊙ to capture the evolution history that ultimately defines the CO WDs.
There are immediate differences between the NOV and OV CHeB evolution histories for the σ=0.0 models. These differences are similar for any given σ_i models, and a link to an interactive figure is provided in the online journal to see each rate's OV vs. NOV comparison in greater detail.
For the NOV set, we see gradual growth of the convective core throughout the CHeB phase; the noted central mass fraction isotopes smoothly deplete/grow to reach their final mass fractions; the convective cores have no apparent splitting during the CHeB phase. Further, there is a pure radiative zone throughout the CHeB history. In comparison, the OV set shows convective cores that ebb and flow in their extent, in a saw-tooth like manner; overshooting extends past the inner convective core in a fairly consistent mass length; the OV central mass fraction isotopes ebb and flow symmetrically with the mixing phenomena at any given time.
We also see splittings of the convective core in the OV set. These splittings were not observed in any of the NOV models during the CHeB phase. We presume they are a result of overshoot inclusion. This introduces "pollution" to the pureness of the radiative burning zone, which becomes the R2 region of the WD. The pollution is seen by observing that some of the split-convection zone surpasses the log(L/L_⊙)>3.0 R2 inner edge boundary. This boundary becomes the inner edge of R2 in the cool WDs. The amount of convective pollution within the OV set is minor for σ=0.0, but varies with σ_i.
Figure <ref> qualifies R2 as "Mostly Radiative" for the NOV set due to localized, short-lived, subtle convective occurrences between ≃ 0.30–0.35 M_⊙ near core He depletion energetics. Composition profiles are less sensitive to mixing after CHeB is complete. Any convective pollution from these brief convective periods in the NOV set is insignificant compared to the convective pollution introduced in the OV set.
For both sets, nuclear burning primarily takes place within the convective core. Both sets also show similar burning regions in the mantle outside the core, in the radiative zone. Near the end of core He depletion, nuclear burning in the core extends past the convective and overshooting core regions in the OV set, and burns into the radiative zone. This is not seen in the NOV set.
§.§ WD Adiabatic Pulsation Analysis
How do these evolutionary and WD structural differences impact the WD reaction rate pulsation signatures? We first stress the importance of the NOV models' R2 pure radiative zone during the CHeB. The trapped mode σ_i signature found in C22 resonates the most with this region.
We want to determine if this signature, or any other σ_i pulsation signature, exists when overshooting is considered at the inner R2 boundary during CHeB. First we compare the NOV WD pulsation signatures in this work to those in C22.
§.§ NOV set vs. C22
In this section we briefly describe the main differences between the NOV and C22 models. The models in C22 used a 30 isotope chemical network compared to the 23 isotope network used here. See Appendix B for a comparison. Also, the temporal resolution was greater in C22, especially through CHeB. The most important difference in the NOV models is that we terminated the evolution prior to the first thermal pulse; the models in C22 continued the evolution through the thermal pulse phase of evolution. The overall composition structure of the R1 and R2 regions in our NOV models is quite similar to that in C22.
The NOV set of models in this work found two WD g-mode signals for σ_i rather than one. This is shown in the top two panels of Figure <ref>. Both panels show snapshots of the percent period differences as a function of σ_i, at T_eff = 11,500 K (bright green) and T_eff = 10,000 K (blue) respectively. The y-axis label defines the period differences as (P_σ_0-P_σ_i)/P_σ_0. That is, they are normalized to the pulsation periods of the σ=0 NOV model. The first panel is the signal from g_2 and the second is the signal from g_6. In C22,
the g-mode signature was a trapped mode. Trapped modes are identified from local minima in the kinetic energy diagram <cit.>. The NOV kinetic energy diagrams for all σ_i at these snapshots are shown in the bottom left and right panels of Figure <ref>, following Equation 2 in C22
<cit.>. The figure caption explains the coloring for σ_i. At T_eff = 11,500 K (bottom left panel), the first apparent trapped mode occurs at g_6 for all σ_i, with the exception of σ=0.5, which has its first local minimum of E_kin at g_5. By T_eff = 10,000 K (bottom right panel), all σ_i have the first local minimum in E_kin at g_6, including σ=0.5. This is important as g_6 is one of our signature modes for σ_i. These findings are in overall agreement with C22.
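Operationally, the trapped modes quoted here are simply the local minima of the kinetic energy as a function of radial order. A small Python helper of the kind below (our own, assuming the modes are sorted by radial order) reproduces that selection.

```python
import numpy as np

def trapped_radial_orders(radial_order, log_E_kin):
    """Radial orders whose kinetic energy is a local minimum of
    log E_kin versus n, the usual signature of mode trapping."""
    n = np.asarray(radial_order)
    e = np.asarray(log_E_kin, dtype=float)
    i = np.arange(1, len(e) - 1)
    is_min = (e[i] < e[i - 1]) & (e[i] < e[i + 1])
    return n[i[is_min]]
```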
The trapped g_6 mode signature is not linear with σ_i, but overall shows σ_i<0 to have longer periods than σ=0.0, and σ_i>0 to have shorter periods than σ=0.0.
The R2 contribution to the g_6 period in our NOV models was ∼ 25%. Other regions equally contributed between ∼ 20-30%, meaning that the trapped mode from our NOV set is more equitably trapped among the four regions. Thus, its credibility from R2 isn't as strong as in C22.
Nonetheless, it is not a negligible contribution and can still serve as a viable probe for σ_i.
Our other g-mode signal, g_2, does not appear to be trapped by definition (see other highlighted mode in bottom of Figure <ref>). However, the g_2 period differences are directly proportional to σ_i (first panel of Figure <ref>). This suggests that g_2 is likely distinguishing CO features in the inner regions better than other g-modes. The additional g_2 signal
was either recovered or contrived as a consequence of excluding the thermal pulse history in the evolution. This was the only procedural difference between our models and those in C22.
The direct impact of this procedural difference is expressed by the nearly uniform composition profiles after the C→He transition (see Figure <ref>).
C22 showed variations in these profiles that stemmed from variations in the thermal pulse histories. Eliminating such chemical variations near the R2-R3 interface can placate the g-modes' sensitivity to the R3 and R4 regions, especially for low-order g-modes such as g_2. Figure 9 in
C22 shows g_2 distinguishes σ_i in their thinner atmosphere sequence of models. Thinner atmospheres may also lessen sensitivities to outer regions, allowing lower-order g-modes like g_2 to probe deeper into the CO interior. We therefore suspect g_2 is a viable probe for σ_i if there are uniform composition profiles at the R2-R3 boundary, and/or thinner WD atmosphere models.
We conclude that our NOV pulsation signature results are overall consistent with C22;
we find certain low-order adiabatic WD g-modes which probe the reaction rate probability distribution function. With our two signature modes established, we now discuss the impact that overshoot inclusion has on these pulsation signatures.
§.§ Detailed Analysis of Differences
We first show the pulsation periods as a function of surface temperature for all σ_i models in Figure <ref>. Black dots mark the NOV periods and grey dots mark the OV periods. G-modes with radial orders n=1-10 are annotated, all for ℓ = 1. Figure <ref> shows that there are differences in the periods between the NOV and OV sets, but there is no global systematic offset; the differences between the OV and NOV periods for any given g-mode are random. This is the case even when σ_i is constant. We find that g_6 shows the largest spread in the periods of the models. Further, the kinetic energy diagrams for all models show that g_6 was a trapped mode by T_eff = 10,000 K for every model, regardless of the σ_i, NOV/OV prescription. Since g_6 is one of the signals for σ_i in the NOV models, we point out this feature in Figure <ref>. We will touch on the cause of the larger spread later, but now focus our attention on the detailed pulsation properties of the signature g_2 and g_6 modes.
Figure <ref> shows, from top to bottom, the mass fraction profiles, B, and the g_6 and g_2 mode weight functions ζ for the final WDs at T_eff = 10,000 K. The left and right columns are the NOV and OV results respectively. Here, we show the comparison for σ=0.0, but an interactive figure link is provided in the online article to compare these properties for any σ_i. For all σ_i, NOV/OV comparisons, the dotted vertical lines mark the region boundary locations in each panel. This is useful to compare where the boundary locations are across multiple profile properties. For instance, the R1-R2 boundary marks the C→O transition region, the first most prominent peak in B, and the first peak-like features in g_6 ζ and g_2 ζ in the NOV case. Comparing the OV column to the NOV column, we see the global impacts from overshooting. Overall, prominent features in the NOV set are lessened in magnitude in the OV set. The C→O transition is more gradual, lessening the composition gradient at the defined boundary. This remarkably impacts the shape of B. The first prominent peak after max(O) is much less in magnitude for all σ_i, and is not the only outstanding peak near the boundary. There are now multiple, smaller peaks in B and the g_6 ζ near the R1-R2 boundary as opposed to one.
There are slight deviations between NOV and OV in these profiles for the R3 and R4 regions of the WD, but the R1→R2 region in these profiles was affected most.
The g_6 ζ and g_2 ζ panels in Figure <ref> note the weight percentages per region in the WD. This tells each region's contribution to the overall mode period (frequency). An interesting result for all σ_i is that both the g_2 and g_6 modes decrease the amount of weight in R1 when overshoot is included, and increase the amount of weight in R2. There is also a slight decrease in the weight of R3 for g_2 for all σ_i when overshoot is included. These results are important. The R2 region is the most reliable region in terms of extracting the σ_i rate signature. When overshoot is included, the R2 contribution to the overall pulsation modes in g_2 and g_6 are accentuated, implying that these modes more reliably distinguish σ_i than the NOV set. A quantitative analysis of each region's weight percentage contribution per σ_i is given for both sets in Table <ref> and Table <ref> for g_2 and g_6 respectively. Overall, Table <ref> shows that R2 and R3 are the most heavily weighted regions for g_2's period. G_6 has more equitable weight dispersed across regions, but the combined weight of R1 and R2 accounts for ∼ 50 % of the g_6 period for any given model. As identified in Figure <ref> and Figure <ref>, R1 and R2 are the most impacted regions in this study. A g-mode with about half its weight from those regions may pick up the detailed differences more so than modes weighted more in outer regions. This may explain why Figure <ref> shows a larger spread in the g_6 periods as this g-mode is likely picking up the R1 and R2 contributions to its period better than other g-modes.
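The percentages in the tables below amount to integrating the weight function over each region and normalizing by the total. A short Python sketch of that bookkeeping (our own, assuming ζ is sampled on a radius grid ordered outward) is:

```python
import numpy as np

def region_weight_percentages(r, zeta, r1r2, r2r3, r3r4):
    """Percentage of the mode weight function contributed by R1-R4."""
    edges = [r.min(), r1r2, r2r3, r3r4, r.max()]
    total = np.trapz(zeta, r)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (r >= lo) & (r <= hi)
        out.append(100.0 * np.trapz(zeta[m], r[m]) / total)
    return out
```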
g_2 Weight Function Percentages Per WD Region
σ_i  R1 (NOV)  R1 (OV)  R2 (NOV)  R2 (OV)  R3 (NOV)  R3 (OV)  R4 (NOV)  R4 (OV)
-3.0 0.91 0.75 40.6 41.3 57.0 56.4 1.47 1.47
-2.5 1.14 0.99 40.2 44.2 57.2 52.9 1.43 1.94
-2.0 1.05 0.52 40.2 41.1 57.2 56.9 1.54 1.53
-1.5 1.18 0.53 39.5 41.7 57.9 56.2 1.50 1.50
-1.0 1.16 0.27 40.4 41.5 56.9 56.8 1.48 1.46
-0.5 1.15 0.18 38.8 42.1 58.6 56.3 1.43 1.49
0.0 1.25 0.38 40.6 42.0 56.6 56.1 1.52 1.47
0.5 1.44 0.49 40.8 41.9 56.2 56.2 1.52 1.47
1.0 1.28 0.31 40.4 41.4 56.9 56.7 1.49 1.58
1.5 1.32 0.28 39.9 41.4 57.2 56.8 1.50 1.51
2.0 1.35 0.19 39.4 40.8 57.8 57.5 1.50 1.49
2.5 1.25 0.42 38.3 41.6 58.9 56.6 1.47 1.45
3.0 1.39 2.06 40.2 39.6 56.9 56.8 1.59 1.52
g_6 Weight Function Percentages Per WD Region
σ_i  R1 (NOV)  R1 (OV)  R2 (NOV)  R2 (OV)  R3 (NOV)  R3 (OV)  R4 (NOV)  R4 (OV)
-3.0 25.5 20.1 25.6 32.4 21.1 19.8 27.8 27.8
-2.5 33.1 19.1 29.5 33.5 13.1 20.2 24.2 24.2
-2.0 32.3 16.6 30.8 36.3 13.9 19.7 23.0 23.0
-1.5 33.5 17.3 29.6 39.1 12.6 17.3 24.4 24.4
-1.0 33.8 13.4 30.0 43.1 12.9 17.4 23.3 23.3
-0.5 33.5 11.7 29.8 47.5 12.8 14.9 23.9 23.9
0.0 33.2 15.4 28.9 42.8 12.0 15.5 25.9 25.9
0.5 26.6 16.4 22.5 41.0 13.8 14.0 37.1 37.1
1.0 31.2 14.1 27.1 43.8 12.4 16.1 29.3 29.3
1.5 32.2 13.7 27.4 46.7 12.2 14.7 28.3 28.3
2.0 25.5 11.7 23.0 48.1 14.1 14.3 37.3 37.3
2.5 30.9 14.2 28.0 42.5 12.5 13.8 28.6 28.6
3.0 30.1 32.0 25.5 26.2 12.4 13.8 32.0 32.0
When an integer multiple q of the local radial wavelength λ_r for a given g-mode nearly matches the width of a certain region(s) in a star, the g-mode resonates with that region(s). Figure <ref> shows q·λ_r (R_⊙) as a function of radius R (R_⊙) for the g_2 and g_6 modes. The NOV set doesn't show any particular close matches for any region. But the closest matches to the R2 width were the λ_r curves of g_2, q=1, and g_6, q=2. Further, the g_2, q=2 and g_6, q=3 modes were best at resonating with R3. Larger q values may show stronger resonance with R4. The resonance with R2 is enhanced in the OV set. The g_2, q=1 and g_6, q=2 λ_r curves match much more closely to the R2 width in the OV set. This implies that overshoot has enhanced the g-mode resonance for our signature modes in the region that was constructed mainly from radiative burning (Figure <ref>). We also see stronger resonance within the R1 region with the g_2, q=1 λ_r curve.
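For orientation, in the asymptotic (JWKB) limit the local radial wavenumber of a g-mode of angular frequency σ is k_r^2 ≈ [ℓ(ℓ+1)/r^2](N^2/σ^2 - 1), so λ_r = 2π/k_r. The sketch below is our own approximation, not necessarily the exact quantity plotted in the figure, and shows how q·λ_r can be compared with the width of a region.

```python
import numpy as np

def local_radial_wavelength(r, N2, sigma, ell=1):
    """Asymptotic local radial wavelength of a g-mode:
    lambda_r = 2 pi / k_r, with k_r^2 = (l(l+1)/r^2)(N^2/sigma^2 - 1).
    Returns NaN outside the propagation zone (where N < sigma)."""
    k_r2 = ell * (ell + 1) / r**2 * (N2 / sigma**2 - 1.0)
    k_r2 = np.where(k_r2 > 0.0, k_r2, np.nan)
    return 2.0 * np.pi / np.sqrt(k_r2)
```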
Will the differences between the NOV and OV sets in Figure <ref> impact the WD σ_i pulsation signatures shown in Figure <ref>? Figure <ref> shows the resulting relative period percent differences, as a function of σ_i at T_eff = 11,500 K (bright green) and T_eff = 10,000 K (blue). The period differences are negative for σ_i with longer periods than the σ=0 model, and are positive for σ_i with shorter periods than the σ=0 model for the given NOV or OV set. The left of this figure shows the period differences for g_2, and the right shows the period differences for g_6. The NOV set is indicated by the dotted lines and the OV set by the solid lines.
Looking at g_2, the period differences between NOV and OV at T_eff = 11,500 K are minimal; both sets show a trend of decreasing period with increasing σ_i. At T_eff = 10,000 K, the OV set shows an overall decrease in the percent differences, and a slightly greater variation in the overall σ_i vs. g_2 period difference shape. However, at both temperatures, the same pattern of the g_2 period decreasing with increasing σ_i is sustained with overshoot inclusion.
Further, the magnitude of the percent differences, ranging from ≃ -1.5% to +1.0%, is within the detectable threshold <cit.>.
The OV set shows greater deviation from the NOV line of period percent differences in g_6 more so than in g_2. This is most likely because g_6 is more sensitive to changes from R1 than g_2. Nonetheless, despite the σ=-0.5 and σ=+1.0 outliers, the overall trend remains: σ_i<0 generally have longer periods than σ_0 and σ_i>0 generally have shorter periods than σ_0. Once again, the magnitude of the relative period percent differences surpasses the observable threshold.
An interesting note is that for both g_2 and g_6 signals, the percent differences change more in the NOV set as the models cool from T_eff = 11,500 K to 10,000 K than in the OV set. The OV set showed nearly the same period differences at both temperatures.
§ DISCUSSION
C22 found pulsation signature(s) for the experimental reaction rate probability distribution function. They describe four sensitivities that may impact this result: width of the O→C transition, mixing during CHeB, thermal pulse history on the AGB, and the 3α reaction rate.
This work investigated the impact that overshoot inclusion had on the reaction rate pulsation signature(s). Doing so, we address the width of the O→C transition and mixing during CHeB. Further, by ignoring the thermal pulse history in our models, we also address the sensitivity to the number of thermal pulses, albeit, the trivial case when the number of thermal pulses is zero. In the following paragraphs, we discuss how these three sensitivities impacted our results. We further caution how our results could be impacted from further sensitivity investigations.
Including overshooting overall increased the width of the O→C transition for all σ_i cool WDs. This lessened the sharp peak in B at the O→C transition, and decreased the peak in g_6 ζ at the O→C transition. While the transition peak was lessened and dispersed into R2, widening the O→C transition shows an enhancement of both the weight contribution to the R2 region for g_2 and g_6, and the R2 resonance with λ_r for g_2 and g_6. The widening of the O→C transition was from the combined effects of overshoot inclusion and the σ_i prescription. We conclude that widening the O→C transition imposes differences in B, ζ, and the pulsation periods. Despite these changes, we still find the g_2 and g_6 relative period differences in the NOV and OV sets to distinguish the reaction rate probability distribution function. Namely, the pattern of decreasing period with increasing σ_i persisted in both NOV and OV sets. By itself, the inclusion of overshooting does not destroy the seismic signatures of the reaction rate in our WD models – which was the primary question of this study.
We caution that increasing (decreasing) the width of the O→C transition in CO WD models could potentially yield different results. Our CO WD models were informed from their evolution history, with the stated model parameters. Thus, an increase (decrease) of the width of the O→C transition may come from choosing different mixing processes, prescriptions and parameters, such as for convection and overshooting. A change in the width of the O→C transition may also come from mixing processes not considered in this study such as
time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing,
or first-order phase separations of the CO mixture <cit.>.
Ignoring the thermal pulse history gave an additional low-order adiabatic g-mode signature for σ_i, namely the g_2 signal. This signal was not found in C22, where the thermal pulse history was included. Future studies on the thermal pulse phase of evolution with different temporal and spatial resolutions are needed to determine the sustainability of the g_2 signal as a probe for σ_i.
Concurrently, future studies could also explore the interaction, if any, between the thermal pulses and overshooting during CHeB on the chemical profiles.
The CO cores of WDs are the result of the competition between 3α and ^12C(α, γ)^16O during CHeB. An experimental 3α reaction rate probability distribution function, similar to the existing one for ^12C(α, γ)^16O
<cit.>, does not yet exist to our knowledge, although a probability distribution function could be constructed using the STARLIB reaction rate library <cit.>.
Future studies involving both reaction rate probability distribution functions could probe properties of DAV WD models in the 3α rate - ^12C(α, γ)^16O rate plane. For example, the 3α reaction rate is likely to slowly modulate the central ^16O mass fraction at any ^12C(α, γ)^16O reaction rate because 3α controls the production of ^12C. The ^12C(α, γ)^16O reaction rate will likely modulate the central ^16O mass fraction more strongly at any 3α reaction rate. We speculate that the radiative region R2 will exist in all such models. We also suspect that all such models, whether terminated at the first thermal pulse or evolved through the thermal pulse phase, will show a trapped mode, with substantial trapping from R2, that best probes the ^12C(α, γ)^16O burning reaction rates (i.e. g_6 in this work, and see Figure 9 in C22). We caution that the relative period shifts we find in this work from considering the ^12C(α, γ)^16O probability distribution and overshooting may change when a 3α reaction rate probability distribution function is also considered.
<cit.> found that including overshooting impacted ensuing WD pulsations by ∼ 2-5 s.
Their results were independent of their ^12C(α, γ)^16O reaction rate uncertainty evaluation. We combined the effects of overshooting and the ^12C(α, γ)^16O reaction rate sensitivities in our pulsation analysis, and likewise find period differences of similar magnitudes. Our reaction rate analysis spanned the current experimental ^12C(α, γ)^16O probability distribution function, which analyzed different rate values than those explored in <cit.>. They concluded that the ^12C(α, γ)^16O uncertainty was less relevant than overshooting. In this study, we find that the combined effects from overshooting and the ^12C(α, γ)^16O reaction rate probability distribution function yield remarkable differences in the structure of the CO WDs, and pulsation differences. Despite these differences, we still find pulsation signatures for σ_i.
We conclude this section by discussing the physical meaning of our results. Overall, both g_2 and g_6 signatures generally state that the periods decrease with increasing σ_i. Put another way, increasing the amount of ^16O in the WDs shortens the periods of these signature modes. This trend was also seen in <cit.>, namely, as the amount of ^22Ne was increased in the WDs, the periods, for all g-modes analyzed, were shorter. The reasoning of the result came from analyzing the components of the frequency equation. One of the largest drivers of the period differences was an increase in pressure scale height with increasing ^22Ne abundance. If one likens pressure scale height to tension in a string, increasing the tension in a string will shorten its period. WDs are not strings, but the line of reasoning is analogous.
One might wonder why not all g-modes display this trend. Why is it only g_2 and g_6? In <cit.>, the presence/absence of ^22Ne was throughout ∼99% of the WD's composition structure. Thus, a uniform increase (decrease) in ^22Ne impacts all regions of the WD equally, which is likely the reason for the global offsetting of periods for all g-modes. In comparison, increasing and decreasing the reaction rate imposes a coupled effect on both ^12C and ^16O, which is not uniform for all regions in a WD's structure. The R1 and R2 regions are most affected by the reaction rate, with some impact on the inner part of the R3 region. Our above analysis found that the R1 and R2 regions gave larger contributions to the periods of the g_2 and g_6 signature modes more so than other g-modes. This is the most probable reason why only certain modes are capable of distinguishing the reaction rate, within the conditions of the present analysis.
§ SUMMARY
We conducted a search for signatures of the current
experimental ^12C(α, γ)^16O reaction rate probability distribution function in the pulsation periods of CO WD models with the inclusion of overshooting. We found two signature adiabatic g-modes that show period differences tracking the reaction rate probability distribution function σ_i regardless of whether or not overshoot is included. We find that the g_2 period difference signature is inversely proportional to σ_i. Without overshoot, the g_2 relative period differences span ± 0.9%. With overshoot, the g_2 relative period differences range from -1.33% to 0.47%. The average magnitudes of the relative period differences for g_2 were 0.46% (NOV) and 0.44% (OV) respectively. The g_6 period differences were larger in magnitude, spanning from -3.44% to 1.78% for NOV and -2.02% to 1.58% for OV. The average magnitudes of the g_6 period differences were 1.21% (NOV) and 0.95% (OV) respectively. The average magnitudes of the g_2 and g_6 period differences were slightly decreased from the NOV set.
We found that the R2 weight contribution to these g-modes was enhanced with overshoot inclusion. The R2 region remains the best identifying region for tracing the reaction rate probability distribution function. This is because even with overshoot inclusion, it is predominantly constructed by radiative burning during CHeB.
Regardless of whether or not overshooting is considered, we find:
* two signature g-modes, g_2 and g_6 probe σ_i
* g_2 is inversely proportional to σ_i and g_6 is a trapped mode
* the g_2 and g_6 periods are generally shorter for positive σ_i and longer for negative σ_i
* both signatures have period deviations within the detectable regime
These findings suggest that an astrophysical constraint on the reaction rate probability distribution function remains, in principle,
extractable from the period spectrum of observed variable WDs.
§ ACKNOWLEDGEMENTS
We thank James Deboer for sharing the ^12C(α,γ)^16O probability
distribution function, Josiah Schwab for sharing wd_builder,
and Pablo Marchant for sharing mkipp.
We acknowledge using ChatGPT <cit.> to polish the language of one paragraph <cit.>.
This research is supported by NASA under the Astrophysics Theory Program grant NNH21ZDA001N-ATP, and in part by the National Science Foundation under Grant No. NSF PHY-1748958.
This research made extensive use of the SAO/NASA Astrophysics Data System (ADS).
<cit.>,
20190830 <cit.>,
wd_builder <https://github.com/jschwab/wd_builder>,
<cit.>,
mkipp <https://github.com/orlox/mkipp>,
<cit.>,
<cit.>, and
<cit.>.
§ MICROPHYSICS IN MESA
The MESA EOS is a blend of the OPAL <cit.>, SCVH
<cit.>, FreeEOS <cit.>, HELM <cit.>,
PC <cit.>, and Skye <cit.> EOSes.
Radiative opacities are primarily from OPAL <cit.>, with low-temperature data from <cit.>
and the high-temperature, Compton-scattering dominated regime by
<cit.>. Electron conduction opacities are from
<cit.> and <cit.>.
Nuclear reaction rates are from JINA REACLIB <cit.>, NACRE <cit.> and
additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>.
Thermal neutrino loss rates are from <cit.>.
§ MODEL OPTIMIZATION AND RESOLUTION
§.§ Reduced Chemical Network
Our evolutionary models are computationally expensive. This paper is concerned with overshooting and the reaction rate probability distribution function, which primarily dictate the evolutionary processes and consequences of the CHeB phase. The isotopes most impacted during CHeB are , , and . and are the next two most impacted isotopes during CHeB. We thus optimize the efficiency of our models by reducing the number of isotopes in the chemical network from 30 to 23. The eliminated isotopes are ^21Ne, ^21,22,23Na, ^23,24Mg, and ^56Fe. A comparison of the resulting inner mass fraction profiles of the 5 most abundant isotopes is shown in Figure <ref> for each chemical network. This figure shows the profiles at the completion of CHeB. Both network models used the same temporal and spatial resolution during CHeB. The run-time was reduced from a few days to a few hours on 12 cores. All resolution studies were conducted with σ=0.0 without overshoot (NOV).
Reducing the network impacted [22] most, with an offset of ∼ 22% more [22] in the 23 isotope network. We note that C22 used a 30 isotope network, and our overall signature results persist through variations in heavier isotopes.
§.§ Temporal Resolution
Several timestep limiters in MESA help optimize convergence studies. In this paper, we want to limit the timestep to achieve the temporal resolution that yields a smooth evolution of the central , , and abundances during CHeB. We first utilize the delta_XC_cntr_limit limiter. This limits the amount the central carbon abundance can change in a given timestep. To help optimize computational run-time, we begin limiting the change in central carbon during CHeB when the central helium abundance X(_c)<0.6. This is done by adding the following lines of code in the run_star_extras.f90 file:
This temporal resolution was used for the 30 and 23 isotope network models. We refer to it as resolution A. The remaining temporal resolution studies were performed using the 23 isotope chemical network.
The next iteration of increased temporal resolution modified the run_star_extras.f90 file to include the following:
This resolution is employed slightly earlier during CHeB, when X(_c)<0.5. In addition to the limits of resolution A, we added limits on the change in central temperature and density. This is resolution B.
Our third resolution iteration used the following limiter controls in the run_star_extras.f90 file:
This is resolution C. We have set the limiters at the start of CHeB, and have decreased the limiter values from those in resolution B.
A comparison of resolutions A, B, and C is shown in Figure <ref>. In each column, the solid light curves represent resolution A, the dotted curves B, and the dark solid curves C.
The left panel shows the evolution of the central abundances of , , and during CHeB, from X(_c)≲0.6 until the completion of CHeB. The central abundances for resolutions A and B are nearly identical. Resolution C varies slightly, with the final central abundance reaching a slightly larger value than in resolutions A and B. Further, all three resolutions show a smooth evolution of these central abundances throughout CHeB.
The middle plot in Figure <ref> shows the mass fraction profiles at the completion of CHeB. We show the 5 most abundant isotope profiles for each resolution. The and profiles for A are noticeably different from the profiles for B and C, especially after the O→C transition. This is more apparent in the right plot of Figure <ref>, which zooms in on the and profiles of the three resolutions. Resolution B follows A in the core, but then more closely aligns with C after the O→C transition. Since resolutions B and C agree well, with only a slight difference in the central and abundances, we set resolution C as the standard temporal resolution for our 13 models.
aasjournal
|
http://arxiv.org/abs/2307.05270v1 | 20230711140412 | APRF: Anti-Aliasing Projection Representation Field for Inverse Problem in Imaging | [
"Zixuan Chen",
"Lingxiao Yang",
"Jianhuang Lai",
"Xiaohua Xie"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
APRF: Anti-Aliasing Projection Representation Field for Inverse Problem in Imaging
Zixuan Chen,
Lingxiao Yang, Member, IEEE,
Jianhuang Lai0000-0003-3883-2024, Senior Member, IEEE,
and Xiaohua Xie0000-0002-0310-4679 Member, IEEE
Manuscript received XXX XX, XXXX; revised XXX XX, XXXX. This work was supported in part by the National Natural Science Foundation of China under Grant 62072482.
(Corresponding author: Xiaohua Xie.) All the authors are with the School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; and with the Guangdong Province Key Laboratory of Information Security Technology, Guangzhou 510006, China; and also with the Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006, China. (e-mail: [email protected]; [email protected]; [email protected]; [email protected].)
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Sparse-view Computed Tomography (SVCT) reconstruction is an ill-posed inverse problem in imaging that aims to acquire high-quality CT images based on sparsely-sampled measurements.
Recent works use Implicit Neural Representations (INRs) to build the coordinate-based mapping between sinograms and CT images.
However, these methods have not considered the correlation between adjacent projection views, resulting in aliasing artifacts on SV sinograms.
To address this issue, we propose a self-supervised SVCT reconstruction method – Anti-Aliasing Projection Representation Field (APRF), which can build the continuous representation between adjacent projection views via the spatial constraints.
Specifically, APRF only needs SV sinograms for training, which first employs a line-segment sampling module to estimate the distribution of projection views in a local region, and then synthesizes the corresponding sinogram values using a center-based line integral module.
After training APRF on a single SV sinogram itself, it can synthesize the corresponding dense-view (DV) sinogram with consistent continuity.
High-quality CT images can be obtained by applying re-projection techniques on the predicted DV sinograms.
Extensive experiments on CT images demonstrate that APRF outperforms state-of-the-art methods, yielding more accurate details and fewer artifacts.
Our code will be publicly available soon.
Imaging Inverse Problem, Sparse-View Computed Tomography (SVCT), Implicit Neural Representation (INR), Self-Supervised Learning, Anti-Aliasing.
§ INTRODUCTION
Computed Tomography (CT) is a diagnostic imaging procedure that uses a combination of X-rays to observe the internal structure of the scanned objects.
It consists of two steps: i) emitting X-rays in a circle around the scanned subjects and storing the information about attenuation properties at each projection angle in the sinograms; ii) using re-projection techniques like <cit.> to transform the sinograms into CT images.
Acquiring high-quality CT images requires densely-sampled projection views, which means that subjects must be scanned for long periods of time without moving.
Exposure to such prolonged ionizing radiation may increase the risk of cancer in subjects <cit.>.
Consequently, Sparse-View Computed Tomography (SVCT) reconstruction, i.e., reconstructing CT images based on sparsely-sampled measurements, can significantly reduce the ionizing radiation from CT scans, which is of great concern in the field of public healthcare.
SVCT reconstruction is a highly ill-posed inverse problem.
Since directly using the conventional re-projection techniques <cit.> and <cit.> may produce severe artifacts, early studies <cit.> combined <cit.> with the regularization terms for optimization.
Subsequently, with the development of deep learning, many methods <cit.> proposed different Convolutional Neural Networks (CNNs) to learn the mapping between SV sinograms and CT images.
These CNN-based methods are fully-supervised, requiring considerable SV sinogram and CT image pairs for training.
Recently, implicit neural representation (INR) techniques <cit.> have yielded impressive performance in computer vision for numerous tasks.
Many studies <cit.> utilize the INR techniques to handle the SVCT reconstruction challenges.
One representative strategy is to build the coordinate-based field that directly represents the sinograms <cit.>.
The other strategy is to build the coordinate-based representation field of CT density, and then synthesize the corresponding sinograms based on the differentiable Radon transformation <cit.>.
However, these INR-based methods have not considered the spatial correlations between adjacent projections, producing blurry contents <cit.> and severe artifacts <cit.> (see Figure <ref>).
To address the above issues, we propose an Anti-Aliasing Projection Representation Field (APRF), which aims to model the correlation between adjacent projection views from the SV sinograms.
Unlike the above CNN-based methods, our APRF is a self-supervised SVCT reconstruction method, which is trained on the SV sinograms without the reference of CT images.
Specifically, we propose a line-segment sampling module to randomly sample N projection angles within a spatial region.
To synthesize the corresponding sinogram values, we also propose a center-based line integral module, which performs numerical integration over the sampled coordinates.
After optimization, the internal regions between adjacent projection views can be indirectly modeled by the spatial constraints, and thus APRF can yield the dense-view (DV) sinogram with consistent continuity.
By applying re-projection techniques like <cit.> on the predicted DV sinograms, our APRF can acquire high-quality CT images (see Figure <ref>).
Comprehensive experiments on COVID-19 <cit.> and KiTS19 <cit.> datasets demonstrate the superiority of our APRF, whose reconstruction quality surpasses state-of-the-art methods on parallel-, fan- and cone-beam SVCT images.
In summary, the main contributions are:
* We discover that existing INR-based methods are vulnerable to aliasing errors in the projection domains, and propose an anti-aliasing SVCT reconstruction method.
* We propose a line-segment sampling module and a center-based line integral module to alleviate aliasing errors via spatial constraints.
* We conduct a series of experiments to demonstrate the effectiveness of our model.
§ RELATED WORKS
In this section, we first briefly review the Implicit Neural Representation (INR) techniques and their derived applications in computer vision and medical image reconstruction.
We then introduce some impressive advances in sparse-view computed tomography (SVCT) reconstruction, including some very recent INR-based methods.
§.§ Implicit Neural Representation
Modeling a continuous representation from discretely sampled data is a long-standing problem in image reconstruction.
Recent studies propose implicit neural representation (INR) techniques to solve this ill-posed problem, which aim to build a coordinate-based representation field from the collected samples.
Specifically, these INR-based methods use the Multi-Layer Perceptron (MLP) to encode the coordinates and learn the mapping between coordinates and training samples.
After training, based on the image continuity priors, these coordinate-based representation fields can produce the i.i.d samples corresponding to the training set.
INR techniques have yielded impressive advances in image reconstruction for numerous tasks: single image super-resolution <cit.>, video super-resolution <cit.>, novel view synthesis <cit.>, generative modeling <cit.>, and editing <cit.>.
Recently, some researchers have tried to adapt INR techniques to the medical domain.
Corona-Figueroa et al. <cit.> adapted <cit.> to synthesize novel projection views with training on multi-view projection images, while Chen et al. <cit.> proposed the cube-based modeling strategy to reform NeRF <cit.> for upsampling medical images at arbitrary scales.
§.§ Sparse-View Computed Tomography Reconstruction
Sparse-View Computed Tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements, which can significantly reduce ionizing radiation.
Initial studies proposed analytical reconstruction methods: the Filtered Back-Projection (FBP) <cit.>, and the iterative methods: <cit.> to transform the sinograms into CT images.
However, these methods have difficulty in obtaining high-quality CT images from the sparse-view (SV) sinograms and produce severe artifacts in their results.
To eliminate the artifacts, <cit.> combined <cit.> with the regularization terms for optimization.
With the advent of Convolutional Neural Networks (CNNs), <cit.> built the CNN-based models to address the challenges of SVCT reconstruction.
<cit.> reformed the U-Net <cit.> architecture to learn the mapping between the sparse- and full-view CT pairs on a large dataset, while <cit.> designed dense blocks to fuse hierarchical features.
Recent works <cit.> use INR techniques to construct the coordinate-based mapping between SV sinograms and CT images.
These INR-based methods can be divided into two groups: i) <cit.> simulate the density field of CT images and synthesize SV sinograms via differentiable FBP <cit.> techniques for optimization; ii) <cit.> build the coordinate-based representations of SV sinograms and apply FBP <cit.> on the synthesized DV sinograms to acquire CT images.
§ METHOD
In this section, we first analyze the reasons why existing INR-based methods suffer from aliasing errors in the projection domains.
Subsequently, we propose Anti-Aliasing Projection Representation Field (APRF) to address this problem, which consists of two modules: line-segment sampling and center-based line integral.
These two modules are introduced in the following subsections, and we also demonstrate why the aliasing errors can be eliminated by them.
Finally, we report the details of our methods, including spatial normalization and the architecture of Multi-Layer Perceptron (MLP).
The overall architecture is depicted in Figure <ref>.
As shown, in the training stage, APRF trains the models on a single SV sinogram itself.
APRF employs the proposed line-segment sampling module (a) and center-based line integral module (b) to learn the mapping between the sparsely-sampled coordinates and the corresponding sparse-view (SV) sinogram pixels.
In the test stage, given a dense coordinate set, APRF first synthesizes the corresponding DV sinograms via (a) and (b), and then applies FBP <cit.> on the predicted results to reconstruct CT images.
§.§ Analysis on Aliasing Errors in the Projection Domains
Existing INR-based SVCT reconstruction methods aim to build the mapping between position coordinates and sinogram pixels.
However, we discover that these methods have not considered building the correlation between adjacent projections, leading to aliasing errors in the projection domains.
We provide a vivid example in Figure <ref> (a) to explain our findings.
As shown, these INR-based methods can only model the points corresponding to sparsely-sampled projection angles, while the internal regions between adjacent projection views have never been directly modeled during training.
These unmodeled spaces may cause aliasing errors in building dense projection views, yielding blurred results <cit.> and severe artifacts <cit.> (see Figure <ref>).
To alleviate aliasing errors, we argue that it is required to build the correlation between adjacent projection views.
As shown in Figure <ref> (b), sampling a line-segment region instead of a single point can build the distance-based correlation within the unmodeled spaces.
After optimization, the model can build the continuous representation between adjacent projections, reducing the aliasing errors.
To achieve our motivation, we propose line-segment sampling module to sample projection angles within a line-segment region.
We also propose a center-based line integral module to synthesize corresponding sinogram values.
After optimization, the internal regions between adjacent SV projection views can be indirectly modeled by the spatial constraints, and thus APRF can yield the dense-view (DV) sinogram with consistent continuity.
§.§ Line-Segment Sampling
Implicit neural representation (INR) techniques aim to learn the continuous representation of projection views from sparse-view (SV) sinograms.
However, the above analysis demonstrates that existing INR-based SVCT methods suffer from aliasing errors in the projection domains, leading to blurry results and severe artifacts.
To address this issue, we propose a novel sampling strategy: line-segment sampling, which samples a line-segment region instead of a single point to build the correlation between adjacent projection views.
Specifically, to predict the sinogram pixel at the location O_t, we construct a line-segment region ℓ(O_t, ρ) with the center of O_t, where ρ is the length of that line segment.
To estimate the distribution of ℓ(O_t, ρ), we first sample N points within that region as a point set {x_i}_i=1^N.
Each point is sampled by:
x_i∼ U[ℓ(O_t, ρ)],
where U denotes the uniform distribution.
Then we feed the point set {x_i}_i=1^N and the center O_t into a multi-layer perceptron (MLP) network F_Θ to predict a set of density {σ_i}_i=1^N and intensity {c_i}_i=1^N within the sampled region ℓ(O_t, ρ) by:
σ_i, c_i = F_Θ(γ(O_t), γ(x_i)),
where γ(·) denotes the positional encoding introduced in <cit.>, which can be formulated as:
γ(y)=y⋃^ω-1_i=0(sin(2^iy), cos(2^iy)), where ω∈ℕ,
where y is an arbitrary input vector, ω is set to 10 as default.
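For concreteness, the frequency-based encoding γ(·) above can be written as a short function. The sketch below is only an illustrative NumPy implementation that we add here; apart from the ω=10 default stated in the text, the function name and array-handling choices are our own assumptions.

```python
import numpy as np

def positional_encoding(y, omega=10):
    """Concatenate the raw vector y with sin/cos features at frequencies 2^0 ... 2^(omega-1)."""
    y = np.atleast_1d(np.asarray(y, dtype=np.float64))
    feats = [y]
    for i in range(omega):
        feats.append(np.sin((2.0 ** i) * y))
        feats.append(np.cos((2.0 ** i) * y))
    return np.concatenate(feats)  # length: len(y) * (2 * omega + 1)

# Example: a 3D field coordinate is mapped to a 63-dimensional feature.
gamma = positional_encoding([0.1, -0.4, 0.7])
print(gamma.shape)  # (63,)
```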
Finally, since σ_i and c_i denote the density and intensity of x_i, we can build the continuous functions σ(O_t, ·) and c(·) to estimate the distribution of the line-segment region ℓ(O_t, ρ) from these discrete samples.
Note the density function is related to (O_t, x_i) while the intensity function is only related to x_i, because we assume the density distribution of each line-segment region is independent, while the intensity of all points is fixed.
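A minimal sketch of the line-segment sampling step is given below. We assume the segment is parameterized by a center O_t, a unit direction along the projection-view axis, and the length ρ; the direction vector and the helper names are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def sample_line_segment(center, direction, rho, n_points, rng=None):
    """Draw n_points uniformly from the segment of length rho centered at `center`
    along the unit vector `direction`."""
    rng = np.random.default_rng() if rng is None else rng
    center = np.asarray(center, dtype=np.float64)
    direction = np.asarray(direction, dtype=np.float64)
    direction = direction / np.linalg.norm(direction)
    # Signed offsets in [-rho/2, +rho/2] around the center O_t.
    offsets = rng.uniform(-rho / 2.0, rho / 2.0, size=n_points)
    points = center[None, :] + offsets[:, None] * direction[None, :]
    return points, offsets

# Example: 65 training samples around a field coordinate O_t (direction is assumed).
O_t = np.array([0.03, -0.2, 0.5])
view_axis = np.array([1.0, 0.0, 0.0])
pts, offs = sample_line_segment(O_t, view_axis, rho=0.01, n_points=65)
print(pts.shape)  # (65, 3)
```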
§.§ Center-based Line Integral
To obtain the sinogram pixel 𝐂(O_t,ρ) at O_t, an intuitive solution is to calculate the line integral as:
𝐂(O_t,ρ) =1/ρ∑σ_i∫_0^ρσ(O_t, ν)c(O_t + ν)dν,
where ν is the distance between each sampling point x_i to the starting point O_t-ρ/2, and 1/ρ∑σ_i is the normalization term to ensure the integral output within a reasonable range.
However, Eq. <ref> neglects the image continuity priors within the sampling region ℓ(O_t,ρ), leading to sub-optimal results.
Inspired by <cit.> that assigns the nearby pixels with similar potentials, we assume the density of each point x_i within the line-segment region ℓ(O_t,ρ) is attenuated from the center O_t to the ends.
We employ the Lambert-Beer Law to estimate the distribution of attenuation coefficients within the line-segment region ℓ(O_t, ρ), and thus the proposed center-based line integral can be formulated as:
𝐂(O_t,ρ) = ∫_0^ρ/2c(O_t + ν)(1-exp(-σ(O_t,ν)))/exp(∫_0^νσ(O_t,ν')dν')dν,
where ν=O_t-x_i denotes the distance between each sampling point x_i and the center O_t.
Given N sampling points by the proposed line-segment sampling, we first sort these points by the distance between the center O_t and themselves.
Subsequently, the above center-based line integral can be approximated via numerical quadrature rules:
𝐂̂(O_t,ρ)=∑^N-1_i=11-exp(-σ(O_t,ν_i)(ν_i+1-ν_i))/exp(∑_j=1^iσ(O_t, ν_j)(ν_j+1-ν_j))c(O_t + ν_i),
where 𝐂̂(O_t,ρ) denotes the approximated results of 𝐂(O_t,ρ).
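The numerical quadrature above closely resembles volume-rendering-style compositing, and one possible PyTorch sketch is shown below. The density and intensity inputs stand in for MLP outputs, the accumulated sum is taken inclusively as in the quadrature formula, and all other details are our own assumptions.

```python
import torch

def center_based_line_integral(sigma, c, nu):
    """Numerical quadrature of the center-based line integral.
    sigma, c : (N,) densities and intensities predicted by the MLP
    nu       : (N,) distances of the sampled points from the center O_t"""
    order = torch.argsort(nu)               # sort points by distance to the center
    sigma, c, nu = sigma[order], c[order], nu[order]
    delta = nu[1:] - nu[:-1]                # spacing nu_{i+1} - nu_i
    alpha = 1.0 - torch.exp(-sigma[:-1] * delta)            # local opacity
    transmittance = torch.exp(-torch.cumsum(sigma[:-1] * delta, dim=0))  # inclusive sum
    return torch.sum(alpha * transmittance * c[:-1])

# Toy usage with random stand-ins for MLP outputs at N = 65 sampled points.
N = 65
sigma = torch.rand(N, requires_grad=True)
intensity = torch.rand(N, requires_grad=True)
nu = torch.rand(N)
pred = center_based_line_integral(sigma, intensity, nu)
loss = (pred - 0.5) ** 2    # squared error against one sinogram pixel (cf. the MSE loss below)
loss.backward()
```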
Given the predicted result 𝐂̂(O_t,ρ) at Eq. <ref> using N sorted sampling points, APRF can be optimized in the following MSE loss ℒ:
ℒ = 1/len(ℬ)∑_O_t∈ℬ‖ g.t.-𝐂̂(O_t,ρ)‖_2^2,
where ℬ denotes the SV coordinates in a batch, and g.t. is the corresponding SV sinogram pixels.
§.§ Spatial Normalization & Multi-Layer Perceptron
Spatial Normalization.
To build the coordinate-based representations field for the given SV sinograms, we first normalize the coordinates from the projection domains to the field spaces within the range of (-1, 1) to adapt the positional encoding γ(·) in Eq. <ref>.
For example, given a L×H×W 3D sinogram, we convert the sinogram coordinate (l, h, w) into the corresponding field coordinate O_t by:
O_t=((2l-L)/(L+2P), (2h-H)/(H+2P), (2w-W)/(W+2P)),
where P, set to 1, is a padding size that restricts O_t to the range (-1, 1), L denotes the number of projection views, and H and W indicate the height and width of the detector plane.
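A direct translation of this normalization into code might look as follows; the function name and the example dimensions are illustrative only.

```python
def normalize_coordinate(l, h, w, L, H, W, P=1):
    """Map a sinogram index (l, h, w) to a field coordinate in (-1, 1) with padding size P."""
    return ((2 * l - L) / (L + 2 * P),
            (2 * h - H) / (H + 2 * P),
            (2 * w - W) / (W + 2 * P))

# Example: the first entry of a 60 x 512 x 512 sparse-view sinogram.
print(normalize_coordinate(0, 0, 0, L=60, H=512, W=512))
```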
The Architecture of Multi-Layer Perceptron.
We reform the multi-layer perceptron (MLP) architecture introduced in <cit.> to adapt the proposed line-segment sampling.
As shown in Figure <ref>, we employ the positional encoding in Eq. <ref> to enrich the feature of O_t and x_i before feeding them into the MLP.
Since σ_i is related to (O_t, x_i) while c_i is only corresponding to x_i, we decouple the features of σ_i and c_i.
Specifically, we first feed γ(x_i) into the MLP to predict the intensity c_i, and then we concatenate the features of intensity and the input center γ(O_t) to predict the density σ_i.
Because the proposed center-based line integral is differentiable, the MLP can be optimized by minimizing the MSE loss ℒ in Eq. <ref> between the predicted results and SV sinogram pixels.
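A rough PyTorch sketch of this decoupled design is given below. Only the input/output split (intensity predicted from γ(x_i), density predicted from the intensity features concatenated with γ(O_t)) follows the description above; the layer widths, depths, activations, and output ranges are our own assumptions.

```python
import torch
import torch.nn as nn

class APRFMLP(nn.Module):
    """Sketch: intensity from the encoded point, density from those features plus the encoded center."""
    def __init__(self, in_dim=63, hidden=256):
        super().__init__()
        self.point_branch = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.intensity_head = nn.Linear(hidden, 1)
        self.density_branch = nn.Sequential(
            nn.Linear(hidden + in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, gamma_center, gamma_point):
        feat = self.point_branch(gamma_point)
        c_i = torch.sigmoid(self.intensity_head(feat))            # intensity, assumed in [0, 1]
        sigma_i = torch.relu(self.density_branch(
            torch.cat([feat, gamma_center], dim=-1)))             # non-negative density
        return sigma_i, c_i

# Example forward pass with a batch of 65 encoded samples (63-dim features, matching the sketch above).
mlp = APRFMLP()
sigma, c = mlp(torch.rand(65, 63), torch.rand(65, 63))
print(sigma.shape, c.shape)  # torch.Size([65, 1]) torch.Size([65, 1])
```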
§ EXPERIMENTS
In this section, we conduct extensive experiments and in-depth analysis to demonstrate the superiority of the proposed APRF for sparse-view computed tomography (SVCT) reconstruction.
We utilize the same hyperparameters and model settings in all experiments for fair comparisons.
§.§ Experimental Details
Compared Methods.
We conduct a comprehensive comparison over 6 state-of-the-art methods, including 1 conventional analytical reconstruction method: FBP <cit.>; 1 fully-supervised deep learning-based method: FBPConvNet <cit.>; and 4 recent self-supervised INR-based methods: CoIL <cit.>, SCOPE <cit.>, GRFF <cit.> and NeRP <cit.>.
FBP is re-implemented by ODL <cit.>, we follow the settings released on the official tutorials[https://github.com/odlgroup/odl/tree/master/examples/tomo] to deal with the re-projection of parallel, fan and cone X-ray beam sinograms.
For the fully-supervised methods: FBPConvNet[https://github.com/panakino/FBPConvNet] <cit.>, we use the MatConvNet toolbox <cit.> to train them on the training set extracted from COVID-19 <cit.> dataset.
Specifically, for each X-ray beam (parallel, fan and cone X-ray beams) and projection number (45, 60, and 90), we re-train a specific model for the corresponding CT reconstruction.
The maximum iteration is set to 100 (epochs) and the learning rate decreases logarithmically from 1e^-2 to 1e^-3.
We employ Adam <cit.> as the optimizer and batch size is set to 8.
For the self-supervised INR-based methods, we directly train them on a single SV sinogram itself to reconstruct CT images.
Since the output of CoIL <cit.>, SCOPE <cit.> and our APRF is a DV sinogram, we apply the same re-projection process on the predicted DV sinograms to obtain the CT images.
Note that the results of CoIL[https://github.com/wustl-cig/Cooridnate-based-Internal-Learning] <cit.>, SCOPE[https://github.com/iwuqing/SCOPE] <cit.>, GRFF[https://github.com/tancik/fourier-feature-networks] <cit.> and NeRP[https://github.com/liyues/NeRP] <cit.> are obtained using their official implementations.
Datasets.
The experimental data are selected from 2 publicly available CT datasets: COVID-19 <cit.> and 2019 Kidney Tumor Segmentation Challenge (KiTS19) <cit.> datasets.
Since the proposed APRF is the self-supervised method, i.e., only trained on the test CT image itself at test time without the demands of large training data, we do not require to build an extra dataset for training.
1) COVID-19 <cit.> dataset, containing a large number of 3D CT volumes from 1000+ subjects with confirmed COVID-19 infections, is a large-scale CT dataset for the clinical study of the COVID-19 virus.
We randomly select 600 2D slices from 100 subjects, each slice is extracted on axial view from the 3D CT volumes.
Specifically, we select 500 slices from 50 subjects as the training set, 60 2D slices from 10 subjects as the validation set, and 40 2D slices from the remaining 40 subjects as the test set.
All the 2D CT slices have the same image dimension of 512×512.
Note that the training and validation sets are only employed to train the fully-supervised methods: FBPConvNet <cit.>, while FBP <cit.>, CoIL <cit.>, SCOPE <cit.>, GRFF <cit.>, NeRP <cit.> and the proposed APRF, are only trained on the test SV sinograms itself to reconstruct CT images.
2) 2019 Kidney Tumor Segmentation Challenge (KiTS19) <cit.> dataset consists of arterial phase abdominal CT scans from 300 unique kidney cancer subjects who underwent partial or radical nephrectomy.
Note that FBPConvNet <cit.> is only designed for 2D SVCT reconstruction, and thus we only prepare the test set to evaluate the performance of FBP <cit.> and other self-supervised INR-based methods.
Specifically, we randomly select 40 3D CT volumes, each 3D CT volume is resized into 256×256×80 image dimension.
Dataset Simulation.
For thoroughly evaluating the reconstruction quality of SVCT with various X-ray beams, we follow the strategies in <cit.> to simulate parallel, fan and cone X-ray beam sparse-view (SV) sinograms by projecting the raw CT images using Operator Discretization Library (ODL) <cit.>.
Specifically, we follow the above-mentioned official settings to extract parallel- and fan-beam SV sinograms from 2D CT slices, and generate parallel- and cone-beam SV sinograms from 3D CT volumes.
Each SV sinogram is generated at different projection views (45, 60, and 90).
Following the self-supervised settings, APRF is trained on a test SV sinogram itself, while the raw CT images are only used as ground truths for evaluation.
Note that the parallel, fan and cone X-ray beam SVCT are considered as three independent reconstruction tasks, and thus we solely conduct the training and test processes of each X-ray beam.
Evaluation Metrics.
We employ three commonly-used objective image quality metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM) <cit.>, and LPIPS <cit.> to evaluate the performance of the compared methods for 2D and 3D SVCT reconstruction.
PSNR is based on pixel-to-pixel distance while SSIM utilizes the mean and variance of images to measure structural similarity.
Unlike the above conventional metrics, LPIPS is an objective perceptual similarity metric based on deep learning.
In our experiments, PSNR and SSIM are re-implemented by scikit-image <cit.> packages, while we use the default settings of LPIPS for evaluations.
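For reference, the three metrics can be computed along the following lines; this is an illustrative snippet rather than the exact evaluation script, and the data range and the LPIPS backbone (AlexNet, the package default) are assumptions.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(pred, gt):
    """pred, gt: 2D float arrays assumed to be normalized to [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0)
    # LPIPS expects (N, 3, H, W) tensors scaled to [-1, 1].
    to_t = lambda x: torch.from_numpy(x).float()[None, None].repeat(1, 3, 1, 1) * 2 - 1
    lp = lpips.LPIPS(net="alex")(to_t(pred), to_t(gt)).item()
    return psnr, ssim, lp

pred = np.random.rand(512, 512)
gt = np.random.rand(512, 512)
print(evaluate_slice(pred, gt))
```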
Implementation Details.
Our method is implemented on top of <cit.>, a Pytorch <cit.> re-implementation of NeRF <cit.>.
Meanwhile, our experiments are also based on Pytorch <cit.> framework, and run on a single NVIDIA RTX A6000 GPU.
The length ρ of the line segment ℓ(O_t,ρ) is set to 2/L+P, which means it depends on the number of projection views (i.e., long in SV sinograms but short in DV ones).
For training, we sample N_train=65 points within a line segment ℓ(O_t,ρ) and then feed the sampling points into the multi-layer perceptron (MLP) F_Θ.
We employ Adam <cit.> as the optimizer with a weight decay 1e^-7 and a batch size 2048.
The maximum iteration is 40000 for all CT images and the learning rate is annealed logarithmically from 3e^-3 to 2e^-5.
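One plausible reading of "annealed logarithmically" is a geometric interpolation of the learning rate between the start and end values, sketched below; the exact schedule used by the authors may differ.

```python
import math

def log_annealed_lr(step, total_steps=40000, lr_start=3e-3, lr_end=2e-5):
    """Interpolate the exponent of the learning rate linearly between the endpoints (an assumption)."""
    t = min(step, total_steps) / total_steps
    return math.exp((1 - t) * math.log(lr_start) + t * math.log(lr_end))

print(log_annealed_lr(0), log_annealed_lr(40000))  # 0.003 2e-05
```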
For test, we first generate DV sinograms by uniformly sampling 720 partitions within [0, π], and then apply FBP <cit.> re-implemented by ODL <cit.> to reconstruct the CT images.
The number of sampling points is set to N_test=(N_train - 1)η/720 + 1, where η is the number of projection views preset in training.
It means the number of test sampling points is 16∼8 times smaller than in training for the 45∼90-view sinograms, reducing considerable computational costs.
Runtime & Parameters.
The average training time of the proposed APRF for 2D SVCT reconstruction is about 5 mins while training for reconstructing 3D SVCT demands about 15 mins.
Since the number of sampling points in the test stage is much smaller than in training, the inference time for a single 2D SV sinogram with 800×512 size is only about 1.4 secs, while synthesizing a 3D SV sinogram with 400×400×80 size requires about 40 secs.
It is worth noting that APRF is a lightweight model, which only contains 0.55M parameters.
§.§ Comparison with State-of-the-art Methods
In this subsection, we first report the quantitative and visual comparisons between the proposed APRF and the 6 above-mentioned state-of-the-art methods.
Subsequently, we summarise and analyze the experimental results.
Quantitative Comparison.
As reported in Tables <ref> and <ref>, the proposed APRF favorably surpasses all the competitors for 2D and 3D SVCT reconstruction, yielding consistent preferable performance under the various number of projection views.
The outperformance suggests APRF achieves better sample efficiency than state-of-the-art methods, obtaining higher reconstruction quality under the same number of projection views.
Especially, compared to the exhibited methods, the performance of APRF grows significantly as the number of projection views increases.
Moreover, the experimental results also demonstrate that APRF can handle various X-ray beams, outperforming the other methods for parallel-, fan- and cone-beam SVCT reconstruction.
Visual Comparison.
We visualize the results of APRF and other competitors for 2D and 3D SVCT reconstruction in Figures <ref> and <ref>, respectively.
As shown, the difference maps of GRFF <cit.> and NeRP <cit.> contain conspicuous artifacts at object boundaries, suggesting GRFF <cit.> and NeRP <cit.> generate blurry contents in their results.
In contrast, though CoIL <cit.> and SCOPE <cit.> preserve more details, they also produce severe artifacts in the inner regions of objects.
Compared to the exhibited methods, the proposed APRF achieves better visual verisimilitude to the GT images, producing more accurate details and fewer artifacts.
§.§ Ablation Study & Parametric Sensitivity Analysis
In this subsection, we conduct comprehensive experiments to prove the correctness of the model design.
We first carry out an ablation study to investigate the effectiveness of the proposed modules.
Subsequently, we evaluate the APRF's parametric sensitivity under different parameter settings.
For fair comparisons, each variant is trained with the same experimental settings, including the same positional encoding in Eq. <ref>, loss in Eq. <ref>, and the maximum iterations.
APRF's Ablation Variants.
We conduct a thorough evaluation against ablation variants of the proposed APRF with each module: LSS, CLI, cnt. and dec., which indicate the line-segment sampling, center-based line integral, adding the center O_t into input vector, and decoupling the density and intensity features in MLP, respectively.
The baseline model is directly feeding the points to synthesize the corresponding sinogram pixels.
For fair comparisons, the MLP of each ablation variant has a similar parameter size (±0.02M).
As reported in Table <ref>, our baseline model (row 1) is conspicuously improved by the proposed LSS (row 2), outperforming most state-of-the-art methods at 60 projection views.
Employing the proposed CLI instead of the line integral introduced in Eq. <ref> can also significantly enhance the reconstruction performance (row 3).
Besides, cnt. and dec. (rows 4 and 5) further improve the reconstruction quality under similar computational costs, demonstrating the effectiveness of our modifications to the input vectors and the MLP architecture.
APRF under Different Parameteric Settings.
To analyze the parameteric sensitivity of the proposed APRF, we evaluate its performance under different parameteric settings: “ρ=1/L+P”, “ρ=4/L+P”, and “N_test=N_train”, where “ρ=1/L+P” and “ρ=4/L+P” denote the different length of line-segment regions ℓ(O_t, ρ), while “N_test=N_train” indicates using the same number of sampling points as N_train in test stage.
The default setting is introduced in Section <ref>, where ρ=2/L+P and N_test=(N_train - 1)η/720 + 1.
Since the default length is just enough to cover the whole coordinate-based representation field, the 0.5× length "ρ=1/L+P" may leave some unmodeled spaces in the field, while the 2× length "ρ=4/L+P" may produce blurry results.
Besides, the default number of N_test is 8∼16× smaller than N_train for 45∼90 projection views.
As reported in Table <ref>, the default version (row 1) achieves consistent preferable performance at various X-ray beams.
As expected, “ρ=1/L+P” and “ρ=4/L+P” (rows 2 and 3) are inferior to the default one, but they still achieve comparable performance to state-of-the-art methods on 60 projection views.
Though “N_test=N_train” can slightly improve the default performance (row 4), it takes 8∼16 multiple of computational costs to deal with 45∼90 projection views, which is not cost-efficient.
§.§ Analysis
We compare our APRF with 6 state-of-the-art methods in Section <ref>.
The performance on quantitative and visual comparisons confirms the correctness of our motivation and demonstrates the effectiveness of the proposed APRF.
It is also worth noting that APRF can deal with different projection settings, including different types of X-ray beams and the various number of projection views, which means APRF acquires high-quality CT images under various situations.
Moreover, we also thoroughly evaluate the performance of our APRF under different ablations and parametric settings in Section <ref>.
The ablation results demonstrate the effectiveness of our model designs, while the consistent preferable performance under different parametric settings suggests our APRF is a parameter-insensitive method.
Compared to state-of-the-art methods, APRF achieves superior performance and better robustness against various situations, which has broader application scenarios in practice.
§ CONCLUSION
In this paper, we observed that existing Implicit Neural Representation-based methods suffer from aliasing issues in the projection domains, leading to blurry results and severe artifacts.
Based on our findings, we propose a novel method – Anti-Aliasing Projection Representation Field (APRF), which aims to alleviate the aliasing errors in reconstructing CT images from sparse-view (SV) sinograms.
Specifically, APRF first employs line-segment sampling to estimate the distribution of projections within a line segment.
Then, APRF synthesizes the corresponding sinogram values using the center-based line integral.
After training APRF on a single SV sinogram itself, the internal regions between adjacent projection views can be modeled by spatial constraints.
As a result, APRF can synthesize the corresponding dense-view sinograms with consistent continuity and yield high-quality CT images via the re-projection techniques.
Comprehensive experiments on CT images demonstrate that APRF surpasses state-of-the-art methods for sparse-view reconstruction, producing better visual verisimilitude and fewer artifacts.
IEEEtran
|
http://arxiv.org/abs/2307.07422v1 | 20230708172155 | Can LLMs be Good Financial Advisors?: An Initial Study in Personal Decision Making for Optimized Outcomes | [
"Kausik Lakkaraju",
"Sai Krishna Revanth Vuruma",
"Vishal Pallagani",
"Bharath Muppasani",
"Biplav Srivastava"
] | cs.CL | [
"cs.CL"
] |
Explicit a posteriori error representation for variational problems and application to TV-minimization
[
August 12, 2023
========================================================================================================
Increasingly powerful Large Language Model (LLM) based chatbots, like ChatGPT and Bard, are becoming available to users that have the potential to revolutionize the quality of decision-making achieved by the public. In this context, we set out to investigate how such systems perform in the personal finance domain, where financial inclusion has been an overarching stated aim of banks for decades.
We asked 13 questions representing banking products in personal finance: bank accounts, credit cards, and certificates of deposit and their inter-product interactions, as well as decisions related to high-value purchases, payment of bank dues, and investment advice, posed in different dialects and languages (English, African American Vernacular English, and Telugu). We find that although the outputs of the chatbots are fluent and plausible, there are still critical gaps in providing accurate and reliable financial information using LLM-based chatbots.
§ INTRODUCTION
Consider a freshman that has just started making personal financial decisions. They open a bank account to save up money and get their first credit card. They are given some seed money by their family and they also start earning by working on campus.
The student is encouraged by their support system to start thinking about saving into products like Certificate of Deposits (CDs) that earn higher interest. As the student makes a series of decisions in their academic and subsequent professional life, they need to make sound financial decisions and may look for resources online to assist them. An optimal decision needs to consider how the banking products interact with each other along with the changing needs of the student.
For users like this student, increasingly powerful LLM-based chatbots that have the potential to revolutionize the quality of
decision-making for personal finance are becoming available. LLMs have demonstrated tremendous potential across diverse domains <cit.>, such as natural language processing <cit.> and protein structure <cit.>, and have been claimed to show sparks of artificial general intelligence <cit.>. These models have been implemented in several applications, ranging from mental health assistants <cit.> to financial advisement <cit.>. In the finance domain, LLMs have been used to develop applications such as fraud detection, risk management, and financial forecasting <cit.>. They have been used to analyze financial data, predict stock prices, and generate automated reports. However, with the advent of recent models such as OpenAI's ChatGPT, Google's Bard, and BloombergGPT <cit.>, a comparative chatbot study is needed to evaluate their ability to be financial advisors. In this paper, we present an initial study of ChatGPT and Bard in providing personal decision-making for optimized outcomes.
It is widely known that LLMs based systems have unique limitations.
For example, they may struggle with common-sense reasoning tasks <cit.>, encounter challenges when handling symbols <cit.>, and are susceptible to hallucinations <cit.>.
With this work, we make the following contributions:
* identify a personal financial planning scenario involving a series of tasks (plans) and optimization of decisions.
* show how leading LLM-based chatbots perform in them and analyze their behavior.
* lay out challenges that future chatbots in this area should overcome to provide trusted financial recommendations.
We thus highlight the potential and limitations of current LLM-based systems - ChatGPT and Bard - in their role as financial advisors. We included all the queries posed and responses from both ChatGPT and Bard in our GitHub repository[https://github.com/ai4society/LLM-CaseStudies/tree/main/Finance] along with a few snapshots of the actual conversations.
§ PERSONAL FINANCE USE CASE
§.§ Setup: Tools and Procedure
§.§.§ Chatbots Tested
* ChatGPT: ChatGPT <cit.> is an LLM-based chatbot created by OpenAI that was trained on a large amount of text data from the internet, including books and articles. ChatGPT is capable of answering questions, generating text, and conversing with users in a natural way. It can also learn from users and adapt to new information.
* Bard: Bard <cit.> is an LLM-based chatbot created by Google that was trained on a large amount of text data and is capable of generating human-like text in response to user prompts and queries. Like ChatGPT, it is also capable of conversing with users about a wide variety of topics in a natural way and adapting to new information.
§.§.§ Product Interaction Categories
Product interaction refers to interaction between different products like Credit Card (CC), Certificate of Deposit (CD) and Account Balance (AB).
Each product has different quantitative properties. For example, credit card due, limit line and billing cycle are some of the properties that would provide credit card information (not private information) of the user. Different properties pertaining to these products are:
* Purchase Amount (PA): It is the amount spent by the user on purchase of a product.
* Billing Cycle (BC): It is the billing cycle of user's credit card.
* Due Amount (DA): The amount that is due on the user's credit card for the specified billing cycle.
* Credit Line (CL): The maximum amount that user could spend using their credit card. If the amount spent exceeds this value, the credit card company could charge additional interest.
* Cashback Percentage (CP): The % of amount which will be returned to the user in the form of cashback on buying furniture using their credit card.
* Account Balance (AB): The amount of cash present in user's personal bank account.
* Annual Percentage Rate (APR): The APR is charged if there is an outstanding due on the credit card after the due date. Some financial institutions choose to charge a late fee if the minimum due (MD) is not paid. The resulting interest charge is calculated by the formula: Daily Period Rate (DPR) x Billing Cycle (in days) x Average Daily Balance (ADB); a short illustrative calculation is given after this list.
* Certificate of Deposit Percentage (CDP): The % of interest accumulated on the cash deposited by the user in the form of CD.
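To make the APR item above concrete, the short example below applies the stated DPR x billing-cycle x ADB formula to purely hypothetical numbers; the 365-day conversion from APR to a daily rate is an assumption.

```python
def interest_charge(apr_percent, billing_cycle_days, average_daily_balance):
    """Interest charged for one billing cycle: Daily Period Rate (DPR) x days x Average Daily Balance (ADB)."""
    dpr = apr_percent / 100.0 / 365.0   # daily period rate derived from the annual rate (assumed convention)
    return dpr * billing_cycle_days * average_daily_balance

# Hypothetical user: 24% APR, 30-day billing cycle, $800 average daily balance carried over.
print(round(interest_charge(24, 30, 800), 2))  # ~15.78
```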
Based on different combinations of these products, we classified the queries into 4 categories. These four categories along with the queries posed under each category, the variables used in each query and the constraints the chatbot has to take into consideration to make a sound recommendation are shown in Table <ref>. In the CC category, we considered a different dialect of English called African American Vernacular English (AAVE) and Telugu, one of the well-known languages from India, to observe how the chatbots handle queries in a different language or dialect.
§.§ Findings
In this subsection, we present the findings from the interesting (and sometimes insightful) conversations we had with Bard and ChatGPT.
§.§.§ Differences Between the Chatbots
Table <ref> shows the differences that were identified between Bard and ChatGPT when queries listed out in Table <ref> were asked. We compare these models on various criteria related to their performance in answering queries. The criteria include accuracy, utilization of user information, personalized suggestions, use of visual aids, bias in recommendations, provision of multiple response drafts, learning from mistakes, and understanding of different dialects and languages.
§.§.§ Error Categories
We identified some limitations / errors in the responses generated by both the chatbots and classified them into the following categories:
* Lack of Personalized Recommendations: When the agent makes a generalized recommendation without using all the information provided by the user, we consider this as lack of personalized recommendation.
* Mathematical Errors: We consider errors like rounding errors, calculation errors, etc. as mathematical errors.
* Perceptual Errors: When the agent misinterprets information given by the user or makes assumptions on unknown data, we consider these as perceptual errors.
* Grammatical Errors: We consider typos, grammatical errors, etc. as grammatical errors (we encountered these errors only in Telugu text generated by ChatGPT).
* Lack of Visual Aids: When the agent doesn't use visual aids like tables, graphs, etc. in its response, we consider these as lack of visual aids.
Table <ref> shows the percentage of queries for which the chatbots exhibited each of these errors. We also list out the individual query identifiers. Qi denotes the query identifier as previously defined (and also shown in Table <ref>). ABi and ACi refer to the corresponding Bard and ChatGPT responses respectively. 'i' denotes the identifier (number). Figures <ref> and <ref> show the response generated by Bard and ChatGPT chatbots respectively. For this one query, Bard made use of a table (though it misinterpreted user information) and ChatGPT did not.
§ DISCUSSION AND CONCLUSION
The application of language models in the finance industry has witnessed a surge in recent times due to their ability to process vast volumes of unstructured data and extract valuable insights. This paper delves into the performance of two prominent language models, Bard and ChatGPT, within the finance domain.
We also find the following challenges in evaluating LLM-based systems for finance domains:
* C1: Changing nature of answers for the same question. How does one create reference test cases since the answers change over time?
* C2: Inability of the chatbots to do numeric reasoning
* C3: Presenting results with easy to follow graphics.
* C4: Support for languages used by customers from different population groups. We considered AAVE (African American Vernacular English) and Telugu, an Indian language spoken by nearly 100m people world-wide.
* C5: Evaluating the responses of users from a diverse set of backgrounds. We only considered college students in this study.
C1 can be mitigated by carefully cataloging questions and system answers by identifiers that account for changing behavior over time. For C2, integration with numeric solvers like Wolfram may help <cit.> although this makes the systems non-learnable over time. For C3, different data presentation strategies need to be tried. For C4, the LLM models or the chatbots need to be enhanced. For C5, more experiments are needed with inputs carefully modeling the characteristics of the different user groups. These are just preliminary challenges and we expect them to grow as more researchers will try LLM-based systems in complex and diverse application scenarios.
While our study only comprised thirteen queries, we meticulously selected them to cover various categories of credit card finance. However, there exists ample scope for more extensive testing of these chatbots by expanding the number of queries under each category or including additional categories like student loans and stock purchases. By doing so, we can gain a better understanding of the efficacy of language models in different financial domains and improve their functionality in real-world scenarios.
|
http://arxiv.org/abs/2307.03926v1 | 20230708075122 | Enhancing Room Security and Automating Class Attendance Using ID Cards | [
"Shravan Bhat",
"Nithin R",
"Pranav S"
] | cs.CR | [
"cs.CR",
"cs.HC",
"none",
"J.7"
] |
Enhancing Room Security and Automating Class Attendance Using ID Cards
Shravan Bhat – 171EE240, Nithin R – 171EC131, Pranav S - 171EC135
August 12, 2023
========================================================================
With the rapid advancements in technology, automation has emerged as the future of human endeavors. From simple tasks like attendance management to complex security systems, automation has the potential to revolutionize various aspects of our lives. This research paper explores the implementation of a method aimed at enhancing room security in hostels and automating class attendance using ID cards. In this study, we propose a system that utilizes the unique identity information stored in ID cards for various security and check-in tasks. By integrating RFID (Radio-Frequency Identification) reader technology, GSM modules, Node MCU, and Arduino, we create a comprehensive solution. The RFID reader scans the ID card, extracting the relevant information and verifying the user's identity. The data is then transmitted via the GSM module to a central database, ensuring real-time monitoring and security measures. Moreover, the system also enables the automation of class attendance. By utilizing the same ID cards, students can simply tap their cards on a reader placed in the classroom. This information is recorded automatically, eliminating the need for manual attendance taking and reducing errors and time consumption. This research project highlights the practical implementation of ID card technology to enhance room security in hostels and automate class attendance processes. By leveraging the power of automation, we aim to streamline administrative tasks, improve security measures, and optimize efficiency in educational institutions and other relevant settings.
ID card, RFID reader, GSM Module, Node MCU, Arduino
§ INTRODUCTION
Security and privacy are basic needs for any human being. India's population has been increasing exponentially since the 19th century, and hence the student intake of colleges has been increasing every year. Automation would help with routine tasks like taking attendance or making payments in a locality. Privacy and security are also concerns in many colleges: adding layers of security to rooms and safe boxes would prevent petty theft. The main motivation of this project is to establish an attendance system within our college campus and a cash-less payment system, and also to implement safer, key-less room locking systems in our university.
§ LITERATURE SURVEY
§.§ Survey of State of Art
Smart-card-based door lock systems, such as the NFC (Near Field Communication) cards used in hotel rooms, are currently available but are expensive and less secure. Using these can be very costly as they require complex hardware. Automated attendance systems that use fingerprints as the ID are also available, but implementing them on a large scale, such as a college, is difficult and would turn out to be rather expensive.
§.§ Features
* RFID card and RFID reader is included in the door lock system. The door unlocks only when the authorized card is scanned and corresponding pin in entered using the keypad provided.
* The locking and unlocking of the door latch is implemented using servo motors, stepper motors and gears.
* When a card is scanned an alert SMS is sent to the registered phone number and also an alert notification is generated in the app. When an authorized card is scanned without the user’s consent, the user can shut down the system by sending a message from his phone.
* The same RFID card can be used in classrooms as a check in attendance system
§ DETAILS OF IMPLEMENTATION
§.§ Components Used
* Sim900 GSM module
* Arduino Uno
* MFRC522 RFID reader and RFID cards
* Servo motors, stepper motors and gears
* 4*4 keypad
* Buzzer and power adaptor
* Node MCU
* LEDs and resistors
* I2C LCD display
§.§ Working
The smart ID card system is divided into 3 sub-systems: 1) Security System, 2) Payment System, and 3) Attendance System.
* Security System: The RFID reader communicates with the Arduino through the SPI protocol. The I2C
LCD communicates with the Arduino through the I2C protocol. The keypad is connected to the Arduino. The 4x4 keypad has 8 connections, but the last column of the keypad is not required since we only need numbers for the password. A 5V, 2A power adaptor is used to power the SIM900 module. Once the SIM900 module is powered, the power light lights up, and on pressing the power key, the status LED lights up. Then the phone is paired with the module.
GSM Module: GSM is a mobile communication modem; it stands for Global System for Mobile
Communication (GSM). It is the most widely used mobile communication system in the world. GSM is an open, digital cellular technology used for transmitting mobile voice and data services. A GSM module is used here since it can communicate with a mobile phone, and the data it receives can be processed and sent to the Arduino. I2C Protocol: I2C is a serial, two-wire interface protocol used to connect low-speed devices like microcontrollers, I/O interfaces and other similar peripherals in embedded systems.
* Payment System: The RFID reader communicates with the Node MCU through the SPI protocol. The Node MCU is connected to a web server where the
data is stored. When the RFID card is scanned and the PIN is entered, the balance amount is
displayed on the screen. Node MCU: This device is used instead of only an Arduino UNO because the Node MCU has a Wi-Fi module which can
be connected to the web server.
The ESP8266 can be controlled from a local Wi-Fi network or from the internet (after port forwarding). The ESP-01 module has GPIO pins that can be programmed to control a device or execute code through the internet. The module can be programmed using an Arduino through the serial pins (RX, TX).
* Attendance system: When the ID is scanned on the RFID reader, the student name that is stored in the RFID card is printed on the serial monitor. It is ensured that the same ID can't be registered twice by comparing it with the already registered IDs (a small sketch of this duplicate check is given after this list). An external app is used to store the output from the serial monitor, and the output can be saved onto the computer.
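The duplicate check described in the attendance item above is language-agnostic; the following Python sketch of the check-in logic is purely illustrative (the card IDs and names are hypothetical, and the deployed system runs on the Arduino/Node MCU rather than in Python).

```python
from datetime import datetime

registered_students = {"4A3F1B2C": "Student A", "7D9E0A11": "Student B"}  # card ID -> name (hypothetical)
attendance_log = {}

def check_in(card_id):
    """Record attendance once per card; ignore repeated scans of the same ID."""
    name = registered_students.get(card_id)
    if name is None:
        return "Unknown card"
    if card_id in attendance_log:
        return f"{name} already marked present"
    attendance_log[card_id] = (name, datetime.now().isoformat(timespec="seconds"))
    return f"{name} marked present"

print(check_in("4A3F1B2C"))  # Student A marked present
print(check_in("4A3F1B2C"))  # Student A already marked present
```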
§ RESULTS AND DISCUSSIONS
§.§ Security System
The door lock security system was successfully implemented. The door unlocks, by turning the servo motor, only when an authorized ID card is scanned on the RFID reader and the correct password is entered on the keypad. A message is then sent to the owner saying that the door is unlocked. After a few seconds the door locks back, turning the servo motor to its original position. When the owner is inside the room, he/she can use a switch present inside the room to unlock the door; after a few seconds the door again locks back, turning the servo motor to its original position. If a wrong ID card or wrong password is entered, the whole system locks down and an alarm is sounded using the buzzer, and a message is sent to the owner saying that there was an attempt to breach the security system. The security system fails to detect an intruder if an RFID card's ID is changed to the owner's ID. It will also fail if the owner is negligent and reveals the password to others.
§.§ Payment System
When an ID is scanned on the RFID reader, the value stored in the RFID card is sent to the database on the server via the Wi-Fi module over the internet, along with the date and time taken from the internet. This stored value can be changed by the vendor or shopkeeper to the new balance amount. The changed balance amount is then updated in the ID card through the ESP8266 Wi-Fi module. A drawback of this system is that the balance can be changed to a wrong value, giving a wrong balance.
§.§ Attendance system
The attendance system was successfully implemented. When a registered ID card is scanned on the RFID reader, the ID card number is sent to the database through the Node MCU Wi-Fi module. The database stores the student's name and ID number, and the list of students present can be retrieved from it. As a fail-safe for the above method, the RFID reader reads the ID number of the card and compares it with the student register; if the ID is present, it prints the student's name onto the serial monitor. An external app saves the logs of the serial monitor as text. This method would fail if some other student scans the card even when the owner is not present in the class, so the scanner must be monitored while the student is scanning on the RFID scanner.
§ ACKNOWLEDGMENT
With immense pleasure we present "Enhancing Room Security and Automating Class Attendance Using ID Cards", carried out as part of the curriculum of "Embedded Systems and Design" under the Department of Electronics and Communication Engineering, National Institute of Technology, Karnataka. We wish to thank all the people who gave us their unending support. We express our profound thanks to our professor, Dr. Ramesh Kini M., and all those who have indirectly guided and helped us in the preparation of this project.
|
http://arxiv.org/abs/2307.04550v2 | 20230710132923 | Gradient Surgery for One-shot Unlearning on Generative Model | [
"Seohui Bae",
"Seoyoon Kim",
"Hyemin Jung",
"Woohyung Lim"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
[
Gradient Surgery for One-shot Unlearning on Generative Model
equal*
Seohui Baecomp
Seoyoon Kimcomp
Hyemin Jungcomp
Woohyung Limcomp
compLG AI Research, Seoul, South Korea
Seohui [email protected]
Woohyung [email protected]
deep unlearning, generative model, privacy
0.3in
]
Recent regulation on the right to be forgotten has generated considerable interest in unlearning pre-trained machine learning models. To approximate the straightforward yet expensive approach of retraining from scratch, recent machine unlearning methods unlearn a sample by updating the weights to remove its influence on the weight parameters. In this paper, we introduce a simple yet effective approach to remove the influence of data on a deep generative model. Inspired by works in multi-task learning, we propose to manipulate gradients to regularize the interplay of influence among samples by projecting gradients onto the normal plane of the gradients to be retained. Our method is agnostic to the statistics of the removal samples, outperforms existing baselines, and provides, for the first time, a theoretical analysis for unlearning a generative model.
§ INTRODUCTION
Suppose a user wants to remove his/her face image from everywhere in your facial image generation application, including the database and the generative model trained on it. Is the expensive retrain-from-scratch the only solution for this kind of request? As the use of personal data for training the machine learning models behind online services has increased, meeting individual demands for privacy and the rapid changes in legislation such as the General Data Protection Regulation (GDPR) has become inevitable for ML service providers. Such a `Right-To-Be-Forgotten (RTBF)' request may arrive once or in series, may scale from a single feature to a number of tasks, and may query a single instance or multiple ones.
A straightforward solution for unlearning a single data point is to retrain the generative model from scratch without the data of interest. This approach, however, is intractable in practice considering the size and complexity of the latest generative models <cit.> and the continual stream of removal requests.
Unlearning therefore aims to approximate this straightforward yet expensive retrain-from-scratch solution in a time- and computation-efficient manner. First-order, data-influence-based approximate unlearning is currently considered the state-of-the-art approach to unlearning machine learning models in general. Grounded in the notion of data influence <cit.>, a simple one-step Newton update certifies a sufficiently small bound on the gap to retrain-from-scratch <cit.>. Nonetheless, these relaxations are infeasible for non-convex deep neural networks (including generative models), where the gap is not certifiably bounded and computing the inverse of the Hessian is intractable. Several recent works have also confirmed that these relaxed alternatives perform poorly on deep neural networks <cit.>, and their behavior on generative models has not been explored yet.
Contribution In this work, we propose a novel one-shot unlearning method for removing samples from a pre-trained deep generative model. Relaxing the definition of the influence function on parameters used in machine unlearning <cit.>, we focus on the influence of a single data point on the test loss of the others and propose a simple and cost-effective method that minimizes this inter-dependent influence to approximate retrain-from-scratch. We summarize our contributions as follows:
* We propose to annul the influence of samples on generation with a simple gradient manipulation.
* Our method is agnostic to removal statistics and thus applies to any removal request, such as a single data point, a class, or a data feature.
* It is grounded in a theoretical analysis bridging standard machine unlearning to generative models.
§ GRADIENT SURGERY FOR ONE-SHOT DATA REMOVALS ON GENERATIVE MODEL
Notations Let D={x_i}_i=1^N⊆𝒳 be the training data where x_i ∈𝒳 is input. Let D_f ⊆ D be a subset of training data that is to be forgotten (i.e. forget set) and D_r = D ∖ D_f be remaining training data of which information we want to retain. Recall that the goal of unlearning is to approximate the deep generative model retrained from scratch with only D_r, which we denote as f_θ^* parameterized by θ^*. Then, our goal is to unlearn D_f ⊆ D from a converged pre-trained generator f_θ̂ by updating the parameter θ̂→θ^-, where θ^- represents the updated parameters obtained after unlearning.
Proposed method
Given a generative model that models the training data distribution p(D), a successfully unlearned model is one that approximates p(D_r), the distribution of D_r, as if it had never seen D_f. The only case in which the unlearned model should generate samples similar to x∈ D_f is when p(D_f) and p(D_r) happen to be very close from the beginning. Under this goal, a straightforward objective, given the pre-trained model approximating p(D), is to make the generated output deviate from p(D_f), which can be formulated as follows:
max_θ𝔼_x∼ D_fℒ(θ, x)
where ℒ denotes the training loss (e.g., the reconstruction loss).
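For reference, the most direct way to pursue this objective, before any correction for its effect on the remaining data, is plain gradient ascent on the forget-set loss. The following is a minimal PyTorch sketch; recon_loss is a hypothetical helper returning the scalar training loss on a batch.

import torch

def gradient_ascent_step(model, recon_loss, forget_batch, lr=5e-4):
    # One ascent step on the forget-set loss, i.e. the naive objective above.
    loss_f = recon_loss(model, forget_batch)   # hypothetical helper returning a scalar loss
    model.zero_grad()
    loss_f.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p += lr * p.grad               # ascend: push the model away from D_f
    return float(loss_f)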
Meanwhile, suppose we could quantify the influence of a single data point on the weight parameters and the generation result. Then, unlearning this data point would amount to simply updating the weight parameters in the direction that removes its influence. Toward this, we start by defining the data influence on the weight parameters and approximating it in a feasible form, as introduced in <cit.>:
Given upweighting z by some small ϵ and the new parameters θ̂_ϵ,z≜argmin_θ∈Θ1/n∑_i=1^nℒ(z_i, θ) + ϵℒ(z,θ), the influence of upweighting z on the parameter θ̂ is given by
ℐ_up,param(z) ≜dθ̂_ϵ,z/dϵ|_ϵ=0 = -H_θ̂^-1∇_θℒ(z,θ̂)
where H_θ̂ = 1/n∑_i=1^n∇_θ^2 L(z_i, θ̂) is the Hessian and is positive definite (PD) by assumption.
By forming a quadratic approximation to the empirical risk around θ̂, the data influence on the weight parameters is formulated as a single Newton step (see details in the Appendix of <cit.>), which is consistent with the objective in Equation <ref>. Although numerous works have verified that this influence-based approach works well for shallow, discriminative models <cit.>, we cannot apply it directly to our generative model because the computation is intractable and the bounds are not guaranteed.
To address this problem, we re-purpose our objective to minimize the data influence on generation. Grounded in recent works <cit.>, we find that we can achieve this on a generative model simply by diminishing the gradient conflict, as follows:
Reducing the influence of samples z∈ D_f in training data with regard to test loss is formulated as:
I^'_up,loss(D_f,z') → 0,
which is equivalent to
∇_θℒ(z',θ̂)^T ∑_z ∈ D_f∇_θℒ(z,θ̂) → 0
where z'∈ D_r in our scenario.
Informally, we can achieve this by alleviating the conflict between the two gradients ∇_θℒ(z',θ̂) and ∇_θℒ(z,θ̂), i.e., by diminishing their inner product. This is reminiscent of classic gradient manipulation techniques for conflicting gradients in the multi-task learning scenario <cit.>. Specifically, we project the gradient of a forget sample x_f ∈ D_f onto the normal plane of the gradient of the retain samples x_r ∈ D_r so that ℐ_up,loss(x_f, x_r)=0. This orthogonal projection turns the original gradient of the forget sample, 𝐠_f=∇ℒ_f, into an update direction that unlearns x_f ∈ D_f without conflicting with the retain samples: g̃_f = g_f - (g_f ·g_r/‖g_r‖^2) g_r. The unlearned model θ^- is then obtained by the gradient update θ^- = θ̂ - ηg̃_f.
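A minimal PyTorch sketch of this one-shot update is given below. It assumes a hypothetical recon_loss(model, batch) helper returning the scalar reconstruction loss, and the sign convention for the forget gradient (ascending the forget-set loss, per the objective above) is our assumption rather than a detail stated in the text.

import torch

def flat_grad(loss, params):
    # Gradient of `loss` w.r.t. `params`, flattened into a single vector.
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                      for g, p in zip(grads, params)])

def one_shot_unlearn(model, recon_loss, forget_batch, retain_batch, lr=5e-4):
    params = [p for p in model.parameters() if p.requires_grad]
    # Forget direction: ascend the reconstruction loss on D_f, written here as
    # descending its negative (sign convention assumed for illustration).
    g_f = flat_grad(-recon_loss(model, forget_batch), params)
    g_r = flat_grad(recon_loss(model, retain_batch), params)
    # Project g_f onto the normal plane of g_r so the two gradients no longer conflict.
    g_f = g_f - (g_f @ g_r) / (g_r @ g_r + 1e-12) * g_r
    # Single ("one-shot") parameter update.
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p -= lr * g_f[offset:offset + n].view_as(p)
            offset += n
    return model

After the projection, the inner product between the forget and retain gradients is zero by construction, which is exactly the vanishing-inner-product condition of Theorem <ref>.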
§ EXPERIMENTS
We verify our idea under various data removal requests. Note that measuring and evaluating how well a generative model has unlearned a single data point is non-trivial. To the best of our knowledge, even distinguishing a deep generative model trained with a particular data point from one trained without it, simply by looking at the output of training (e.g., generated images or weights), is intractable <cit.>. To make the problem verifiable, in this work we experiment with unlearning a group of samples sharing similar statistics in the training data, either belonging to a particular class or having a distinctive semantic feature. In this case, one can evaluate the generated output by measuring the number of samples exhibiting that class or semantic feature; a successfully unlearned model would generate nearly zero samples with these features. Although we are not able to cover unlearning a single data point in this work, our method can in principle approximate the generative model trained without that data point, and we look forward to exploring a feasible evaluation of this scenario in the near future.
§.§ Experimental Setup
Scenarios
We unlearn either a whole class or a notable feature shared by a group of samples. In the experiments, we use a subset of MNIST <cit.> containing samples of classes 1, 3, and 8, and 64x64 CelebA <cit.>, to train and unlearn a vanilla VAE <cit.>.
Evaluation
We evaluate our method under three criteria: privacy guarantee, utility guarantee, and cost. The privacy guarantee is measured by the feature ratio (fratio), the ratio of generated images that include the target feature (see details in Appendix <ref>). The utility guarantee is measured by the Frechet Inception Distance (FID), a widely used measure of generation quality. The cost is the total execution time (Time), which should be shorter than retrain-from-scratch. A successfully unlearned model would show a near-zero feature ratio, the same IS and FID scores as the initial pre-trained model (BEFORE), and the lowest possible execution time. Given the legal impact and the goal of unlearning, guaranteeing privacy is prioritized the highest.
§.§ Result on Pre-trained Generative Model
Quantitative Result We run the proposed method on the pre-trained VAE to remove the unlearning group D_f (class 1 for MNIST and male for CelebA, respectively) and evaluate it as follows (Table <ref>). Starting from the pre-trained model (BEFORE), our method unlearns the target D_f with a large decrease in fratio of 65% to 70%, while keeping the time cost of unlearning at ≤ 5% of retrain-from-scratch.
All the while, our method maintains decent utility performance. Compared with the baselines, our method performs best in privacy, the prioritized metric, across all experiments. Note that the feature ratio of gradient ascent in the CelebA experiment (feature ratio, CelebA, Grad.Ascnt.) is omitted because the generated samples turned out to be noisy images, so the output of the pre-trained classifier cannot be trusted. Also note that although the baselines show better performance in terms of utility and cost, they do not achieve near-best scores on the privacy guarantee.
Qualitative Result
We further validate our method by comparing the generated images before and after the proposed unlearning algorithm. As shown in Figure <ref>, no class 1 samples are observed after unlearning class 1, meaning that our method successfully meets the request of unlearning class 1; this aligns with the quantitative result in Table <ref>, where the ratio of class 1 samples is reduced from 34.3% to ≤ 15%. The quality of image generation remains fair, with 3 and 8 clearly distinguishable by eye, although some examples show minor damaged features, which is consistent with the decrease in IS and the increase in FID score. Note that the ultimate goal of unlearning is to meet the privacy guarantee while preserving the utility of pre-training, which remains our next direction of future work.
§ CONCLUSION
In this work, we introduce a novel, theoretically grounded unlearning method for generative models. Inspired by the influence of one sample on the others, we propose a simple and effective gradient surgery to unlearn a given set of samples from a pre-trained generative model, outperforming the existing baselines. Although we do not experiment with unlearning a single data point, owing to the lack of an established evaluation of the uniqueness of a particular data point, we leave it as future work and emphasize that our method can also be applied to this scenario. Furthermore, it would be interesting to verify our ideas on various privacy-sensitive datasets. Nonetheless, our work demonstrates the possibility of unlearning a pre-trained generative model, laying the groundwork for privacy handling in generative AI.
§ EXPERIMENTAL DETAILS
§.§ Setup
Architecture
In this experiment, we use a vanilla VAE <cit.> whose encoder is a stack of either linear layers (for the MNIST experiment) or convolutional layers (for the CelebA experiment). Although we verify our results on a VAE, our method can be applied to any variational-inference-based generative model, such as <cit.>.
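For concreteness, a minimal sketch of such a model and of the per-batch loss ℒ used throughout (the standard VAE objective: reconstruction plus KL) is shown below; the layer sizes are illustrative and not the exact architecture used in the experiments.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearVAE(nn.Module):
    # Minimal fully connected VAE for 28x28 inputs; layer sizes are illustrative only.
    def __init__(self, in_dim=784, hidden=400, latent=20):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(model, x):
    # Per-batch training loss: reconstruction term plus KL divergence (negative ELBO).
    recon, mu, logvar = model(x)
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl) / x.size(0)

A helper such as vae_loss could play the role of the recon_loss function assumed in the earlier sketches.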
Baseline
We compare our experimental results with the following two baselines. One is the recently published, first and only unlearning work on generative models <cit.> (FU), which unlearns by feeding a surrogate model with projected latent vectors. We reproduce FU and follow the hyperparameter details (e.g., 200 unlearning epochs for MNIST) from the original paper. The other is a straightforward baseline (Grad.Ascnt.), which updates the parameters in the direction that maximizes the reconstruction loss on the forget set, i.e., Objective <ref> without gradient surgery. Note that we keep the same step size for all three methods (including ours) for a fair comparison.
Training details
We use the Adam optimizer with a learning rate of 5e-04 for the MNIST experiment and 1e-05 for the CelebA experiment. We update the parameters only once (1 epoch) for removals, hence the term 'one-shot unlearning' in our title. All experiments are repeated three times.
§.§ How to Evaluate Feature Ratio
We first prepare a classification model that separates images having the target feature from the rest. To obtain a highly accurate classifier, we search for the best classifier showing over 95% accuracy. In the experiments, we use AllCNN <cit.> to classify class 1 against the other classes on MNIST with classes 1, 3, 8 (MNIST381), and ResNet18 <cit.> to classify male against female on CelebA. After unlearning, we generate 10000 samples from the generator and feed them to the pre-trained classifier. Assuming the classifier is accurate, the fraction of samples predicted to have the target feature estimates the probability that the generated output contains the feature to be unlearned.
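A minimal sketch of this evaluation loop is shown below; sample_images (a generator sampling helper) and the classifier clf are hypothetical handles, and the class index of the target feature is assumed to be 1.

import torch

@torch.no_grad()
def feature_ratio(sample_images, clf, n_samples=10000, batch=256, target_class=1):
    # Fraction of generated images that the pre-trained classifier assigns to the
    # target (to-be-forgotten) feature; near zero indicates successful unlearning.
    hits, seen = 0, 0
    while seen < n_samples:
        b = min(batch, n_samples - seen)
        imgs = sample_images(b)              # hypothetical helper: decode b latent samples
        pred = clf(imgs).argmax(dim=1)       # index `target_class` = feature of interest
        hits += (pred == target_class).sum().item()
        seen += b
    return hits / seen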
§ DEFINITIONS AND PROOF FOR THEORETICAL ANALYSIS
In <cit.> and <cit.>, the influence of a sample z on the weight parameters is defined as the product of its gradient and the inverse Hessian. Moreover, the influence of a sample z on the test loss of a sample z' is defined as follows:
(Equation 2 from <cit.>)
Suppose we up-weight a converged parameter θ̂ by a small ϵ, which gives the new parameters θ̂_ϵ,z≜argmin_θ∈Θ1/n∑_i=1^nℒ(z_i, θ) + ϵℒ(z,θ). The influence of up-weighting z on the loss at an arbitrary point z' has a closed-form expression:
ℐ_up,loss(z, z') ≜dℒ(z',θ̂_ϵ,z)/dϵ|_ϵ=0
= ∇_θℒ(z',θ̂)^⊤ H_θ̂^-1∇_θℒ(z,θ̂)
where H_θ̂≜1/n∑_i=1^n∇_θ^2ℒ(z_i, θ̂) is the Hessian and is positive definite (PD) by the assumption that the loss ℒ is convex and Lipschitz continuous.
(Theorem <ref> from Section <ref>)
Reducing the influence of samples z∈ D_f in training data with regard to test loss is formulated as:
I^'_up,loss(D_f,z') → 0,
which is equivalent to
∇_θℒ(z',θ̂)^T ∑_z ∈ D_f∇_θℒ(z,θ̂) → 0
where z'∈ D_r in our scenario.
The second-order influence of D_f, ℐ^(2)_up,param, is formulated as the sum of the first-order influence ℐ^(1)_up,param and ℐ^'_up,param, which captures the dependency of the 𝒪(ϵ^2) terms on the group influence and is defined as follows:
ℐ^'_up,param(D_f) = 𝒜 H_θ̂^-1∑_z ∈ D_f∇_θℒ(z,θ̂)
where 𝒜 = (p/(1-p))(I-(∇^2 L(θ^*))^-1 (1/|𝒰|)∑_z∈𝒰∇^2 ℓ(h_θ^*(z))) (from <cit.>).
The influence of samples in D_f on the test loss of z' can be formulated as:
ℐ_up,loss(D_f,z') = ∇_θℒ(z',θ̂)^T ℐ_up,param(D_f)
which can be equivalently applied to all orders of ℐ including ℐ^(1), ℐ^(2), ℐ^'.
Then, ℐ^'_up, loss(D_f,z') = 0 is now reduced to
∇_θℒ(z',θ̂)^T 𝒜 H_θ̂^-1∑_z ∈ D_f∇_θℒ(z,θ̂) = 0
which matches the right-hand side of Theorem <ref> once the fixed matrices 𝒜 and H_θ̂^-1 are dropped.
|